2 camera anytime soon? Any alternatives?

Well, that eliminates that. So it is either SGP or someone maybe can write an app that coordinates two SGP instances. Sigh.

I think the demand for SW that does this is being seriously underestimated. It reminds me of years back when several MaxIm users (including me) were trying to get them to add graphing to their guiding. Years went by with no interest until an imaging buddy wrote a plugin that did just that. It was extremely popular, and not long afterward graphing showed up as a native feature.

I did look at this, but the API doesn’t support that kind of functionality (yet). The only other way I could think of would have been via events, but unfortunately events cannot be triggered before/after each image, only for lines in a sequence, which rules that idea out. I would be happy to spend some time on this, but at the moment I can’t see a way to make it work.

Thanks! Yes, I think that a few changes to the API would indeed let others do what most of us two-camera supporters are hoping for.

I also wonder if the developers may think that we are asking for more than we (or at least I) really are. Here is what I envision:

  1. Master and slave systems/instances. There could be more than one slave system, although more than one would probably be rare.

  2. Master system would work exactly as SGP/PHD presently do with regard to all functions - the only extra thing the master would need to do is tell the slave systems when it starts a main exposure (as opposed to a focus or plate solve exposure) and how long that exposure will be.

  3. Slave system would only do three things:
    a) Change filters as required
    b) Take, download, and save its images as set
    c) Take focus exposures and perform focus as set

Of the above, only slave exposures (main and focus) would need to avoid mount movement (dithers, plate solves/micro slews, target changes, and flips). Downloads and filter changes are unaffected by mount movements, of course. The simple way to do that is for the slave systems to use somewhat shorter exposures than the master system, and to start them only if the master system is currently taking its own (longer) main exposure (as opposed to a focus or plate solve exposure). It would be up to the user to ensure that the slave exposures were always a bit shorter (maybe 10-20 percent) than the master’s so that they would “fit” within the master exposure. I do not see this as a big deal since, most of the time, the master and slave systems will need different exposures anyway due to differences in filters, camera sensitivity, and f-ratio.
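
The timing rule above - slave subs must fit inside the master's current main exposure - could be sketched roughly like this. All function names here are hypothetical illustrations; SGP exposes no such API today:

```python
# Hedged sketch of the slave-side timing rule described above.
# These helpers are hypothetical placeholders, not a real SGP API.

SAFETY_MARGIN = 0.15  # keep slave exposures ~15% shorter than the master's

def pick_slave_exposure(master_exposure_s):
    """User guideline from the post: slave subs 10-20% shorter than the master's."""
    return master_exposure_s * (1.0 - SAFETY_MARGIN)

def slave_may_start(master_exposure_s, master_elapsed_s, slave_exposure_s):
    """Return True if a slave exposure would finish before the master's
    current main exposure does, so no dither/slew can interrupt it."""
    remaining = master_exposure_s - master_elapsed_s
    return slave_exposure_s <= remaining
```

For a 600 s master sub this rule picks a 510 s slave sub and only allows it to start while at least 510 s of the master exposure remain.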

It is certainly not possible to double one’s output of images with two cameras, since too many things have to happen. It is probably possible to get maybe a 150% or greater increase in images compared to one camera (assuming two slaves; more if there are three or more). That is still a lot!

I, for one, would be quite happy with nothing more than the above - at least to begin with and quite possibly permanently - it is more than any other SW offers!

NOTE: One thing I would also suggest is that the available dither magnitudes be expanded to a larger range, since differing focal lengths between master and slave systems would mean that the available dither scales for the master system might not be enough for the slave, or vice-versa. I have suggested this already in another thread, and it would benefit more than just multi-camera systems.

I think the number of people wanting multi camera operation is very small. Probably no more than the half dozen who have posted about it.

Part of the problem is that the people asking for this are grossly underestimating the complexity of this. For example CCDMan’s description of what is needed dismisses the major functionality - synchronising the multiple image acquisition processes - with virtually no mention.

This synchronisation will be difficult to implement. It will need something like a state machine where there’s a central state and each process has to monitor the state, send requests for a state change, then wait for the state to be appropriate to what is needed. Adding all this will need a major rewrite of the application; its effect will be very pervasive. If it goes wrong - and it will - then it’s quite possible that everyone will be affected, not just those using multiple cameras.
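
To make the shape of that concrete, here is a toy illustration of the kind of central state machine being described: each acquisition chain blocks until its requested transition is allowed. This is purely illustrative, not how SGP is actually structured:

```python
# Toy sketch of central-state coordination between acquisition chains.
# Illustrative only; not SGP code.

from enum import Enum, auto
import threading

class RigState(Enum):
    IDLE = auto()
    EXPOSING = auto()   # other chains may also expose
    DITHERING = auto()  # no chain may expose or focus
    FOCUSING = auto()

class Coordinator:
    def __init__(self):
        self._state = RigState.IDLE
        self._cond = threading.Condition()

    def request(self, new_state):
        """Block until the requested transition is allowed, then apply it."""
        with self._cond:
            while not self._allowed(new_state):
                self._cond.wait()
            self._state = new_state
            self._cond.notify_all()

    def _allowed(self, new_state):
        # A dither must never start mid-exposure or mid-focus;
        # nothing else may start while a dither is in progress.
        if new_state is RigState.DITHERING:
            return self._state is RigState.IDLE
        return self._state is not RigState.DITHERING
```

Even this toy shows the pervasiveness problem: every exposure, focus run, and dither in every chain has to be wrapped in `request()` calls, which is exactly the kind of rewrite described above.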

So you’re saying that the third most requested feature for SGP polled with only 6 people wanting it? Sounds like a walk-back now.

It doesn’t matter how many times you ask for it, you are still only one person.

Not what I asked. Are there really only six people who asked for this feature, according to the poll? When I look at the poll, it shows 25% of respondents asked for this. I don’t think I’ve asked for this feature more than others have.

If you don’t plan on offering this feature for whatever reason, then just say so and be done with it.

That sounds right to me. From a strictly user POV, I just want to know if I can still look forward to this or whether it is a dead issue. This affects planning as well as equipment decisions so is important information.

Is the feature dead or is it alive? Just tell us.

As far as what the real world demand is, right now that is nothing more than opinion. One can argue that forever with no certainty. Much like the guide graph in MaxIm that I mentioned above - one never really knows what demand is until someone actually has it available.

As for me, I will stop beating what increasingly does appear to be a dead horse.

@Chris:
In general I fully support your ‘less is more’ approach; there is no point in chasing down lots of difficult features if stability and usability suffer. I know a lot of work has already gone into v3 to stabilise SGP and to put it onto a more maintainable and viable footing. I think, though, that there could come a time when bigger features might be a possibility, and it’s obviously up to the developers to judge what makes sense in terms of cost/benefit. I think you are a bit harsh here, as there genuinely seems to be interest in this. I know the last feature poll might not be very scientific, but it did rank quite high on a list that included the votes of 140 people. Still not a massive number in real terms, but certainly more than you suggest. Also, if you look back at posts around this topic, there were some very encouraging messages from the developers themselves, which obviously got hopes up, and you can’t really fault people for following it up.

I also don’t think that it would necessarily have to be very sophisticated or difficult to implement if we are only looking at coordinated dithering (no coordination for focus, mount moves, centering, flips etc.) and a strict master/slave setup. The master just does its thing without regard to the slave instance, with the only exception that the dither command can be delayed (optionally, even) if the slave instance has an ongoing exposure. The slave instance would only have to be able to check whether the master instance is dithering or not (and maybe whether the master instance has stopped its sequence). Yes, there will be dropped subs, and you would need to think carefully about exposure times on the slave instance to avoid wasting dead time, but those things could be improved on slowly. A post in the last poll seems to indicate that the developers are thinking along similar lines:

https://forum.mainsequencesoftware.com/t/sgpro-feature-request-poll-vote-now/4061/21?u=mike
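
The delayed-dither handshake described above is small enough to sketch. The names `slave_is_exposing` and `do_dither` are hypothetical stand-ins for whatever status check and dither command an external coordinator could use; no such SGP API exists today:

```python
# Sketch of the master-side delayed-dither rule described above.
# slave_is_exposing and do_dither are hypothetical callables, not SGP APIs.

import time

def dither_when_safe(slave_is_exposing, do_dither,
                     poll_s=1.0, timeout_s=120.0):
    """Wait (up to timeout_s) for the slave to finish its current
    exposure before dithering; then dither regardless, so a stuck
    slave cannot stall the master forever."""
    waited = 0.0
    while slave_is_exposing() and waited < timeout_s:
        time.sleep(poll_s)
        waited += poll_s
    do_dither()
```

The timeout is the interesting design choice: it bounds how long the master will defer its dither, which is where the occasional dropped slave sub comes from.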

Having said all that (as you know from your own work), there would probably be some people (including myself) who would be happy to give up time for free to make this happen if it were possible to coordinate this externally. I haven’t looked at the ins and outs of it in detail, but a possible approach could be to implement ‘pre/post exposure’ and ‘pre/post dither’ events, which would allow an external app to coordinate the dithering. This way there would be no development or support burden for the developers, and the few of us badly wanting this would stop moaning about it :slight_smile:

@Jared & @Ken:
Would you consider implementing (or even just considering) some smaller changes and maybe some small cooperation (as and when time allows, entirely at your discretion) to allow me to create an external app to achieve coordinated dithering between two instances of SGP?

Thanks!

Mike


Add me, so 7 users :wink:
I have a side-by-side setup and multiple cameras. The most I can hope for is to start a sequence and then, using another app, manually pull some wide field/color shots for fun.

Don’t hold your breath for a requested feature (they never happen).

Prism is superb, but I’ve not switched to it yet; I have some issues at this time and I’m too busy acquiring data to work on them. It’s amazing in what it can do… it’s also amazing in what it can’t do. I’ve found no way to add gain/exposure data to the filenames of subs. (Pretty basic feature.)

Remember chaps I am not one of the SGP developers. I have no control over what Ken and Jared do.

All I can do is say what I think about this, and it’s my opinion that this is far more complex than you realise. For example if you are dithering you can’t be imaging or focusing in any acquisition chain. Dither will spoil an image and upset focusing.

Expecting users to manage their imaging in a way that assumes that one imaging chain has some sort of priority will at best delegate synchronising problems to the user. It will generate unending support problems.

The shortcuts suggested to get started will prove to be unworkable.

The fact that no one has implemented a successful product with this should be a big clue that this is more difficult than you think.

I’m basing my opinion on over 30 years in software development for a scientific instrument manufacturer. This is exactly the sort of automation product that we would get requests for, that on the face of it looks easy to implement. I was on teams implementing some of them and the only reliable way was to keep them simple. If they weren’t then the work and complexity went through the roof. We had, by comparison, unlimited resources, teams of developers working full time, with separate documentation and marketing people.

I’m sorry to be so negative about this, but I think it’s better to be realistic about how I see what the prospects for this are.

I appreciate the honesty and the level of difficulty that is involved with this. It’s still a great program and has alleviated a lot of headaches that caused me to leave the hobby in 2011. Love the mosaic end of the deal.
Hope there is a solution, but if it is not possible, I’m not going to jump out a window over it.

I would generally agree. I really do like SGP much better than the old-school systems like MaxIm that I had used since the mid-90s, and I certainly do not plan on dumping it entirely. Having said that, if there is other SW, now or in the future, that allows good two-camera function and SGP does not, I would use that instead of SGP for those situations. Just common sense, really.

I really think that the best solution might be to work with those user-coders that are interested in implementing this in an external software.

So I guess what multi-camera users like me really want to know at this point is what the plans are regarding two-camera function:

  1. We have given up this idea - not ever gonna happen
  2. We may do it at some point but don’t hold your breath
  3. We will add functionality to allow external programs to manage this and plan to do this in the next (time frame)

Thanks

A possible alternative:
I have described in detail in a couple of other threads my use of 2 and 3 scopes/cameras all operating independently, with no dithering performed by the controlling SGP program. It is very efficient, producing close to 200% effective imaging for a 2-camera setup. The only real lost images are slave images being taken when a meridian flip or a target change occurs. This has worked wonderfully for me. Here is a way to introduce dithering:

Assume a single target imaging for 3 hours. Normally you would define 1 single target that would take all the images for the 3 hour session on each camera.

Instead, set up 6 targets, each with appropriate start/stop times so they follow each other in order. All six targets of course use the same basic coordinates, differing only enough to produce the desired dither. Each target would need to Slew and Center, but after the first this would be fairly fast. The slave targets would include an initial delay to guarantee they start their first image after the Master finishes the Slew/Center.

Not particularly elegant, and definitely a nuisance to program, but should work.
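
The nuisance part - writing out six near-identical target coordinates by hand - is the kind of thing a small helper script could generate. A minimal sketch, assuming coordinates in degrees and a chosen dither radius (the function name and offset scheme are my own, not anything SGP provides):

```python
# Sketch of generating the six slightly-offset "targets" described above:
# the same field, nudged by a small random dither offset each time.
# Illustrative only; a real version should scale the RA offset by
# 1/cos(Dec) for fields far from the celestial equator.

import random

def make_dithered_targets(ra_deg, dec_deg, n=6, dither_arcsec=30.0):
    """Return n (ra, dec) pairs, each offset by up to dither_arcsec."""
    step = dither_arcsec / 3600.0  # arcseconds -> degrees
    targets = []
    for _ in range(n):
        d_ra = random.uniform(-step, step)
        d_dec = random.uniform(-step, step)
        targets.append((ra_deg + d_ra, dec_deg + d_dec))
    return targets

# Example: six dithered copies of a field near RA 83.8 deg, Dec -5.4 deg.
for ra, dec in make_dithered_targets(83.8, -5.4):
    print(f"RA {ra:.5f}  Dec {dec:.5f}")
```

The printed pairs would then be entered as the coordinates of the six sequence targets.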

I have indeed thought about that as a workaround, but it just seemed a bit of a kludge, so I have not done it. If one could have some sort of input script to do that for a given object, it might be more practical, but that would also need changes to SGP, if I am not mistaken. Plus, to paraphrase “Jaws”: “We’re gonna need a bigger input box.” :wink:

One could similarly use start/stop times coordinated between instances to sync the two instances - but that is equally kludgy.

Hi @Chris,

Again I think your input is good and I don’t doubt the experiences you’ve had in your line of work - these are good reminders to stay focussed on what is important and not get sucked down the rabbit hole.

I do think, though, that maybe you underestimate the level of understanding that people might have, especially those who end up with a dual scope setup. Yes, it won’t be for everyone, but these synchronisation problems are already being dealt with in various ways, and I think the general impact on dithering in a multi scope setup is well understood. Also, there are applications which offer this, APT being the obvious one, which implements a master/multi-slave system quite well (although it has other shortcomings).

Taking everything into account I would say that I agree with you that a comprehensive implementation of multi scope imaging is going to be very complex, not cost effective and for those reasons won’t happen any time soon - if ever. I don’t agree with this though:

I don’t think anyone, with maybe the exception of the developers, would be in a position to make such a call yet. In any case why not leave it up to them?

Genius! Could get messy with mosaics though :smile:

Maybe on the surface it seems so, but not everyone is an active forum participant. If people don’t participate in a poll, it doesn’t mean they are not interested in the feature. It’s always “somebody else is taking care of it, so why should I bother?” There is a sizable percentage of users who would greatly appreciate a second-camera control feature. And they are ready to pay extra for it - well, most likely they are, because if you can properly control a second camera, you can save money on a second computer, be more efficient with your image acquisition, and be at ease while doing so, which is very important for all of us. Anyway, it would be fantastic if our favourite SGPro developers decided to implement this feature!


@mtau

Guilty as charged, and I think you are right: there are quite a lot of people out there who would love this functionality and would be prepared to pay for it. I am not really a forum person, but I do run a triple widefield rig using three instances of SGP on different old laptops. I would definitely support multi-camera software and would view that development both as a benefit to me and others using multi systems, and also as an investment, in that SGP would be leaps ahead of its competitors. Once you get to two, the “speed” of capture is mesmerising, and I have little doubt that many more people would follow. I know nothing of programming, so I have to take everyone’s word for it that this is difficult to implement in a form flexible enough for all the different systems out there. I suspect that people like me who are trying to work with multi systems are finding their own ways, but I would say that double or triple “speed” systems are real weather beaters, and I would strongly recommend them once you get your head around the cable complexity!

I used to use the software in MaxIm 5 that CCDMan refers to, quite successfully actually, but the master/slave system it used was linked to specific computers and, in theory, used non-licensed multiple instances of MaxIm. Once M6 came out I abandoned that and have been on SGP ever since, after being persuaded to look.

I don’t do anything special with my captures. A very good imager always encouraged me to go after as many frames as possible, so I tend to go for just one object a night/multiple nights. I use an unguided mount, which just makes life easier, and I point the main central system at whatever target. The secondary systems on either side are pointed very close, but not at exactly the same point, and I set similar runs in terms of time and numbers of frames. This is often L or Ha in the central system and L or RGB on either side, or O and S, or H and H, or whatever - flexibility is everything to me. Once the central system has plate solved, I press Run Sequence in the other two systems (obviously no mount is connected on these) and walk away once the first frame is in on each computer. I don’t dither at the moment, as I have found that when I combine e.g. L frames from different systems, the noise largely reduces to very acceptable levels. I accept that when there is a flip I will potentially lose the frame from either side system. In reality this is rare, and by manually synchronising the start times so they are all running focus prior to capture, the frames do tend to end more or less together. I haven’t really messed around with imposing delays etc., and apart from maybe adding an extra frame in the secondary systems, I rely on gut feel and occasionally calculations for timings.

Just my tuppence-worth. I’ll go back to my hole ;-). I don’t see this as clunky, just an acceptable and occasional loss of data which is more than made up for by doubling or tripling the “speed” of capture. Linking and coordinating everything together however would be great.

Elaborating on my post above about using multiple copies of the same target with a slight offset on each one.
If you are imaging the same target over several nights, you can easily get an automatic dither from the simple fact that the center process is likely to land at a random offset of several pixels each night. This assumes your system is not set up to insist on 1 or 2 pixel accuracy for the center operation. By setting your requested accuracy to perhaps around 20 pixels, you should get a good random distribution. If you have imaged the target over 5 or 6 nights, you will automatically have a decent dither across the full collection of images.

For 1 or 2 nights of imaging, you could also accomplish this without the need for altering the individual target coordinates by interspersing a slew far enough away from the target to produce a random center on return. That target for the slew would not need to do a center, so would be very fast.
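
The "centering tolerance as free dither" idea above can be sketched as a small simulation: if the center operation is allowed to settle anywhere within ~20 px of the goal, each night's pointing lands at a different random offset. Illustrative only; the uniform-disc model of the centering residual is my own assumption:

```python
# Sketch of the per-night random centering offset described above,
# modelled as a uniform random point within a disc of the requested
# centering tolerance. Illustrative assumption, not measured behaviour.

import math
import random

def nightly_center_offset(tolerance_px=20.0):
    """Simulate one night's residual centering error in pixels."""
    r = tolerance_px * math.sqrt(random.random())  # uniform over the disc
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (r * math.cos(theta), r * math.sin(theta))

# Over several nights the offsets form a serviceable dither pattern:
offsets = [nightly_center_offset() for _ in range(6)]
```

Six nights of such offsets spread the target across the sensor by up to ~20 px in each direction, which is in the range of a deliberate dither.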