Hi.

On Linux (including Yellow Dog Linux), programs currently create SPU threads using the libspe2 library. I have a question about how tasks are scheduled on the SPUs.

Suppose a program (A) is running with 5 SPU threads, and then another program (B) starts running with 4 (or more) threads. In this situation, how does the kernel schedule the threads? (A minimal sketch of this setup follows the list below.)
1-1. All threads of program (A) are contexted out, and then all threads of program (B) are contexted in.
1-2. 3 threads of (B) run and 1 thread waits (because only 3 SPUs are left free).
1-3. Some other method.
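For concreteness, this is roughly what the setup above looks like with libspe2: a minimal sketch, assuming an embedded SPU image symbol spu_worker (hypothetical name) and the common one-PPU-pthread-per-SPE-context pattern. Program (A) would run 5 of these contexts, program (B) 4.

#include <stdio.h>
#include <pthread.h>
#include <libspe2.h>

#define NUM_SPU_THREADS 5

/* Hypothetical embedded SPU image; any SPU program handle works here. */
extern spe_program_handle_t spu_worker;

static void *run_context(void *arg)
{
    spe_context_ptr_t ctx = (spe_context_ptr_t)arg;
    unsigned int entry = SPE_DEFAULT_ENTRY;

    /* Blocks until the SPU program stops; while it runs, the kernel
     * may context-switch this SPE context in and out of physical SPUs. */
    if (spe_context_run(ctx, &entry, 0, NULL, NULL, NULL) < 0)
        perror("spe_context_run");
    return NULL;
}

int main(void)
{
    spe_context_ptr_t ctx[NUM_SPU_THREADS];
    pthread_t tid[NUM_SPU_THREADS];
    int i;

    /* Creating more contexts than there are physical SPEs is legal;
     * the kernel multiplexes them. */
    printf("usable SPEs: %d\n", spe_cpu_info_get(SPE_COUNT_USABLE_SPES, -1));

    for (i = 0; i < NUM_SPU_THREADS; i++) {
        ctx[i] = spe_context_create(0, NULL);
        spe_program_load(ctx[i], &spu_worker);
        pthread_create(&tid[i], NULL, run_context, ctx[i]);
    }
    for (i = 0; i < NUM_SPU_THREADS; i++) {
        pthread_join(tid[i], NULL);
        spe_context_destroy(ctx[i]);
    }
    return 0;
}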
And if the programs have priorities in the same situation (one hedged way to express such priorities is sketched after this list):

2-1. All threads of (A) are contexted out, and all threads of (B) are contexted in. (priority: (A) < (B))
2-2. All threads of (B) are contexted in (4 threads), and some threads of the lower-priority program (A) stay contexted in (4 of the 5). (priority: (A) < (B))
2-3. All threads of (A) keep running, and all threads of (B) are contexted in. (priority: (A) > (B))
2-4. All threads of (A) keep running, and some threads of (B) are contexted in (3 threads). (priority: (A) > (B))
2-5. Some other method.
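One hedged way to express such priorities, assuming the kernel's SPU scheduler takes an SPE context's priority from the PPU thread that calls spe_context_run (the mainline spusched behaves this way; older kernels may simply not honor priorities), is to give the controlling pthread a real-time scheduling policy. run_context here is the thread body from the sketch above:

#include <pthread.h>
#include <sched.h>
#include <libspe2.h>

extern void *run_context(void *arg);  /* PPU thread body from the sketch above */

/* Launch one SPE context with its controlling PPU thread at a given
 * real-time priority. Requires root or CAP_SYS_NICE. */
int launch_with_priority(spe_context_ptr_t ctx, pthread_t *tid, int prio)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };
    int rc;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_RR);
    pthread_attr_setschedparam(&attr, &sp);

    rc = pthread_create(tid, &attr, run_context, ctx);
    pthread_attr_destroy(&attr);
    return rc;
}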
And an additional question: does YDL (or any other Linux) have an SPU scheduling routine?

Thanks.
If a program uses more SPUs than the Cell has, and you use spu_stop when the program does not have anything to do, you allow the kernel to do a context swap for you.

The way I have made the logic in spexms is to use as many SPUs as possible at any given time, but to allow any other program access at any given time. However, in spexms (sourceforge.org), if it is the same program, it will be DMA-ing code as data and hence will not generate any context swap at all, unless some other program is also using the SPEs at the same time.
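To make the "stop when the program has nothing to do" idea concrete, here is a hedged SPU-side sketch (not spexms's actual code; the mailbox protocol is invented for illustration). Returning from main, like spu_stop, halts the SPU, spe_context_run returns on the PPU side, and the kernel is then free to hand the physical SPU to another context:

#include <spu_intrinsics.h>
#include <spu_mfcio.h>

int main(unsigned long long speid, unsigned long long argp,
         unsigned long long envp)
{
    (void)speid; (void)argp; (void)envp;

    for (;;) {
        /* Block until the PPU mails us a work item (0 = no more work). */
        unsigned int work = spu_read_in_mbox();
        if (work == 0)
            break;                 /* nothing to do: fall through and stop */
        /* ... process the work item ... */
        spu_write_out_mbox(work);  /* report completion to the PPU */
    }
    return 0; /* stop-and-signal: lets the kernel context-swap us out */
}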