Write your own Einstein@home screensaver

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6542
Credit: 287219771
RAC: 94112

For that matter, does anyone

For that matter, does anyone know the mix of i686 vs. x86_64 as E@H hosts ?? Or even BOINC generally ?

[ At the risk of confusion :

- consider 'i686' as a recent processor built with 32 bit operands/addressing based upon the Intel IA-32 architecture ( this began with, say, the 80386 and onwards, including manufacturers other than Intel ... beware of considerable other detail here ).

- whereas x86_64 ( also called AMD64 or Intel 64 ) uses 64 bit operands/addressing, extending that same x86 lineage ( and is not to be confused with Itanium's IA-64 ).

- to be exact ( and placing copyright kerfuffles aside ) not all IA-32 implementations behave identically, nor do all x86_64 ones. But there is a common subset of either 'branch' that is used for the purpose of compiling etc to binaries.

- if you have a very specific processor in mind one can build for that too, provided the 'port' exists. For instance there is a gcc for Adapteva Epiphany. ]
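
As a rough illustration only ( a sketch, nothing from the E@H codebase ), the usual compiler-defined macros let C++ code report at build time which of these targets it was compiled for :

[pre]#include <iostream>

int main() {
    // The practical difference for an application : pointer/addressing width.
    std::cout << "pointer width : " << sizeof(void*) * 8 << " bits" << std::endl;

#if defined(__x86_64__) || defined(_M_X64)
    std::cout << "built for x86_64" << std::endl;
#elif defined(__i386__) || defined(_M_IX86)
    std::cout << "built for IA-32 ( i686 class )" << std::endl;
#else
    std::cout << "built for some other architecture" << std::endl;
#endif

    return 0;
}[/pre]

With mingw-w64, for instance, the 32 and 64 bit Windows builds simply come from the i686-w64-mingw32 and x86_64-w64-mingw32 toolchains respectively.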

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 947
Credit: 25167626
RAC: 10

RE: what I want is the

Message 78342 in response to message 78340

Quote:
what I want is the mingw-w64 product. This will do either 32 or 64 bit Windows builds

Yep, have a look at the BRP build script and Makefiles for details...

Oliver

 

Einstein@Home Project

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 947
Credit: 25167626
RAC: 10

RE: does anyone know the

Message 78343 in response to message 78341

Quote:
does anyone know the mix of i686 vs. x86_64 as E@H hosts

Unfortunately that isn't tracked in the project DB but could be extracted from the scheduler logs, which I want to avoid for this purpose. However, in the case of Einstein@Home we can say that we're clearly 64 bit dominated because of the large share of institutional Linux clusters that support us. The Windows fraction should also mostly be running 64 bit versions by now, and that should be even more pronounced on the OS X side of things, as Apple users typically upgrade their OS more often, in particular since Mountain Lion ( OS X available for free ).

Best,
Oliver

 

Einstein@Home Project

Claggy
Joined: 29 Dec 06
Posts: 560
Credit: 2694028
RAC: 0

RE: RE: does anyone know

Message 78344 in response to message 78343

Quote:
Quote:
does anyone know the mix of i686 vs. x86_64 as E@H hosts

Unfortunately that isn't tracked in the project DB but could be extracted from the scheduler logs, which I want to avoid for this purpose.


Projects with newer BOINC server software have an 'Average computing' column on the applications page, which might show the gist of what you're asking for:

Setiathome applications

Albertathome applications

(Einstein's app page doesn't have it, so looking at other projects should give you a rough idea)

Claggy

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 947
Credit: 25167626
RAC: 10

Those are derived from the

Message 78345 in response to message 78344

Those are derived from the plan classes and don't give you any usage statistics in terms of host numbers. Again, they're referenced in the scheduler.log and could be counted there...

Oliver

 

Einstein@Home Project

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4273
Credit: 245297889
RAC: 11784

RE: RE: RE: does anyone

Message 78346 in response to message 78344

Quote:
Quote:
Quote:
does anyone know the mix of i686 vs. x86_64 as E@H hosts

Unfortunately that isn't tracked in the project DB but could be extracted from the scheduler logs, which I want to avoid for this purpose.


Projects with newer BOINC server software have an 'Average computing' column on the applications page, which might show the gist of what you're asking for:

Setiathome applications

Albertathome applications

(Einstein's app page doesn't have it, so looking at other projects should give you a rough idea)

Claggy

This doesn't help much. For the Gamma-Ray search there is no 64 bit (Windows) app, so there is no way to tell whether (or to what share) the 32 bit app version is run on a 32 or 64 bit system.

And this, btw., is also true for the scheduler logs: you can tell for which platform work was assigned to a host, but not whether that platform is the "native" platform of the host. There are "native" 32 bit systems that can run 64 bit applications as well (like Mac OS X 10.5 "Leopard").

BM

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6542
Credit: 287219771
RAC: 94112

RE: RE: what I want is

Message 78347 in response to message 78342

Quote:
Quote:
what I want is the mingw-w64 product. This will do either 32 or 64 bit Windows builds

Yep, have a look at the BRP build script and Makefiles for details...


Thanks Oliver ! That looks very useful indeed .... I will pore over that! :-)

Thanks for the answers. As for the 32/64 host ratio, I was just curious to see if there was an off-the-shelf answer. No biggie if this is not known. From an app's point of view what matters is whether the relevant 32 bit interfaces exist on the target ( regardless of detailed implementation ).

[ So my apps are cool provided they have the right libs & links. If it ever matters in the future then both app types can be provided via build switches. ]

The dynamic requirements are pretty straightforward :

- C runtime obviously.

- it will still run if no BOINC client is present [ init fails -> standalone ], but then obviously without any WU and user info etc.

- SDL/GLEW/GL etc requirements established on app startup - GO/NO-GO ...
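
To sketch that GO/NO-GO startup logic ( illustrative only, not the actual screensaver code; it assumes the stock boinc_init()/boinc_is_standalone() API plus SDL2 and GLEW ) :

[pre]#include <cstdio>
#include <cstdlib>

#include "boinc_api.h"   // boinc_init(), boinc_is_standalone()
#include <SDL.h>         // SDL_Init(), SDL_GetError()
#include <GL/glew.h>     // glewInit(), needs a current GL context first

int main(int argc, char** argv) {
    (void)argc;
    (void)argv;

    // BOINC is optional : if init fails we simply run standalone,
    // without any workunit or user information.
    bool standalone = (boinc_init() != 0) || boinc_is_standalone();
    if(standalone) {
        std::fprintf(stderr, "No BOINC client data, running standalone.\n");
    }

    // SDL and ( once a GL context exists ) GLEW are GO/NO-GO.
    if(SDL_Init(SDL_INIT_VIDEO) != 0) {
        std::fprintf(stderr, "SDL_Init() failed : %s\n", SDL_GetError());
        return EXIT_FAILURE;     // NO-GO
    }

    // ... create the window and OpenGL context here, then :
    if(glewInit() != GLEW_OK) {
        std::fprintf(stderr, "glewInit() failed.\n");
        return EXIT_FAILURE;     // NO-GO
    }

    // GO : enter the rendering loop ...
    return EXIT_SUCCESS;
}[/pre]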

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 947
Credit: 25167626
RAC: 10

RE: - SDL/GLEW/G etc

Message 78348 in response to message 78347

Quote:
- SDL/GLEW/GL etc requirements established on app startup - GO/NO-GO ...

Roadmap for v0.2: try to build it as statically as possible to minimize external dependencies (at least SDL).

Oliver

 

Einstein@Home Project

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6542
Credit: 287219771
RAC: 94112

[pre]Spring has sprung the

[pre]Spring has sprung
the grass has rizz
I wonder where this project is ? :-)[/pre]
Heartbeat : thump ! :-0 :-)

No, this project ain't dead. Yes, I know you've heard that before. :-)

What an extraordinary winter ( of our discontent ), during which I've been unable to find prolonged, undistracted time to think. Having finally gained some requisite continuity, I have managed to cut another Gordian Knot ( not the first, see here ) and now have the graphics pipeline as a single cognitive/programmable entity. Basically this is courtesy of a 'good' abstraction in the form of the right C++ class.

The user of my ogl_utility library will 'only' need to :

- include the relevant headers

- be familiar with three relatively simple structures

- populate instances of those structures at run time

- naturally need to know how to write GLSL ( the code for the shaders, I can't automate that )

- present those structures when creating an instance of the RenderTask class. This object is intended to be persistent for the duration of program execution, say placed on the heap.

So that per rendering frame one only :

- renews the values of certain ( uniform ) variables, like a transform matrix representing camera position/scaling/rotation/perspective.

- triggers the rendering. This could be an entire set of the supernovae points as arranged in the Starsphere.

Hence during program execution there will be a relatively longer startup phase where the ducks are created, lined up and loaded into the OpenGL state machine. From then on however the animation will be quick ( or quack ? ) and smooth, a pretty short per-frame rendering loop in fact. The devilish detail has been in setting up the pipeline with ready made swap-in/swap-out entities. Here is the relevant class definition :

[pre]class RenderTask {
public :
    struct shader_group {
        const std::string vert_shader_source;
        const std::string frag_shader_source;
        Program::shaderDisposition disposition;
    };

    struct index_buffer_group {
        const GLvoid* buffer_data;
        GLuint bytes;
        GLuint indices;
        GLenum usage;
        GLenum index_type;
    };

    struct vertex_buffer_group {
        const GLvoid* buffer_data;
        GLuint bytes;
        GLuint vertices;
        GLenum usage;
        VertexBuffer::data_mix mix;
    };

    /**
     * \brief Constructor.
     *
     * \param s_group : a shader_group structure that specifies the key parameters
     *                  to construct Shader objects for this rendering task.
     * \param i_group : an index_buffer_group structure that specifies the key parameters
     *                  to possibly construct an IndexBuffer object for this rendering task.
     * \param v_group : a vertex_buffer_group structure that specifies the key parameters
     *                  to construct a VertexBuffer object for this rendering task.
     */
    RenderTask(RenderTask::shader_group s_group,
               RenderTask::index_buffer_group i_group,
               RenderTask::vertex_buffer_group v_group);

    /**
     * \brief Destructor.
     */
    virtual ~RenderTask();

    /**
     * \brief Add another correspondence between vertex buffer and the vertex shader.
     *
     * \param spec : an attribute specification as defined in the AttributeInputAdapter class.
     */
    void addSpecification(const AttributeInputAdapter::attribute_spec& spec);

    /**
     * \brief Create a correspondence between a uniform variable, as known
     *        by an OpenGL program object, and a position within client code.
     *
     * \param u_name : the name of the uniform variable.
     * \param source : an untyped pointer to client code where the value
     *                 may be uploaded from.
     */
    void setUniformLoadPoint(std::string u_name, GLvoid* source);

    /**
     * \brief Utilise this task ie. trigger rendering as per setup.
     */
    void utilise(GLenum primitive, GLsizei count);

    /**
     * \brief Acquire the OpenGL state resources for this task.
     */
    void acquire(void);

private:
    AttributeInputAdapter* m_attrib_adapt;
    FragmentShader* m_frag_shader;
    IndexBuffer* m_index_buffer;
    Pipeline* m_pipeline;
    Program* m_program;
    VertexBuffer* m_vertex_buffer;
    VertexFetch* m_vertex_fetch;
    VertexShader* m_vertex_shader;
};[/pre]

shader_group supplies the GLSL code to be inserted into the graphics pipeline, while index_buffer_group and vertex_buffer_group supply low level detail on the data inputs to the pipeline.

However this exposes very little of the underlying library ! It is unfortunate that some low level detail - the data types in the three structs - have to be there upfront. For those not in the know ( and I don't expect readers to trawl the remainder of this thread ), the problem to solve is that the OpenGL interface has dramatically morphed from a high level graphical thinking approach ( ~ 2000 ) to a much lower level generic hardware driver interface ( ~2009+ ). So it is inevitable that some dirty detail is present, mostly relating to data format/packing choices made by a library user. I believe I have the minimal set.

As for the addSpecification and setUniformLoadPoint methods, it is the responsibility of the library user to nominate correspondences between their client code and the state machine 'loading bays' that I have provided. I can't usefully second-guess any better than that without crippling variety. As an example here is a vertex shader ( the language is GLSL ) :

[pre]#version 150

// This is a vertex shader. Single color as uniform.

in vec2 position;

out vec3 pass_color;

uniform mat4 RotationMatrix;
uniform vec3 color;

void main()
{
    gl_Position = RotationMatrix * vec4(position, 0.0, 1.0);
    pass_color = color;
}[/pre]
This is very C/C++-like in syntax.

The 'position' variable is loaded per vertex/point in this case ( very many times per-frame in the case of the supernovae display ). The addSpecification method creates the desired correspondence on the pipeline input end for these.

The setUniformLoadPoint method would indicate that the 'color' and 'RotationMatrix' values need refreshing ( from a client side location ) for each frame.

These bindings are generated during the duck setup phase. For instance, what follows is the Starsphere::make_snrs() code [ where m_render_task_snr is a persistent pointer to a heap-based instance of RenderTask ] :

[pre]void Starsphere::make_snrs() {
    static glm::vec3 snr_color = glm::vec3(0.7, 0.176, 0.0);    // Supernovae are Sienna.

    GLfloat vertex_data[NSNRs * 3];

    // GLfloat mag_size=3.0;

    for(int i = 0; i < NSNRs; ++i) {
        // ... fill vertex_data[] with the coordinates of the i-th supernova remnant ...
    }

    RenderTask::shader_group s_group1 = {factory.createInstance("VertexShader_Pass")->std_string(),
                                         factory.createInstance("FragmentShader_Pass")->std_string(),
                                         Program::KEEP_ON_GOOD_LINK};

    RenderTask::index_buffer_group i_group1 = {NULL, 0, 0, 0, 0};    // With no index data the remaining fields are irrelevant.

    RenderTask::vertex_buffer_group v_group1 = {vertex_data,
                                                sizeof(vertex_data),
                                                NSNRs,
                                                GL_STATIC_DRAW,
                                                VertexBuffer::BY_VERTEX};

    m_render_task_snr = new RenderTask(s_group1, i_group1, v_group1);

    m_render_task_snr->addSpecification({0, "position", 3, GL_FLOAT, GL_FALSE});

    m_render_task_snr->setUniformLoadPoint("color", &snr_color);

    m_render_task_snr->setUniformLoadPoint("RotationMatrix", &m_rotation[0][0]);

    m_render_task_snr->acquire();

    return;
}[/pre]

The magic occurs with the final acquire() invocation. This is where the remainder of the ogl_utility library kicks in, hiding an utter blizzard of detail : all resource acquisition, the "business rules", and the order/sequence dependencies within the OpenGL state machine. There are myriad potential error messages generated if need be; you will get feedback on literally every possible fault that I could imagine. Perhaps in a later version this could be subdued, but so far : paranoia works ! In the Starsphere::render() method ( a regularly triggered callback ) there is one call to invoke the actual rendering activity :

[pre]m_render_task_snr->utilise(GL_POINTS, NSNRs);[/pre]
and the supernovae appear ! Muhahahaha ...... :-) :-)

[ Stay tuned for reasonable test versions .... ]

Cheers, Mike.

( edit ) Exceptionally sharp punters may notice that I have 'flattened' the supernovae vertices from three to two component vectors. This is test code of course, but ( inadvertently ) demonstrates the fine control one can have with little effort ! Geez Louise I sound like I am selling a vacuum cleaner .... :-)

( edit ) I should re-state that the vertex data ( the user-defined point-by-point geometric detail of what you want rendered on the screen ) is loaded once onto the video card in ( essentially ) array form and typically ought to be kept for the duration of the program. Suitably verified GLSL code, which we call 'shaders', is also loaded once and likewise kept. Per frame the library will manage swapping in/out of currency ( that is, telling the state machine to draw data from a given block in order to feed the pipeline ) any number of such data blocks ( with associated shader code ) as needed to complete some collection of rendering tasks. So there will be a RenderTask object created for each of, say : supernovae, pulsars, constellations, etc. In a higher level rendering loop one just calls the utilise() method for each until you have finished drawing a frame.

Well, the default library behaviour is to leave all suitable entities on-card once loaded. You can unload if you like .....
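
For instance, such a higher level per-frame loop might look like the sketch below ( illustrative only : apart from the supernovae task shown earlier, the other task names and counts are hypothetical placeholders ) :

[pre]void Starsphere::render(void) {
    // Per frame : first refresh the client-side values that the uniform load
    // points refer to ( the library reads them via the pointers handed to
    // setUniformLoadPoint() earlier ), e.g. recompute m_rotation for the
    // current camera position/orientation.

    // Then trigger each persistent RenderTask in turn.
    m_render_task_snr->utilise(GL_POINTS, NSNRs);
    m_render_task_pulsar->utilise(GL_POINTS, NPulsars);                    // hypothetical
    m_render_task_constellations->utilise(GL_LINES, NConstellationLines);  // hypothetical
}[/pre]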

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 947
Credit: 25167626
RAC: 10

RE: Geez Louise I sound

Message 78350 in response to message 78349

Quote:
Geez Louise I sound like I am selling a vacuum cleaner .... :-)


For me this pretty much works :-D

 

Einstein@Home Project
