Share and Share Alike



Using System V Shared Memory in MRI Ruby Projects


1. Share and Share Alike: Using System V shared memory constructs in MRI Ruby projects.

2. Who am I? Jeremy Holland, Senior Lead Developer at CentreSource in beautiful Nashville, TN. Math and algorithms nerd. Scotch drinker. @awebneck, freenode: awebneck, etc.

3. The problem: a FREAKIN' HUGE binary tree.

4. How huge? Huge. Millions of nodes, each holding ~500 bytes, i.e. gigabytes of data; a k-d tree of non-negligible dimension (varied, around 6-10). No efficient existing implementation would serve the purposes needed: fast search and reasonably fast consistency.

5. Things we considered... and discarded: index the tree, persist it to disk, and reload it for each query. Loading umpteen gigs of data from disk takes a spell: WAY TOO SLOW.

6. Things we considered... and discarded: index once and hold it in memory. Issues both with maintaining index consistency and balance, and it is difficult to share among many processes/threads without duplicating it in memory.

7. Things we considered... and discarded: DRb. It simulates memory shared by multiple processes, but not really: while the interface to search the tree is available to many different processes, the search itself takes place in the single, server-based process.

8. Enter shared memory. Benefits: the shared segment is actually accessible by multiple, wholly separate processes; built-in access control and permissions; built-in per-segment semaphore. Drawbacks: with great power comes great responsibility, and the segment acts like a byte array, so serialization is manual.

9. Ruby-level vs. C-level memory paradigm. Ruby: everything goes on the heap; garbage collected, so no explicit freeing of memory. C: local vars, functions, etc. live on the stack; heap allocations are explicit (malloc); heap frees are explicit; no GC.

10. Ruby: before the process starts.

11. Ruby: the process starts; the heap begins to grow.

12. Ruby:
the process runs; the heap continues to grow with additional allocations.

13. Ruby: the process runs; the GC frees allocated memory that is no longer needed...

14. Ruby: ...so it can be reallocated for new objects.

15. Ruby: the process ends; the heap is freed.

16. C: the process starts; the stack grows to hold functions and local vars.

17. C: the process runs; memory is explicitly allocated from the heap in the form of arrays, structs, etc.

18. C: the process runs; a function is called and goes on the stack.

19. C: the process runs; the function returns and is popped off the stack.

20. C: the process runs; the item in the heap, no longer needed, is explicitly freed.

21. C: the process runs; a new array is allocated from the heap.

22. C: the process ends (untidily); the stack and heap are reclaimed by the OS as free.

23. TRUTH: Ruby itself has no concept of shared memory.

24. TRUTH: C does.

25. Shared memory: a running process (as viewed from the C level).

26. Shared memory: a shared segment is created with an explicit size, like allocating an array.

27. Shared memory: the segment is attached to the process at a virtual address...

28. Shared memory: ...yielding to the process a pointer to the beginning of the segment.

29. Shared memory: a new process starts, wishing to attach to the same segment.

30. Shared memory: it asks the OS for the identifier of the segment based on an integer key. "Are you there?" "Yup!"

31. Shared memory: ...and attaches it to itself in a fashion similar to the original.

32. Shared memory: both processes can now, depending on permissions, read and write the segment simultaneously!

33. Shared memory: the first process finishes with the segment and detaches it...

34. Shared memory: ...and thereafter ends...

35. Shared memory: ...leaving only the second process, still attached.

36. Shared memory:
now the second process detaches...

37. Shared memory: ...and subsequently ends.

38. Shared memory: note that the shared segment still persists in memory; it can be reattached by another process with permission to do so.

39. Shared memory: later, a new process comes along and explicitly destroys the segment, all processes being finished with it.

40. How it's done: configuration. Precisely how much memory can be drafted into service for sharing is controlled by kernel parameters: kernel.shmall, the maximum number of memory pages available for sharing (should be at least ceil(shmmax / PAGE_SIZE)); kernel.shmmax, the maximum size in bytes of a single shared segment; kernel.shmmni, the maximum number of shared segments allowed.

41. How it's done: configuration. To view your current settings:

42. How it's done: configuration. Or...

43. How it's done: configuration. Setting the values temporarily can be accomplished with sysctl...

44. How it's done: configuration. ...or more permanently by editing /etc/sysctl.conf.

45. How it's done: creating new and acquiring existing segments. int shmget(key_t key, size_t size, int shmflag). key_t key: the integer key identifying the segment, or IPC_PRIVATE. size_t size: the size of the segment in bytes (rounded up to the next multiple of PAGE_SIZE). int shmflag: a mode flag consisting of the standard owner/group/world permission bits, OR'd with IPC_CREAT (to create the segment, or attach to an existing one) and optionally IPC_EXCL (to throw an error if it already exists).

46. How it's done: creating new and acquiring existing segments. int shmget(key_t key, size_t size, int shmflag). Returns a valid segment identifier on success, or -1 on error.

47. How it's done: attaching segments. void *shmat(int shmid, const void *shmaddr, int shmflag). shmid: the integer identifier returned by a call to shmget. shmaddr: a pointer to the address at which to attach the segment.
You almost always want to leave this NULL so that the system will place the segment wherever there's room for it.

48. How it's done: attaching segments. void *shmat(int shmid, const void *shmaddr, int shmflag). shmflag: several flags for controlling the attachment, most importantly SHM_RDONLY (which does what it looks like). Returns a void pointer to the start of the attached segment, or (void *) -1 on error.

49. How it's done: detaching segments. int shmdt(const void *shmaddr). shmaddr: the pointer returned by the call to shmat. Returns 0 on success, or -1 on error.

50. How it's done: getting segment information. int shmctl(int shmid, int cmd, struct shmid_ds *buf). shmid: the identifier returned by shmget. cmd: the command to execute; for this purpose, IPC_STAT. buf: a shmid_ds struct to fill in.

51. How it's done: getting segment information.

    struct shmid_ds {
      struct ipc_perm shm_perm;   /* permissions/ownership */
      size_t          shm_segsz;  /* size of segment in bytes */
      time_t          shm_atime;  /* last attachment time */
      time_t          shm_dtime;  /* last detachment time */
      time_t          shm_ctime;  /* last change time */
      pid_t           shm_cpid;   /* pid of creator */
      pid_t           shm_lpid;   /* pid of last attacher */
      shmatt_t        shm_nattch; /* # of attached processes */
    };

52. How it's done: destroying...
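The shmget/shmat/shmdt/shmctl sequence from the slides can be exercised from plain MRI Ruby by binding libc through the stdlib Fiddle gem. This is one way to experiment, not necessarily what the talk itself used. A minimal single-process round trip, assuming Linux (key_t is an int, IPC_CREAT is octal 01000, IPC_RMID is 0); shm_roundtrip is a hypothetical helper name:

```ruby
require 'fiddle'

LIBC = Fiddle.dlopen(nil)

# libc bindings for the calls described in the slides (Linux assumed)
SHMGET = Fiddle::Function.new(LIBC['shmget'],
  [Fiddle::TYPE_INT, Fiddle::TYPE_SIZE_T, Fiddle::TYPE_INT], Fiddle::TYPE_INT)
SHMAT = Fiddle::Function.new(LIBC['shmat'],
  [Fiddle::TYPE_INT, Fiddle::TYPE_VOIDP, Fiddle::TYPE_INT], Fiddle::TYPE_VOIDP)
SHMDT = Fiddle::Function.new(LIBC['shmdt'],
  [Fiddle::TYPE_VOIDP], Fiddle::TYPE_INT)
SHMCTL = Fiddle::Function.new(LIBC['shmctl'],
  [Fiddle::TYPE_INT, Fiddle::TYPE_INT, Fiddle::TYPE_VOIDP], Fiddle::TYPE_INT)

IPC_PRIVATE = 0      # let the kernel pick a fresh key
IPC_CREAT   = 0o1000 # Linux value of the create flag
IPC_RMID    = 0      # shmctl command that destroys a segment

# Hypothetical helper: create, attach, write, read back, detach, destroy.
def shm_roundtrip(message)
  shmid = SHMGET.call(IPC_PRIVATE, 4096, IPC_CREAT | 0o600)
  raise 'shmget failed' if shmid == -1

  ptr = SHMAT.call(shmid, nil, 0)     # NULL address: attach wherever there is room
  ptr[0, message.bytesize] = message  # the segment acts like a plain byte array
  copy = ptr[0, message.bytesize]     # manual (de)serialization is on you

  SHMDT.call(ptr)                     # detach from this process...
  SHMCTL.call(shmid, IPC_RMID, nil)   # ...and destroy the segment outright
  copy
end

puts shm_roundtrip('hello from a SysV segment')
```

Because the slide's warning is real: nothing here serializes for you, so a tree of structs would need an explicit byte layout before it could live in the segment.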

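For the configuration slides: on Linux, the kernel.shmall, kernel.shmmax, and kernel.shmmni values can be read straight out of /proc, which is the same source sysctl consults. A minimal sketch, assuming a Linux /proc layout; shm_limits is a hypothetical helper name:

```ruby
# Read the current SysV shared memory limits from /proc (Linux-only).
def shm_limits
  %w[shmall shmmax shmmni].to_h do |name|
    [name, Integer(File.read("/proc/sys/kernel/#{name}").strip)]
  end
end

shm_limits.each { |name, value| puts "kernel.#{name} = #{value}" }
```

Changing a value temporarily is `sysctl -w kernel.shmmax=<bytes>` as root; adding a `kernel.shmmax = <bytes>` line to /etc/sysctl.conf makes it survive reboots, as the slides note.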

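The two-process scenario in the slides, where a second process attaches the same segment and sees the first process's writes, can be sketched with fork: the child attaches and writes, then the parent attaches afterward and reads the same bytes. Assumes Linux constants (IPC_CREAT 0o1000, IPC_RMID 0); fork_and_share is a hypothetical helper, and a genuinely separate program would look the segment up via a well-known key passed to shmget rather than an inherited shmid:

```ruby
require 'fiddle'

# Hypothetical helper: parent creates a segment, child writes, parent reads.
def fork_and_share(message)
  libc = Fiddle.dlopen(nil)
  shmget = Fiddle::Function.new(libc['shmget'],
    [Fiddle::TYPE_INT, Fiddle::TYPE_SIZE_T, Fiddle::TYPE_INT], Fiddle::TYPE_INT)
  shmat = Fiddle::Function.new(libc['shmat'],
    [Fiddle::TYPE_INT, Fiddle::TYPE_VOIDP, Fiddle::TYPE_INT], Fiddle::TYPE_VOIDP)
  shmctl = Fiddle::Function.new(libc['shmctl'],
    [Fiddle::TYPE_INT, Fiddle::TYPE_INT, Fiddle::TYPE_VOIDP], Fiddle::TYPE_INT)

  shmid = shmget.call(0, 4096, 0o1000 | 0o600)  # IPC_PRIVATE, IPC_CREAT | 0600
  raise 'shmget failed' if shmid == -1

  child = fork do
    # A second process attaches the same segment and writes into it;
    # exiting detaches it automatically, but the segment persists.
    ptr = shmat.call(shmid, nil, 0)
    ptr[0, message.bytesize] = message
  end
  Process.wait(child)

  ptr = shmat.call(shmid, nil, 0)  # parent attaches after the child has ended
  copy = ptr[0, message.bytesize]  # ...and sees the child's bytes
  shmctl.call(shmid, 0, nil)       # IPC_RMID: destroy, everyone is finished
  copy
end

puts fork_and_share('written by the child process')
```

This mirrors the slide sequence directly: the segment outlives the writer, stays readable to any attached (or permitted) process, and disappears only on the explicit destroy.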