* ggml : add RPC backend. The RPC backend proxies all operations to a remote server which runs a regular backend (CPU, CUDA, Metal, etc.); a rough sketch of the socket handling follows the list.
* set TCP_NODELAY
* add CI workflows
* Address review comments
* fix warning
* implement llama_max_devices() for RPC
* Address review comments
* Address review comments
* wrap sockfd into a struct
* implement get_alignment and get_max_size
* add get_device_memory
* fix warning
* win32 support
* add README
* readme : trim trailing whitespace
* Address review comments
* win32 fix
* Address review comments
* fix compile warnings on macos
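Two of the items above concern the transport layer: enabling TCP_NODELAY so small RPC messages are sent without Nagle-induced delays, and wrapping the raw socket descriptor in a struct so it is closed automatically. Below is a minimal, hypothetical sketch of that idea; the names (`socket_t`, `make_socket`) and the POSIX-only error handling are illustrative assumptions, not the backend's actual code, which also covers the win32 case.

```cpp
// Hypothetical sketch: RAII wrapper around a socket descriptor plus TCP_NODELAY.
// POSIX-only for brevity; the real backend also handles win32 sockets.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <memory>

struct socket_t {
    int fd;
    explicit socket_t(int fd) : fd(fd) {}
    // closing the descriptor is tied to the object's lifetime
    ~socket_t() { if (fd >= 0) close(fd); }
};

// Create a TCP socket with TCP_NODELAY enabled; returns nullptr on failure.
static std::shared_ptr<socket_t> make_socket() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return nullptr;
    }
    int flag = 1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) < 0) {
        close(fd);
        return nullptr;
    }
    return std::make_shared<socket_t>(fd);
}

int main() {
    auto sock = make_socket();
    std::printf("socket created: %s\n", sock ? "yes" : "no");
    return 0;
}
```

Tying the descriptor's lifetime to a small struct avoids leaking sockets on early-return error paths, which is the usual motivation for this kind of wrapper.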
The new rpc-server example binary is built with the following CMake rules:

    add_executable(rpc-server rpc-server.cpp)
    target_link_libraries(rpc-server PRIVATE ggml llama)
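For context on how the two pieces fit together: `rpc-server` is the remote end built by the rules above, and a client initializes the RPC backend with the server's endpoint. A rough usage sketch follows; it assumes the `ggml-rpc.h` header added in this change exposes `ggml_backend_rpc_init()` and `ggml_backend_rpc_get_device_memory()` taking a `host:port` endpoint string (treat the exact names and signatures as assumptions).

```cpp
// Hypothetical client-side usage sketch: connect the RPC backend to a running
// rpc-server instance and query the remote device memory. The endpoint value
// and function names are assumptions based on the commit list above, not a
// verified API reference.
#include <cstdio>
#include "ggml-backend.h"
#include "ggml-rpc.h"

int main() {
    const char * endpoint = "192.168.1.10:50052"; // address of the remote rpc-server
    ggml_backend_t backend = ggml_backend_rpc_init(endpoint);
    if (!backend) {
        std::fprintf(stderr, "failed to connect to %s\n", endpoint);
        return 1;
    }
    size_t free_mem = 0, total_mem = 0;
    ggml_backend_rpc_get_device_memory(endpoint, &free_mem, &total_mem);
    std::printf("remote memory: %zu free / %zu total bytes\n", free_mem, total_mem);
    ggml_backend_free(backend);
    return 0;
}
```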