LLM Inference Optimization Engineer
Preferred Networks · AI
Salary Range (USD)
Negotiable
Location
Tokyo, Japan (or remote within Japan)
Visa Support
Supported
Funding Stage
Unknown
Job Responsibilities
- Improve the inference engine powering our API service and maintain PLaMo implementations in open source projects such as vLLM (see the sketch below).
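For a concrete feel of the open-source side of this role, here is a minimal sketch of running a PLaMo checkpoint through vLLM's offline Python API. It assumes vLLM is installed and uses the public `pfnet/plamo-13b` checkpoint from https://huggingface.co/pfnet; the prompt and sampling settings are illustrative assumptions, not part of the posting.

```python
# Minimal sketch: generating text from a PLaMo checkpoint with vLLM's
# offline API. Assumptions: vLLM is installed (`pip install vllm`) and
# the public checkpoint "pfnet/plamo-13b" is used; prompt and sampling
# parameters below are illustrative.
from vllm import LLM, SamplingParams

# PLaMo checkpoints ship custom modeling code on the Hugging Face Hub,
# so trust_remote_code=True lets vLLM load that implementation.
llm = LLM(model="pfnet/plamo-13b", trust_remote_code=True)

sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)
outputs = llm.generate(["Preferred Networks is"], sampling)

# Each RequestOutput holds the completions generated for one prompt.
print(outputs[0].outputs[0].text)
```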
Engineering Culture & Tech Stack
Not specified
Raw Post
Preferred Networks | Tokyo or Remote in Japan | Full-time | https://www.preferred.jp/en
Preferred Networks is an AI company based in Tokyo working across the stack, from AI chips and computing infrastructure to LLMs and products. You may already know us indirectly if you've used software we've built, such as Optuna or CuPy (or Chainer, back in the day).
We are designing in-house chips (MN-Core series: https://mn-core.com/) and training LLMs (PLaMo series: https://huggingface.co/pfnet), and our team is actively hiring for two roles related to these endeavors:
- MN-Core LLM Serving Engine Engineer: Build software infrastructure to serve LLMs using our upcoming inference accelerator, MN-Core L1000. [Apply here: https://open.talentio.com/r/1/c/preferred/pages/121580]
- LLM Inference Optimization Engineer: Improve the inference engine powering our API service and maintain PLaMo implementations in open source projects such as vLLM. [Apply here: https://open.talentio.com/r/1/c/preferred/pages/119173]
Both roles require relocation to Japan. We are happy to provide visa and relocation support.
AI Risk Insights
No major risk signals detected.
Recent News
No recent updates
Data Source
Content parsed by LLM from Hacker News raw data. Confidence: HIGH