Where does the Linux kernel create process 1?
1. Start with the complete code of rest_init (kernel version 5.2, init/main.c; it's fine to skim it):
noinline void __ref rest_init(void)
{
    struct task_struct *tsk;
    int pid;

    rcu_scheduler_starting();
    /*
     * We need to spawn init first so that it obtains pid 1, however
     * the init task will end up wanting to create kthreads, which, if
     * we schedule it before we create kthreadd, will OOPS.
     */
    pid = kernel_thread(kernel_init, NULL, CLONE_FS);
    /*
     * Pin init on the boot CPU. Task migration is not properly working
     * until sched_init_smp() has been run. It will set the allowed
     * CPUs for init to the non isolated CPUs.
     */
    rcu_read_lock();
    tsk = find_task_by_pid_ns(pid, &init_pid_ns);
    set_cpus_allowed_ptr(tsk, cpumask_of(smp_processor_id()));
    rcu_read_unlock();

    numa_default_policy();
    pid = kernel_thread(kthreadd, NULL, CLONE_FS | CLONE_FILES);
    rcu_read_lock();
    kthreadd_task = find_task_by_pid_ns(pid, &init_pid_ns);
    rcu_read_unlock();

    /*
     * Enable might_sleep() and smp_processor_id() checks.
     * They cannot be enabled earlier because with CONFIG_PREEMPT=y
     * kernel_thread() would trigger might_sleep() splats. With
     * CONFIG_PREEMPT_VOLUNTARY=y the init task might have scheduled
     * already, but it's stuck on the kthreadd_done completion.
     */
    system_state = SYSTEM_SCHEDULING;

    complete(&kthreadd_done);

    /*
     * The boot idle thread must execute schedule()
     * at least once to get things moving:
     */
    schedule_preempt_disabled();
    /* Call into cpu_idle with preempt disabled */
    cpu_startup_entry(CPUHP_ONLINE);
}
2. The code above calls kernel_thread twice, so which call creates process 1?
The first call, pid = kernel_thread(kernel_init, NULL, CLONE_FS);, creates process 1, the init process.
The second call, pid = kernel_thread(kthreadd, NULL, CLONE_FS | CLONE_FILES);, creates process 2, kthreadd.
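Why does the first call get PID 1? rest_init still runs in the boot task (PID 0), and boot-time PIDs are handed out sequentially, so the first fork yields PID 1 and the second PID 2. The ordering problem described in the comment is solved with a completion: the very first thing kernel_init_freeable does is wait_for_completion(&kthreadd_done), so init stays blocked until rest_init has created kthreadd and calls complete(&kthreadd_done). For reference, kernel_thread itself is only a thin wrapper around the common fork path; in 5.2 (kernel/fork.c) it reads roughly:

pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
{
    /*
     * CLONE_VM: the child shares the kernel address space;
     * CLONE_UNTRACED: it cannot be ptraced. Instead of returning
     * to userspace, the child starts executing fn(arg).
     */
    return _do_fork(flags | CLONE_VM | CLONE_UNTRACED, (unsigned long)fn,
                    (unsigned long)arg, NULL, NULL, 0);
}

You can confirm the result on any running system: in ps -ef, PID 1 is init (or systemd) and PID 2 is [kthreadd], both with PPID 0.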
3. What exactly does process 1 do? Is it an infinite loop?
3.1 First, the complete code (init/main.c):
static int __ref kernel_init(void *unused)
{
    int ret;

    kernel_init_freeable();
    /* need to finish all async __init code before freeing the memory */
    async_synchronize_full();
    ftrace_free_init_mem();
    free_initmem();
    mark_readonly();

    /*
     * Kernel mappings are now finalized - update the userspace page-table
     * to finalize PTI.
     */
    pti_finalize();

    system_state = SYSTEM_RUNNING;
    numa_default_policy();

    rcu_end_inkernel_boot();

    if (ramdisk_execute_command) {
        ret = run_init_process(ramdisk_execute_command);
        if (!ret)
            return 0;
        pr_err("Failed to execute %s (error %d)\n",
               ramdisk_execute_command, ret);
    }

    /*
     * We try each of these until one succeeds.
     *
     * The Bourne shell can be used instead of init if we are
     * trying to recover a really broken machine.
     */
    if (execute_command) {
        ret = run_init_process(execute_command);
        if (!ret)
            return 0;
        panic("Requested init %s failed (error %d).",
              execute_command, ret);
    }
    if (!try_to_run_init_process("/sbin/init") ||
        !try_to_run_init_process("/etc/init") ||
        !try_to_run_init_process("/bin/init") ||
        !try_to_run_init_process("/bin/sh"))
        return 0;

    panic("No working init found. Try passing init= option to kernel. "
          "See Linux Documentation/admin-guide/init.rst for guidance.");
}
3.2 A brief reading of the code
After finishing boot-time setup, kernel_init hunts for an init program to execute: first ramdisk_execute_command (which defaults to /init, i.e. the init inside the initramfs), then execute_command (set by the init= kernel parameter), and finally the conventional locations /sbin/init, /etc/init, /bin/init and /bin/sh, running the first one that works. If every attempt fails, it calls panic.
3.3 No, it is not an infinite loop.
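The reason it is not a loop: run_init_process bottoms out in do_execve. In 5.2 (init/main.c) it reads roughly as follows; if the execve succeeds, kernel_init returns 0 and the thread drops to user mode as the new init program, never re-entering this code:

static int run_init_process(const char *init_filename)
{
    argv_init[0] = init_filename;
    /*
     * Replace this thread's (so far empty) userspace image with the
     * init program; on success the thread returns to user mode as
     * PID 1 running that program.
     */
    return do_execve(getname_kernel(init_filename),
        (const char __user *const __user *)argv_init,
        (const char __user *const __user *)envp_init);
}

try_to_run_init_process is a thin wrapper around this that additionally logs an error unless the failure was a plain -ENOENT (file not found).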
4. And what does process 2, kthreadd, do? Is it an infinite loop?
4.1 As usual, the complete code (kernel/kthread.c):
int kthreadd(void *unused)
{
    struct task_struct *tsk = current;

    /* Setup a clean context for our children to inherit. */
    set_task_comm(tsk, "kthreadd");
    ignore_signals(tsk);
    set_cpus_allowed_ptr(tsk, cpu_all_mask);
    set_mems_allowed(node_states[N_MEMORY]);

    current->flags |= PF_NOFREEZE;
    cgroup_init_kthreadd();

    for (;;) {
        set_current_state(TASK_INTERRUPTIBLE);
        if (list_empty(&kthread_create_list))
            schedule();
        __set_current_state(TASK_RUNNING);

        spin_lock(&kthread_create_lock);
        while (!list_empty(&kthread_create_list)) {
            struct kthread_create_info *create;

            create = list_entry(kthread_create_list.next,
                                struct kthread_create_info, list);
            list_del_init(&create->list);
            spin_unlock(&kthread_create_lock);

            create_kthread(create);

            spin_lock(&kthread_create_lock);
        }
        spin_unlock(&kthread_create_lock);
    }

    return 0;
}
4.2 Yes, this one is an infinite loop.
4.3 So what does process 2 actually do?
It sleeps until the global list kthread_create_list is non-empty, then repeatedly takes a kthread_create_info node off the list and passes it to create_kthread, which forks the requested kernel thread. In this way kthreadd serves as the factory, and parent, of all other kernel threads.
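The producer side of kthread_create_list is the kthread_create()/kthread_run() API: any kernel code that needs a thread enqueues a node and wakes kthreadd, which does the actual forking on its behalf. Below is a minimal, illustrative module sketch of a client of that API; the names demo-worker, worker_fn, demo_init and demo_exit are invented for the example:

#include <linux/delay.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/module.h>

static struct task_struct *worker;

static int worker_fn(void *data)
{
    /* Run until someone calls kthread_stop() on us. */
    while (!kthread_should_stop()) {
        pr_info("demo-worker: tick\n");
        ssleep(1);
    }
    return 0;
}

static int __init demo_init(void)
{
    /*
     * kthread_run() queues a kthread_create_info node on
     * kthread_create_list and wakes kthreadd, which calls
     * create_kthread() to fork the thread; the new thread is
     * then woken so worker_fn() starts running.
     */
    worker = kthread_run(worker_fn, NULL, "demo-worker");
    return PTR_ERR_OR_ZERO(worker);
}

static void __exit demo_exit(void)
{
    kthread_stop(worker);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Because kthreadd performs the fork, the new thread shows up in ps as a child of PID 2, like every other kernel thread.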
Source: https://www.cnblogs.com/dakewei/p/11557716.html