Systemd is an init system and system manager that is widely adopted. In systemd, the target of most actions are "units", which are resources that systemd knows how to manage. So grab a cup of coffee, sit down, and read what's coming.
How to Determine and Fix Boot Issues in Linux
System State Overview The commands so far have been useful for managing single services, but they are not very helpful for exploring the current state of the system.
There are a number of systemctl commands that provide this information. Listing Current Units. To see a list of all of the active units that systemd knows about, we can use the list-units command. The output has the following columns: UNIT, the systemd unit name; LOAD, whether the unit's configuration has been parsed by systemd (the configuration of loaded units is kept in memory); ACTIVE, a summary state about whether the unit is active, which is usually a fairly basic way to tell if the unit has started successfully or not; and SUB, a lower-level state that indicates more detailed information about the unit. This often varies by unit type, state, and the actual method in which the unit runs. This display is actually the default behavior of systemctl when called without additional commands, so you will see the same thing if you call systemctl with no arguments. To see all of the units that systemd has loaded or attempted to load, regardless of whether they are currently active, you can use the --all flag. Some units become inactive after running, and some units that systemd attempted to load may have not been found on disk.
You can use other flags to filter these results. You will have to keep the --all flag so that systemctl allows non-active units to be displayed: We can tell systemctl to only display units of the type we are interested in. For example, to see only active service units, we can use: Since systemd will only read units that it thinks it needs, this will not necessarily include all of the available units on the system. To see every available unit file within the systemd paths, including those that systemd has not attempted to load, you can use the list-unit-files command instead: Since systemd has not necessarily read all of the unit definitions in this view, it only presents information about the files themselves.
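As an illustrative sketch of the commands just described (unit names, states, and output will differ from system to system), an exploration session might look like this:

```shell
$ systemctl list-units --type=service

  UNIT            LOAD   ACTIVE SUB     DESCRIPTION
  atd.service     loaded active running Job spooling tools
  dbus.service    loaded active running D-Bus System Message Bus
  ...

$ systemctl list-units --all            # include inactive units
$ systemctl list-unit-files             # every unit file on disk
```

The first command shows only what systemd has actually loaded; the last shows everything available in the systemd paths.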
The output has two columns: the unit file name and its state. The state will usually be "enabled", "disabled", "static", or "masked". In this context, static means that the unit file does not contain an "install" section, which is used to enable a unit. As such, these units cannot be enabled. Usually, this means that the unit performs a one-off action or is used only as a dependency of another unit and should not be run by itself.
We will cover what "masked" means momentarily. Unit Management So far, we have been working with services and displaying information about the unit and unit files that systemd knows about. However, we can find out more specific information about units using some additional commands.
Displaying a Unit File. To display the unit file that systemd has loaded into its system, you can use the cat command (added in a relatively recent systemd version). For instance, to see the unit file of the atd scheduling daemon, we could type systemctl cat atd.service. This can be important if you have modified unit files recently or if you are overriding certain options in a unit file fragment (we will cover this later). Displaying Dependencies. To see a unit's dependency tree, you can use the list-dependencies command. Dependencies, in this context, include those units that are either required by or wanted by the units above it.
By default, the recursive dependencies are only displayed for target units. To recursively list all dependencies, include the --all flag. To show reverse dependencies (units that depend on the specified unit), you can add the --reverse flag to the command. Other flags that are useful are the --before and --after flags, which can be used to show units that depend on the specified unit starting before and after themselves, respectively.
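For example (sshd.service here is purely illustrative; any unit name works):

```shell
$ systemctl list-dependencies sshd.service
$ systemctl list-dependencies --all sshd.service      # recurse into all units
$ systemctl list-dependencies --reverse sshd.service  # units that depend on it
```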
Checking Unit Properties To see the low-level properties of a unit, you can use the show command. If you want to display a single property, you can pass the -p flag with the property name.
For instance, to see the conflicts that the sshd.service unit has, we can type systemctl show sshd.service -p Conflicts. Sometimes you may wish to mark a unit as completely unstartable, automatically or manually, by linking it to /dev/null. This is called masking the unit, and is possible with the mask command. If you check list-unit-files, you will see the service is now listed as masked. If you attempt to start the service, you will see a message telling you that the unit is masked. To unmask a unit, making it available for use again, simply use the unmask command. Editing Unit Files. While the specific format for unit files is outside of the scope of this tutorial, systemctl provides built-in mechanisms for editing and modifying unit files if you need to make adjustments.
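A sketch of the masking workflow described above (nginx.service is used purely as an illustration):

```shell
$ sudo systemctl mask nginx.service
$ systemctl list-unit-files | grep nginx
nginx.service          masked
$ sudo systemctl start nginx.service
Failed to start nginx.service: Unit nginx.service is masked.
$ sudo systemctl unmask nginx.service
```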
This functionality was added in a later version of systemd. The edit command, by default, will open a unit file snippet for the unit in question. For instance, for the nginx.service unit, a directory called nginx.service.d will be created within /etc/systemd/system. Within this directory, a snippet will be created called override.conf. When the unit is loaded, systemd will, in memory, merge the override snippet with the full unit file.
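An override snippet of the kind just described might contain, for example (the directives here are illustrative, not required):

```ini
# /etc/systemd/system/nginx.service.d/override.conf
# Directives here are merged with, and take precedence over,
# the packaged unit file.
[Service]
Restart=on-failure
RestartSec=5
```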
The snippet's directives will take precedence over those found in the original unit file. If you wish to edit the full unit file instead of creating a snippet, you can pass the --full flag. To remove any additions you have made, either delete the unit's .d configuration directory or its modified unit file from /etc/systemd/system; for instance, to remove a snippet, we could delete the nginx.service.d directory. Afterwards, reload the systemd process so that it no longer attempts to reference these files. You can do this by typing systemctl daemon-reload. Another capability of systemd is grouping units with targets. Like other units, the files that define targets can be identified by their suffix, which in this case is .target.
Targets do not do much themselves, but are instead used to group other units together. This can be used in order to bring the system to certain states, much like other init systems use runlevels. In some ways, Upstart's event-based design certainly is a simplification; however, I would argue that this simplification is actually detrimental. Also, because the dependency information has never been encoded, it is not available at runtime, effectively meaning that an administrator who tries to figure out why something happened, i.e. why a service was started in response to some event, has no chance of finding that out.
Furthermore, the event logic turns all dependencies around, from the feet onto their head. Instead of minimizing the amount of work (which is something that a good init system should focus on, as pointed out in the beginning of this blog story), it actually maximizes the amount of work to do during operations. Or in other words, instead of having a clear goal and only doing the things it really needs to do to reach the goal, it does one step, and then after finishing it, it does all steps that possibly could follow it.
Or to put it simpler: it's exactly the other way round. A good init system should start only what is needed, and that on-demand, either lazily or parallelized and in advance. However, it should not start more than necessary, and particularly not everything installed that could use that service.
Finally, I fail to see the actual usefulness of the event logic. It appears to me that most events that are exposed in Upstart actually are not punctual in nature, but have duration: A device is plugged in, is available, and is plugged out again. A mount point is in the process of being mounted, is fully mounted, or is being unmounted. A power plug is plugged in, the system runs on AC, and the power plug is pulled. Only a minority of the events an init system or process supervisor should handle are actually punctual, most of them are tuples of start, condition, and stop.
This information is again not available in Upstart, because it focuses on singular events, and ignores durable dependencies. However, to me this appears mostly as an attempt to fix a system whose core design is flawed. Besides that, Upstart does OK for babysitting daemons, even though some choices might be questionable (see above), and there are certainly a lot of missed opportunities (see above, too).
There are other init systems besides sysvinit, Upstart and launchd. Most of them offer little substantially more than Upstart or sysvinit. The most interesting other contender is Solaris SMF, which supports proper dependencies between services. However, in many ways it is overly complex and, let's say, a bit academic with its excessive use of XML and new terminology for known things. It is also closely bound to Solaris-specific features such as the contract system.
Putting it All Together: So, go and refill your coffee mug again. It's going to be worth it. You probably guessed it: Again, here's the code. And here's a quick rundown of its features, and the rationale behind them: It implements all of the features pointed out above and a few more. It is based around the notion of units.
Units have a name and a type. Since their configuration is usually loaded directly from the file system, these unit names are actually file names. There are several kinds of units: For compatibility with SysV we not only support our own configuration files for services, but also are able to read classic SysV init scripts, in particular we parse the LSB header, if it exists.
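The LSB header systemd parses from a SysV init script looks roughly like the following (the service name and dependencies here are illustrative):

```shell
### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $network $syslog
# Required-Stop:     $network $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example daemon
### END INIT INFO
```

The Required-Start/Required-Stop lines are what can be translated into native dependency and ordering information.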
We also support classic FIFOs as transport. Each socket unit has a matching service unit, that is started if the first connection comes in on the socket or FIFO. If a device is marked for this via udev rules, it will be exposed as a device unit in systemd.
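A socket unit of the kind just described might look like this; the name echo.socket is hypothetical, and by the matching-name convention it would activate a unit called echo.service:

```ini
# echo.socket -- listen here, and start the matching
# echo.service on the first incoming connection
[Socket]
ListenStream=7777

[Install]
WantedBy=sockets.target
```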
Properties set with udev can be used as a configuration source to set dependencies for device units. Each automount unit has a matching mount unit, which is started (i.e. mounted) as soon as the automount point is accessed. Snapshot units primarily have two intended use cases: to allow the user to temporarily enter a specific state and easily return to the previous one afterwards, and to ease support for system suspending, since many services cannot correctly deal with suspend and it is often better to shut them down before suspending and restore them afterwards. All these units can have dependencies between each other, both positive and negative (i.e. requirements and conflicts).
Mounts get an implicit dependency on the device they are mounted from. Mounts also get implicit dependencies on mounts that are their prefixes (i.e. a mount for /home/lennart implicitly gets a dependency on the mount for /home). A short list of other features: for each process that is spawned, you may control the environment, resource limits, working directory, and more. If connected to a TTY for input, systemd will make sure a process gets exclusive access, optionally waiting or enforcing it.
Every executed process gets its own cgroup (currently by default in the debug subsystem, since that subsystem is not otherwise used and does little more than the most basic process grouping), and it is very easy to configure systemd to place services in cgroups that have been configured externally, for example via the libcgroups utilities. The native configuration files use a syntax that closely follows the well-known .desktop files. It is a simple syntax for which parsers exist already in many software frameworks.
Also, this allows us to rely on existing tools for i18n for service descriptions, and similar. Administrators and developers don't need to learn a new syntax. As mentioned, we provide compatibility with SysV init scripts.
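A minimal native service file in the .desktop-style key=value syntax described above might look like this (the daemon name and path are illustrative):

```ini
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/sbin/mydaemon
Type=simple

[Install]
WantedBy=multi-user.target
```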
These init scripts are simply considered a different source of configuration, hence an easy upgrade path to proper systemd services is available. Optionally we can read classic PID files for services to identify the main pid of a daemon. Note that we make use of the dependency information from the LSB init script headers, and translate those into native systemd dependencies.
Upstart is unable to harvest and make use of that information. Boot-up on a plain Upstart system with mostly LSB SysV init scripts will hence not be parallelized; a similar system running systemd, however, will. In fact, for Upstart all SysV scripts together make one job that is executed; they are not treated individually, in contrast to systemd, where SysV init scripts are just another source of configuration and are all treated and controlled individually, much like any other native systemd service.
If the same unit is configured in multiple configuration sources (e.g. as a native systemd unit file and as a SysV init script), the native configuration always takes precedence.
The interface part can even be inherited by dependency expressions, i.e. it is possible to encode that a service dhcpcd@eth0.service pulls in avahi-autoipd@eth0.service, while leaving the interface string wildcarded. For socket activation we support full compatibility with the traditional inetd modes, as well as a very simple mode that tries to mimic launchd socket activation and is recommended for new services. The inetd mode only allows passing one socket to the started daemon, while the native mode supports passing arbitrary numbers of file descriptors. We also support one instance per connection, as well as one instance for all connections modes.
In the former mode we name the cgroup the daemon will be started in after the connection parameters, and utilize the templating logic mentioned above for this. This provides a nice way for the administrator to identify the various instances of a daemon and control their runtime individually. The native socket passing mode is very easily implementable in applications: if $LISTEN_FDS is set, it contains the number of sockets passed, and the daemon will find them sorted as listed in the .service file, starting from file descriptor 3. Even though this socket passing logic is very simple to implement in daemons, we will provide a BSD-licensed reference implementation that shows how to do this.
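As a minimal sketch of the environment-variable protocol just described, here is a hedged Python illustration (the function name listen_fds and the simulation in the main block are mine; the LISTEN_PID check, which guards against the variables being inherited by unrelated children, follows the protocol):

```python
import os

SD_LISTEN_FDS_START = 3  # first passed file descriptor, per the protocol

def listen_fds():
    """Return the file descriptors passed by the service manager, if any."""
    # The manager sets LISTEN_PID to the daemon's own PID and LISTEN_FDS
    # to the number of descriptors passed, starting at fd 3.
    try:
        pid = int(os.environ.get("LISTEN_PID", ""))
        n = int(os.environ.get("LISTEN_FDS", ""))
    except ValueError:
        return []
    if pid != os.getpid() or n < 1:
        return []
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + n))

if __name__ == "__main__":
    # Simulate activation with two passed sockets.
    os.environ["LISTEN_PID"] = str(os.getpid())
    os.environ["LISTEN_FDS"] = "2"
    print(listen_fds())  # -> [3, 4]
```

A real daemon would then wrap the returned descriptors in socket objects rather than creating and binding its own.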
We have ported a couple of existing daemons to this new scheme. We also provide compatibility with /dev/initctl. This compatibility is in fact implemented with a FIFO-activated service, which simply translates these legacy requests to D-Bus requests. Effectively this means the old shutdown, poweroff and similar commands from Upstart and sysvinit continue to work with systemd. We also provide compatibility with utmp and wtmp.
Possibly even to an extent that is far more than healthy, given how crufty utmp and wtmp are. Units can also be ordered against each other with Before and After dependencies; this ordering is completely orthogonal to Requires and Wants, which express a positive requirement dependency, either mandatory, or optional.
Then, there is Conflicts, which expresses a negative requirement dependency. Finally, there are three further, less used dependency types. When a unit is requested to start or stop, it is added, together with all its dependencies, to a temporary transaction. Then, we will verify if the transaction is consistent, i.e. whether the ordering of all jobs is free of cycles. If it is not, systemd will try to fix it up, and removes non-essential jobs from the transaction that might break the loop. Also, systemd tries to suppress non-essential jobs in the transaction that would stop a running service.
Non-essential jobs are those which the original request did not directly include but which were pulled in by Wants-type dependencies. Finally, we check whether the jobs of the transaction contradict jobs that have already been queued, and optionally the transaction is aborted then. If all worked out and the transaction is consistent and minimized in its impact, it is merged with all already outstanding jobs and added to the run queue.
Effectively this means that before executing a requested operation, we will verify that it makes sense, fixing it if possible, and only failing if it really cannot work.
This data can be used to cross-link daemons with their data in abrtd, auditd and syslog. Think of a UI that will highlight crashed daemons for you, and allows you to easily navigate to the respective UIs for syslog, abrt, and auditd that will show the data generated from and for this daemon on a specific run.
We support reexecution of the init process itself at any time. The daemon state is serialized before the reexecution and deserialized afterwards. That way we provide a simple way to facilitate init system upgrades as well as handover from an initrd daemon to the final daemon. Open sockets and autofs mounts are properly serialized away, so that they stay connectible all the time, in a way that clients will not even notice that the init system reexecuted itself.
Also, the fact that a big part of the service state is encoded anyway in the cgroup virtual file system would even allow us to resume execution without access to the serialization data.
The reexecution code paths are actually mostly the same as the init system configuration reloading code paths, which guarantees that reexecution (which is probably triggered less often) gets similar testing as reloading (which is probably more common).
Starting the work of removing shell scripts from the boot process, we have recoded part of the basic system setup in C and moved it directly into systemd. Among that is mounting of the API file systems (i.e. virtual file systems such as /proc, /sys and /dev).
Server state is introspectable and controllable via D-Bus. This is not complete yet but quite extensive. While we want to emphasize socket-based and bus-name-based activation, and we hence support dependencies between sockets and services, we also support traditional inter-service dependencies.
We support multiple ways in which such a service can signal its readiness. There's an interactive mode which asks for confirmation each time a process is spawned by systemd. You may enable it by passing systemd.confirm_spawn=1 on the kernel command line.
With the systemd.default= kernel command line parameter you can specify which unit systemd should start on boot-up; normally you'd specify something like multi-user.target here. There's also a minimal UI that allows you to start, stop and introspect services; it's far from complete but useful as a debugging tool. It's written in Vala (yay!). Note that systemd uses many Linux-specific features and does not limit itself to POSIX. That unlocks a lot of functionality a system that is designed for portability to other operating systems cannot provide. Status. All the features listed above are already implemented. Right now systemd can already be used as a drop-in replacement for Upstart and sysvinit (at least as long as there aren't too many native Upstart services yet).
Thankfully most distributions don't carry too many native Upstart services yet. However, testing has been minimal, our version number is currently at an impressive 0. Expect breakage if you run this in its current state. That said, overall it should be quite stable, and some of us already boot our normal development systems with systemd (in contrast to VMs only). YMMV, especially if you try this on distributions we developers don't use.
Where is This Going? The feature set described above is certainly already comprehensive. However, we have a few more things on our plate. I don't really like speaking too much about big plans, but here's a short overview of the direction in which we will be pushing this. We want to add at least two more unit types: swap shall be controlled the same way as mounts, and timer units shall provide cron-like functionality. The problem set of a session manager and an init system are very similar: using the same code for both hence suggests itself. Apple recognized that and does just that with launchd.
And so should we: I should probably note that all three of these features are already partially available in the current code base, but not complete yet. For example, you can already run systemd just fine as a normal user, and it will detect that it is run that way; support for this mode has been available since the very beginning, and is in the very core. It is also exceptionally useful for debugging! This works fine even without having the system otherwise converted to systemd for booting.
However, there are some things we probably should fix in the kernel and elsewhere before finishing work on this: None of these issues are really essential for systemd, but they'd certainly improve things.
You Want to See This in Action? Currently, there are no tarball releases, but it should be straightforward to check out the code from our repository. In addition, to have something to start with, here's a tarball with unit configuration files that allows an otherwise unmodified Fedora 13 system to work with systemd. We have no RPMs to offer you for now.
An easier way is to download this Fedora 13 qemu image, which has been prepared for systemd. In the grub menu you can select whether you want to boot the system with Upstart or systemd. Note that this system is only minimally modified. Service information is read exclusively from the existing SysV init scripts.
Hence it will not take advantage of the full socket and bus-based parallelization pointed out above, however it will interpret the parallelization hints from the LSB headers, and hence boots faster than the Upstart system, which in Fedora does not employ any parallelization at the moment.
The image is configured to output debug information on the serial console, as well as writing it to the kernel log buffer, which you may access with dmesg.
You might want to run qemu configured with a virtual serial terminal. All passwords are set to systemd. Even simpler than downloading and booting the qemu image is looking at pretty screen-shots. Since an init system usually is well hidden beneath the user interface, some shots of systemadm and ps must do: That's systemadm showing all loaded units, with more detailed information on one of the getty instances. That's an excerpt of the output of ps xaf -eo pid,user,args,cgroup showing how neatly the processes are sorted into the cgroup of their service.
The fourth column is the cgroup; the debug: prefix is shown because we currently use the debug cgroup controller, as mentioned earlier, and this is only temporary. Note that both of these screenshots show an only minimally modified Fedora 13 Live CD installation, where services are exclusively loaded from the existing SysV init scripts. Hence, this does not use socket or bus activation for any existing service. Sorry, no bootcharts or hard data on start-up times for the moment.
We'll publish that as soon as we have fully parallelized all services from the default Fedora install. Then, we'll welcome you to benchmark the systemd approach, and provide our own benchmark data as well. Well, presumably everybody will keep bugging me about this, so here are two numbers I'll tell you. However, they are completely unscientific as they are measured for a VM single CPU and by using the stop timer in my watch. Fedora 13 booting up with Upstart takes 27s, with systemd we reach 24s from grub to gdm, same system, same settings, shorter value of two bootups, one immediately following the other.
Note however that this shows nothing more than the speedup effect reached by using the LSB dependency information parsed from the init script headers for parallelization. Socket or bus based activation was not utilized for this, and hence these numbers are unsuitable to assess the ideas pointed out above. Also, systemd was set to debug verbosity levels on a serial console. So again, this benchmark data has barely any value. Writing Daemons. An ideal daemon for use with systemd does a few things differently than things were traditionally done.
Later on, we will publish a longer guide explaining and suggesting how to write a daemon for use with systemd. Basically, things get simpler for daemon developers: we ask daemon writers not to fork or even double fork in their processes, but to run their event loop from the initial process systemd starts for you. Also, don't call setsid. Don't drop user privileges in the daemon itself; leave this to systemd and configure it in systemd service configuration files.
There are exceptions here. For example, for some daemons there are good reasons to drop privileges inside the daemon code, after an initialization phase that requires elevated privileges.
Don't write PID files. Grab a name on the bus. You may rely on systemd for logging; you are welcome to log whatever you need to log to stderr. Let systemd create and watch sockets for you, so that socket activation works.
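A daemon can follow the last recommendation while remaining usable on systems without systemd by falling back to creating its own socket when none was passed. A hedged Python sketch; the helper name and the fallback policy are illustrative, not part of systemd:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first passed file descriptor, per the protocol

def activation_socket(port=0):
    """Return the listening socket passed by the service manager if present,
    otherwise create, bind and listen on one ourselves (non-systemd fallback)."""
    if (os.environ.get("LISTEN_PID") == str(os.getpid())
            and int(os.environ.get("LISTEN_FDS", "0") or "0") >= 1):
        # Adopt the first passed descriptor instead of creating a socket.
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", port))  # port 0 lets the kernel pick a free port
    sock.listen(5)
    return sock

if __name__ == "__main__":
    srv = activation_socket()
    print("listening on %s:%d" % srv.getsockname())
    srv.close()
```

With this structure the daemon's accept loop is identical in both modes; only the socket's origin differs.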
The list above is very similar to what Apple recommends for daemons compatible with launchd. It should be easy to extend daemons that already support launchd activation to support systemd activation as well. Note that systemd supports daemons not written in this style perfectly well, already for compatibility reasons (launchd has only limited support for that).
As mentioned, this even extends to existing inetd capable daemons which can be used unmodified for socket activation by systemd. So, yes, should systemd prove itself in our experiments and get adopted by the distributions it would make sense to port at least those services that are started by default to use socket or bus-based activation.
We have written proof-of-concept patches, and the porting turned out to be very easy. Also, we can leverage the work that has already been done for launchd, to a certain extent. Moreover, adding support for socket-based activation does not make the service incompatible with non-systemd systems. FAQs. Who's behind this? Well, the current code-base is mostly my work, Lennart Poettering (Red Hat).
However, the design in all its details is the result of close cooperation between Kay Sievers (Novell) and me. Is this a Red Hat project? No, this is my personal side project.
Also, let me emphasize this: