PS: A few concepts excerpted from the OpenStack documentation, for reference.

Identity Service Concepts

The Identity Service performs the following functions:
    • User management: Tracks users and their permissions.
    • Service catalog: Provides a catalog of available services with their API endpoints.
To understand the Identity Service, you must understand the following concepts:

User    Digital representation of a person, system, or service that uses OpenStack cloud services. The Identity Service validates that incoming requests are made by the user who claims to be making the call. Users have a login and may be assigned tokens to access resources. Users can be directly assigned to a particular tenant and behave as if they are contained in that tenant.

Credentials    Data, known only to a user, that proves who they are. In the Identity Service, examples are: a user name and password, a user name and API key, or an authentication token provided by the Identity Service.

Authentication    The act of confirming the identity of a user. The Identity Service confirms an incoming request by validating a set of credentials supplied by the user. These credentials are initially a user name and password, or a user name and API key. In response to these credentials, the Identity Service issues an authentication token to the user, which the user provides in subsequent requests. (A request sketch follows these definitions.)

Token    An arbitrary bit of text that is used to access resources. Each token has a scope that describes which resources are accessible with it. A token may be revoked at any time and is valid for a finite duration. While the Identity Service supports token-based authentication in this release, the intention is for it to support additional protocols in the future. The intent is for it to be an integration service foremost, and not aspire to be a full-fledged identity store and management solution.

Tenant    A container used to group or isolate resources and/or identity objects. Depending on the service operator, a tenant may map to a customer, account, organization, or project.

Service    An OpenStack service, such as Compute (Nova), Object Storage (Swift), or Image Service (Glance). Provides one or more endpoints through which users can access resources and perform operations.

Endpoint    A network-accessible address, usually described by a URL, from which you access a service. If using an extension for templates, you can create an endpoint template, which represents the templates of all the consumable services that are available across the regions.

Role    A personality that a user assumes that enables them to perform a specific set of operations. A role includes a set of rights and privileges; a user assuming that role inherits those rights and privileges. In the Identity Service, a token that is issued to a user includes the list of roles that user has. Services being called by that user determine how they interpret the set of roles the user has and which operations or resources each role grants access to.
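To make the exchange concrete, here is a minimal sketch of the credentials-to-token flow, written with the Python requests library against a Keystone v2.0 endpoint (the API generation this documentation describes). The URL, user name, password, and tenant name are placeholder assumptions, not values from the documentation:

    import requests

    KEYSTONE_URL = "http://controller:5000/v2.0"   # hypothetical endpoint

    # Credentials: a user name and password, scoped to a tenant.
    payload = {
        "auth": {
            "tenantName": "demo",                  # hypothetical tenant
            "passwordCredentials": {
                "username": "demo",                # hypothetical user
                "password": "secret",
            },
        }
    }

    # Authentication: exchange the credentials for a token.
    resp = requests.post(KEYSTONE_URL + "/tokens", json=payload)
    resp.raise_for_status()
    access = resp.json()["access"]

    token = access["token"]["id"]        # sent as X-Auth-Token in later requests
    roles = access["user"]["roles"]      # the role list carried with the token
    catalog = access["serviceCatalog"]   # endpoints for the other services
    print(token, [r["name"] for r in roles])

Note how one response carries several of the concepts defined above: the token itself, the user's role list, and a service catalog of endpoints.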

The source documentation shows a diagram of the Identity Service process flow at this point; the image is not reproduced here.

Image Service Concepts

The Image Service includes the following components:

    • glance-api: Accepts Image API calls for image discovery, retrieval, and storage.
    • glance-registry: Stores, processes, and retrieves metadata about images. Metadata includes size, type, and so on.
    • Database: Stores image metadata. You can choose your database depending on your preference. Most deployments use MySQL or SQLite.
    • Storage repository for image files: The Object Storage Service is the image repository. However, you can configure a different repository. The Image Service supports normal file systems, RADOS block devices, Amazon S3, and HTTP. Some of these choices are limited to read-only usage.

A number of periodic processes run on the Image Service to support caching. Replication services ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters, and reapers.

As shown in the section called “Conceptual architecture” [2], the Image Service is central to the overall IaaS picture. It accepts API requests for images or image metadata from end users or Compute components, and can store its disk files in the Object Storage Service.
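As a hedged sketch of what glance-api accepts, the following lists image metadata over the v1 Images API with the Python requests library; the endpoint URL and token are placeholder assumptions:

    import requests

    GLANCE_URL = "http://controller:9292"   # hypothetical glance-api endpoint
    TOKEN = "..."                           # a token from the Identity Service

    # Image discovery: ask glance-api for the images it knows about.
    resp = requests.get(GLANCE_URL + "/v1/images",
                        headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()

    for image in resp.json()["images"]:
        # This metadata (name, size, format) is what glance-registry stores.
        print(image["name"], image.get("size"), image.get("disk_format"))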

 

Compute Service Concepts

The Compute Service is a cloud computing fabric controller, the main part of an IaaS system. It can be used to host and manage cloud computing systems. The main modules are implemented in Python.

Compute interacts with the Identity Service for authentication, the Image Service for images, and the Dashboard for the user and administrative interface. Access to images is limited by project and by user; quotas are limited per project (for example, the number of instances). The Compute Service is designed to scale horizontally on standard hardware, and can download images to launch instances as required.

    The Compute Service is made up of the following functional areas and their underlying components:

API

    • nova-api service: Accepts and responds to end-user Compute API calls. Supports the OpenStack Compute API, the Amazon EC2 API, and a special Admin API for privileged users to perform administrative actions. It also initiates most orchestration activities, such as running an instance, and enforces some policies.
    • nova-api-metadata service: Accepts metadata requests from instances. The nova-api-metadata service is generally used only when you run in multi-host mode with nova-network installations. For details, see "Metadata service" in the Cloud Administrator Guide.
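As an illustration of the kind of end-user call nova-api accepts, here is a hedged sketch that lists a tenant's instances over the OpenStack Compute v2 API; the endpoint URL, tenant ID, and token are placeholder assumptions:

    import requests

    NOVA_URL = "http://controller:8774/v2/TENANT_ID"   # hypothetical endpoint
    TOKEN = "..."                                      # a token from the Identity Service

    # List the tenant's instances; nova-api answers this directly.
    resp = requests.get(NOVA_URL + "/servers",
                        headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()

    for server in resp.json()["servers"]:
        print(server["id"], server["name"])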


Compute core

    • nova-compute process: A worker daemon that creates and terminates virtual machine instances through hypervisor APIs. For example, XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware, and so on. The process by which it does so is fairly complex, but the basics are simple: accept actions from the queue and perform a series of system commands, like launching a KVM instance, to carry them out while updating state in the database.
    • nova-scheduler process: Conceptually the simplest piece of code in Compute. Takes a virtual machine instance request from the queue and determines on which compute server host it should run.
    • nova-conductor module: Mediates interactions between nova-compute and the database. It aims to eliminate direct accesses to the cloud database made by nova-compute. The nova-conductor module scales horizontally. However, do not deploy it on any nodes where nova-compute runs. For more information, see "A new Nova service: nova-conductor".

 

Networking for VMs

    • nova-network worker daemon: Similar to nova-compute, it accepts networking tasks from the queue and performs tasks to manipulate the network, such as setting up bridging interfaces or changing iptables rules. This functionality is being migrated to OpenStack Networking, which is a separate OpenStack service.
    • nova-dhcpbridge script: Tracks IP address leases and records them in the database by using the dnsmasq dhcp-script facility. This functionality is being migrated to OpenStack Networking, which provides a different script.

Console interface

    • nova-consoleauth daemon: Authorizes tokens for users that console proxies provide. See nova-novncproxy and nova-xvpvncproxy. This service must be running for console proxies to work. Many proxies of either type can be run against a single nova-consoleauth service in a cluster configuration. For information, see "About nova-consoleauth".
    • nova-novncproxy daemon: Provides a proxy for accessing running instances through a VNC connection. Supports browser-based noVNC clients.
    • nova-console daemon: Deprecated as of Grizzly; nova-xvpvncproxy is used instead.
    • nova-xvpvncproxy daemon: A proxy for accessing running instances through a VNC connection. Supports a Java client specifically designed for OpenStack.
    • nova-cert daemon: Manages x509 certificates.


Image management (EC2 scenario)

    • nova-objectstore daemon: Provides an S3 interface for registering images with the Image Service. Mainly used for installations that must support euca2ools. The euca2ools tools talk to nova-objectstore in S3 language, and nova-objectstore translates S3 requests into Image Service requests.
    • euca2ools client: A set of command-line interpreter commands for managing cloud resources. Though not an OpenStack module, you can configure nova-api to support this EC2 interface. For more information, see the Eucalyptus 2.0 documentation.

Command Line Interpreter/Interfaces

    • nova client: Enables users to submit commands as a tenant administrator or end user.
    • nova-manage client: Enables cloud administrators to submit commands.


Other components

    • The queue: A central hub for passing messages between daemons. Usually implemented with RabbitMQ, but could be any AMQP message queue, such as Apache Qpid or ZeroMQ.


    • SQL database: Stores most build-time and runtime states for a cloud infrastructure, including instance types that are available for use, instances in use, available networks, and projects. Theoretically, OpenStack Compute can support any database that SQLAlchemy supports, but the only databases widely used are SQLite3 databases (appropriate only for test and development work), MySQL, and PostgreSQL.
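As a minimal sketch of the SQLAlchemy idea, and assuming nothing about Nova's actual schema (the table and columns below are illustrative only), state such as "instances in use" can be modeled like this:

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class Instance(Base):                    # hypothetical, not Nova's model
        __tablename__ = "instances"
        id = Column(Integer, primary_key=True)
        name = Column(String(255))
        state = Column(String(32))

    engine = create_engine("sqlite:///:memory:")   # MySQL/PostgreSQL in production
    Base.metadata.create_all(engine)

    session = sessionmaker(bind=engine)()
    session.add(Instance(name="demo-1", state="building"))
    session.commit()
    print(session.query(Instance).filter_by(state="building").count())

Swapping the engine URL between SQLite, MySQL, and PostgreSQL is exactly the portability the paragraph above describes.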

The Compute Service interacts with other OpenStack services: the Identity Service for authentication, the Image Service for images, and the OpenStack Dashboard for a web interface.

 

Block Storage Service Concepts

The Block Storage Service enables management of volumes, volume snapshots, and volume types. It includes the following components:

    • cinder-api: Accepts API requests and routes them to cinder-volume for action.
    • cinder-volume: Responds to requests to read from and write to the Block Storage database to maintain state, interacting with other processes (like cinder-scheduler) through a message queue and directly with the hardware or software that provides the block storage. It can interact with a variety of storage providers through a driver architecture.
    • cinder-scheduler daemon: Like nova-scheduler, picks the optimal block storage provider node on which to create the volume.
    • Messaging queue: Routes information between the Block Storage Service processes.


The Block Storage Service interacts with Compute to provide volumes for instances.
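As a hedged sketch of a request cinder-api accepts, the following creates a 1 GB volume through the Block Storage v1 REST API; the endpoint URL, tenant ID, and token are placeholder assumptions:

    import requests

    CINDER_URL = "http://controller:8776/v1/TENANT_ID"   # hypothetical endpoint
    TOKEN = "..."

    # cinder-api receives this and routes it to cinder-volume for action.
    resp = requests.post(CINDER_URL + "/volumes",
                         headers={"X-Auth-Token": TOKEN},
                         json={"volume": {"size": 1, "display_name": "demo-vol"}})
    resp.raise_for_status()

    volume = resp.json()["volume"]
    print(volume["id"], volume["status"])   # e.g. "creating" while the scheduler places it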

 

Object Storage Service Concepts

The Object Storage Service is a highly scalable and durable multi-tenant object storage system for large amounts of unstructured data at low cost through a RESTful HTTP API. It includes the following components:

    • Proxy servers (swift-proxy-server): Accept Object Storage API and raw HTTP requests to upload files, modify metadata, and create containers. They also serve file or container listings to web browsers. To improve performance, the proxy server can use an optional cache, usually deployed with memcache.
    • Account servers (swift-account-server): Manage accounts defined with the Object Storage Service.
    • Container servers (swift-container-server): Manage a mapping of containers, or folders, within the Object Storage Service.
    • Object servers (swift-object-server): Manage actual objects, such as files, on the storage nodes.
    • A number of periodic processes: Perform housekeeping tasks on the large data store. The replication services ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters, and reapers.

Configurable WSGI middleware, which is usually the Identity Service, handles authentication.
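A hedged sketch of what the proxy server accepts: creating a container and uploading an object through the Object Storage API. The storage URL (normally returned by the Identity Service at authentication) and token are placeholder assumptions:

    import requests

    STORAGE_URL = "http://controller:8080/v1/AUTH_demo"   # hypothetical account URL
    TOKEN = "..."
    headers = {"X-Auth-Token": TOKEN}

    # Create a container, then put an object into it.
    requests.put(STORAGE_URL + "/photos", headers=headers).raise_for_status()
    requests.put(STORAGE_URL + "/photos/hello.txt",
                 headers=headers, data=b"hello swift").raise_for_status()

    # List the container's contents back through the same proxy.
    print(requests.get(STORAGE_URL + "/photos", headers=headers).text)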

 

Neutron Concepts

Like Nova Networking, Neutron manages software-defined networking for your OpenStack installation. Unlike Nova Networking, however, Neutron can be configured for advanced virtual network topologies, such as per-tenant private networks.


Neutron has the following object abstractions: networks, subnets, and routers. Each has functionality that mimics its physical counterpart: networks contain subnets, and routers route traffic between different subnets and networks.
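To make the abstractions concrete, here is a hedged sketch that creates a network and a subnet inside it through the Neutron v2.0 REST API; the endpoint URL and token are placeholder assumptions:

    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"   # hypothetical endpoint
    TOKEN = "..."
    headers = {"X-Auth-Token": TOKEN}

    # A network is a container for subnets...
    net = requests.post(NEUTRON_URL + "/networks", headers=headers,
                        json={"network": {"name": "demo-net"}}).json()["network"]

    # ...and a subnet gives it an addressable IP range.
    subnet = requests.post(NEUTRON_URL + "/subnets", headers=headers,
                           json={"subnet": {"network_id": net["id"],
                                            "ip_version": 4,
                                            "cidr": "10.0.0.0/24"}}).json()["subnet"]
    print(net["id"], subnet["id"])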


Any given Neutron setup has at least one external network. This network, unlike the other networks, is not merely a virtually defined network. Instead, it represents a view into a slice of the external network that is accessible outside the OpenStack installation. IP addresses on the Neutron external network are accessible by anybody physically on the outside network. Because this network merely represents a slice of the outside network, DHCP is disabled on it.

In addition to external networks, any Neutron setup has one or more internal networks. These software-defined networks connect directly to the VMs. Only the VMs on a given internal network, or those on subnets connected through interfaces to a similar router, can directly access VMs connected to that network.


For the outside network to access VMs, and vice versa, routers between the networks are needed. Each router has one gateway that is connected to a network and many interfaces that are connected to subnets. As with a physical router, machines on a subnet can access machines on other subnets connected to the same router, and machines can access the outside network through the router's gateway.


Additionally, you can allocate IP addresses on external networks to ports on an internal network. Whenever something is connected to a subnet, that connection is called a port. You can associate external-network IP addresses with ports to VMs, so that entities on the outside network can access VMs.


Neutron also supports security groups. Security groups enable administrators to define firewall rules in groups. A VM can belong to one or more security groups, and Neutron applies the rules in those security groups to block or unblock ports, port ranges, or traffic types for that VM.


Each plug-in that Neutron uses has its own concepts. While not vital to operating Neutron, understanding these concepts can help you set it up. All Neutron installations use a core plug-in and a security group plug-in (or just the No-Op security group plug-in). Additionally, Firewall-as-a-Service (FWaaS) and Load-Balancing-as-a-Service (LBaaS) plug-ins are available.

 

Open vSwitch Concepts

The Open vSwitch plug-in is one of the most popular core plug-ins. Open vSwitch configurations consist of bridges and ports. Ports represent connections to other things, such as physical interfaces and patch cables. Packets from any given port on a bridge are shared with all other ports on that bridge. Bridges can be connected through Open vSwitch virtual patch cables or through Linux virtual Ethernet cables (veth). Additionally, bridges appear as network interfaces to Linux, so you can assign IP addresses to them.
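The bridge/port model can be poked at directly with the standard ovs-vsctl tool; the following Python sketch simply shells out to it, and the bridge and interface names are illustrative assumptions:

    import subprocess

    def ovs(*args):
        subprocess.check_call(["ovs-vsctl"] + list(args))

    ovs("add-br", "br-demo")            # the bridge appears as a Linux interface
    ovs("add-port", "br-demo", "eth1")  # attach a physical interface as a port

    ports = subprocess.check_output(["ovs-vsctl", "list-ports", "br-demo"])
    print(ports.decode())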


In Neutron, the integration bridge, called br-int, connects directly to the VMs and associated services. The external bridge, called br-ex, connects to the external network. Finally, the VLAN configuration of the Open vSwitch plug-in uses bridges associated with each physical network.


In addition to defining bridges, Open vSwitch has OpenFlow, which enables you to define networking flow rules. Certain configurations use these rules to transfer packets between VLANs.


Finally, some configurations of Open vSwitch use network namespaces, a Linux feature that groups adapters into namespaces invisible to one another, which allows the same network node to manage multiple Neutron routers.

With Open vSwitch, you can use two different technologies to create the virtual networks: GRE or VLANs.

Generic Routing Encapsulation (GRE) is the technology used in many VPNs. It wraps IP packets to create entirely new packets with different routing information. When the new packet reaches its destination, it is unwrapped, and the underlying packet is routed. To use GRE with Open vSwitch, Neutron creates GRE tunnels. These tunnels are ports on a bridge and enable bridges on different systems to act as though they were one bridge, which allows the compute and network nodes to act as one for the purposes of routing.


Virtual LANs (VLANs), on the other hand, use a special modification to the Ethernet header: a 4-byte VLAN tag whose VLAN ID ranges from 1 to 4094 (IDs 0 and 4095 are reserved: 0 marks a priority-only tag, and 4095, made of all ones, is reserved by the standard). Special NICs, switches, and routers know how to interpret the VLAN tags, as does Open vSwitch. Packets tagged for one VLAN are shared only with other devices configured to be on that VLAN, even though all devices are on the same physical network.
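As a worked example of the header math (not OpenStack code): the 4-byte 802.1Q tag is a 16-bit TPID of 0x8100 followed by 3 priority bits, 1 DEI bit, and a 12-bit VLAN ID, which is why usable IDs run from 1 to 4094:

    import struct

    def build_vlan_tag(vlan_id, priority=0, dei=0):
        assert 1 <= vlan_id <= 4094   # 0 and 4095 are reserved, as noted above
        tci = (priority << 13) | (dei << 12) | vlan_id
        return struct.pack("!HH", 0x8100, tci)

    tag = build_vlan_tag(100)
    tpid, tci = struct.unpack("!HH", tag)
    print(hex(tpid), tci & 0x0FFF)    # -> 0x8100 100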

The most common security group driver used with Open vSwitch is the hybrid iptables/Open vSwitch plug-in. It uses a combination of iptables and OpenFlow rules. The iptables tool creates firewalls and sets up NATs on Linux, using a complex system of rules and chains to accommodate the complex rules required by Neutron security groups.

 

GRE tunneling is simpler to set up, because it does not require any special configuration of physical network hardware. However, it is its own type of protocol, and is therefore harder to filter if you are concerned about filtering traffic on the physical network. Additionally, the configuration given here does not use namespaces, meaning you can have only one router per network node (this can be overcome by enabling namespacing, and potentially veth, as described in the section detailing how to use VLANs with OVS).


On the other hand, VLAN tagging modifies the Ethernet header of packets, so packets can be filtered on the physical network by normal methods. However, not all NICs handle the increased packet size of VLAN-tagged packets well, and you might need additional configuration on physical network hardware to ensure that your Neutron VLANs do not interfere with any other VLANs on your network, and that no physical hardware between nodes strips VLAN tags.

 

Orchestration Service Concepts

The Orchestration service provides template-based orchestration for describing a cloud application; it runs OpenStack API calls to generate running cloud applications. The software integrates other core components of OpenStack into a one-file template system. The templates let you create most OpenStack resource types, such as instances, floating IPs, volumes, security groups, and users. The service also provides more advanced functionality, such as instance high availability, instance auto-scaling, and nested stacks. This tight integration with the other OpenStack core projects could give all of them a larger user base.
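As a hedged sketch of the template idea, here is a minimal one-resource template posted to heat-api to create a stack; the endpoint URL, tenant ID, token, image, and flavor names are placeholder assumptions:

    import requests

    HEAT_URL = "http://controller:8004/v1/TENANT_ID"   # hypothetical endpoint
    TOKEN = "..."

    # A one-file template describing a single instance.
    template = {
        "heat_template_version": "2013-05-23",
        "resources": {
            "server": {
                "type": "OS::Nova::Server",
                "properties": {"image": "cirros", "flavor": "m1.tiny"},
            }
        },
    }

    resp = requests.post(HEAT_URL + "/stacks",
                         headers={"X-Auth-Token": TOKEN},
                         json={"stack_name": "demo-stack", "template": template})
    resp.raise_for_status()
    print(resp.json()["stack"]["id"])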


The service enables deployers to integrate with the Orchestration service directly or through custom plug-ins.


The Orchestration service consists of the following components:

    • heat tool: A CLI that communicates with heat-api to run AWS CloudFormation APIs. End developers can also use the Orchestration REST API directly.
    • heat-api component: Provides an OpenStack-native REST API that processes API requests by sending them to heat-engine over RPC.
    • heat-api-cfn component: Provides an AWS Query API that is compatible with AWS CloudFormation and processes API requests by sending them to heat-engine over RPC.
    • heat-engine: Orchestrates the launching of templates and provides events back to the API consumer.

 

Metering/Monitoring Concepts

The Metering Service is designed to:

    • Efficiently collect metering data about CPU and network costs.
    • Collect data by monitoring notifications sent from services or by polling the infrastructure.
    • Configure the type of data collected to meet various operating requirements.
    • Provide access to and insertion of metering data through the REST API.
    • Expand the framework to collect custom usage data through additional plug-ins.
    • Produce signed metering messages that cannot be repudiated.


The system consists of the following basic components:

    • A compute agent (ceilometer-agent-compute): Runs on each compute node and polls for resource utilization statistics. There may be other types of agents in the future, but for now the focus is on the compute agent.
    • A central agent (ceilometer-agent-central): Runs on a central management server to poll for resource utilization statistics for resources not tied to instances or compute nodes.
    • A collector (ceilometer-collector): Runs on one or more central management servers to monitor the message queues (for notifications and for metering data coming from the agents). Notification messages are processed, turned into metering messages, and sent back out onto the message bus using the appropriate topic. Metering messages are written to the data store without modification.
    • An alarm notifier (ceilometer-alarm-notifier): Runs on one or more central management servers to allow setting alarms based on threshold evaluation for a collection of samples.
    • A data store: A database capable of handling concurrent writes (from one or more collector instances) and reads (from the API server).
    • An API server (ceilometer-api): Runs on one or more central management servers to provide access to the data in the data store.


These services communicate by using the standard OpenStack messaging bus. Only the collector and API server have access to the data store.
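As a hedged sketch of reading the collected data back, the following queries ceilometer-api's v2 REST interface for the known meters; the endpoint URL and token are placeholder assumptions:

    import requests

    CEILOMETER_URL = "http://controller:8777/v2"   # hypothetical endpoint
    TOKEN = "..."

    resp = requests.get(CEILOMETER_URL + "/meters",
                        headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()

    for meter in resp.json():   # the API server reads these from the data store
        print(meter["name"], meter["unit"], meter["resource_id"])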
