Tractor driving imposes considerable physical and mental stress on the operator. If the operator's seat is not comfortable, work performance may suffer and the risk of accidents increases. An optimal tractor seat design may be achieved by integrating anthropometric data with the other technical features of the design. This paper reviews the existing information on tractor seat design that considers anthropometric and biomechanical factors and gives an approach for seat design based on anthropometric data. The anthropometric dimensions, i.e. popliteal height sitting (5th percentile), hip breadth sitting (95th percentile), buttock popliteal length (5th percentile), interscye breadth (5th and 95th percentiles) and sitting acromion height (5th percentile) of agricultural workers need to be taken into consideration for the design of seat height, seat pan width, seat pan length, seat backrest width and seat backrest height, respectively, of a tractor. The seat dimensions recommended for the tractor operator's comfort, based on anthropometric data of 5434 Indian male agricultural workers, were as follows: seat height of 380 mm, seat pan width of 420–450 mm, seat backrest width of 380–400 mm (bottom) and 270–290 mm (top), seat pan length of 370 ± 10 mm, seat pan tilt of 5–7° backward and seat backrest height of 350 mm.
Relevance to industry
The approach presented in this paper for tractor seat design based on anthropometric considerations will help tractor seat designers develop and introduce seats suited to the requirements of the user population. This will not only enhance the comfort of tractor operators but may also help reduce their occupational health problems.
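The percentile-based design rule described above (size for the 5th percentile where a smaller dimension must still be reached, for the 95th where a larger user must still fit) can be sketched as follows. The sample data here are randomly generated stand-ins, not the survey of 5434 Indian agricultural workers cited in the abstract:

```python
import random
import statistics

random.seed(0)
# Hypothetical anthropometric samples in mm (assumed distributions for
# illustration only; a real design would use the cited survey data).
popliteal_height = [random.gauss(410, 25) for _ in range(1000)]
hip_breadth = [random.gauss(330, 30) for _ in range(1000)]

def percentile(data, p):
    """Linearly interpolated p-th percentile of a sample."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Seat height must be reachable by short users: 5th percentile popliteal height.
seat_height = percentile(popliteal_height, 5)
# Seat pan width must accommodate broad users: 95th percentile hip breadth.
seat_pan_width = percentile(hip_breadth, 95)
```

The same pattern extends to buttock–popliteal length (5th percentile → seat pan length) and the other dimensions listed in the abstract.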
Data quality became significant with the emergence of data warehouse systems. While accuracy is an intrinsic aspect of data quality, validity presents a wider perspective, one that is more representational and contextual in nature. In this article we present a different perspective on data collection and collation. We focus on faults experienced in data sets and present validity as a function of allied parameters such as completeness, usability, availability and timeliness for determining data quality. We also analyze the applicability of these metrics and modify them to suit IoT applications. Another major focus of this article is to verify these metrics on an aggregated data set instead of on separate data values. This work focuses on using the different validation parameters to determine the quality of data generated in a pervasive environment. The analysis approach presented is simple and can be employed to test the validity of collected data, isolate faults in the data set and measure the suitability of data before applying analysis algorithms. On analyzing the data quality of the two data sets on the basis of the above-mentioned parameters, we found the validity of data set 1 to be 75%, while that of data set 2 was only 67%. The performance of the availability and data-freshness metrics was analyzed graphically: data set 1 showed better data freshness, while data set 2 showed better availability. The usability obtained for data set 2 was 86%, higher than the 69% obtained for data set 1. Thus, this work presents methods that can be leveraged for estimating data quality, which can benefit the many IoT-based industries that are essentially data-centric and whose decisions depend upon the validity of their data.
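The idea of treating validity as a function of allied parameters can be sketched as below. The aggregation (a simple weighted mean) and the fixed scores for usability, availability and timeliness are assumptions for illustration, not the authors' actual formulas:

```python
# Completeness: fraction of expected fields that actually carry a value
# across an aggregated set of readings (not a single data value).
def completeness(records, fields):
    filled = sum(1 for r in records for f in fields if r.get(f) is not None)
    return filled / (len(records) * len(fields))

# Validity as a function of allied parameters; equal weights assumed here.
def validity(metrics, weights=None):
    weights = weights or {k: 1 / len(metrics) for k in metrics}
    return sum(metrics[k] * weights[k] for k in metrics)

readings = [  # toy IoT sensor readings; the None marks a fault in the set
    {"temp": 21.4, "ts": 1}, {"temp": None, "ts": 2}, {"temp": 22.0, "ts": 3},
]
m = {"completeness": completeness(readings, ["temp", "ts"]),
     "usability": 0.86, "availability": 0.9, "timeliness": 0.8}  # assumed scores
score = validity(m)
```

Evaluating such metrics over the aggregated set, rather than per value, is what lets the approach flag faults before downstream analysis is run.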
From information security point of view, an enterprise is considered as a collection of assets and their interrelationships.
These interrelationships may be built into the enterprise information infrastructure, as in the case of connection of hardware
elements in network architecture, or in the installation of software or in the information assets. As a result, access to
one element may enable access to another if they are connected. An enterprise may specify, as policies, conditions on the access of certain
assets in certain modes (read, write, etc.). The interconnection of assets, along with the specified policies, may lead
to managerial vulnerabilities in the enterprise information system. These vulnerabilities, if exploited by threats, may cause
disruption to the normal functioning of information systems. This paper presents a formal methodology for detection of managerial
vulnerabilities of, and threats to, enterprise information systems in linear time.
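The abstract does not give the methodology itself, but the core idea of access propagating over interconnected assets, checkable in linear time, can be sketched as a graph reachability problem. The asset model and policy representation below are assumptions for illustration:

```python
from collections import deque

# Assumed model: assets form a directed graph where an edge u -> v means
# access to u enables access to v. A policy forbids access to some assets;
# any forbidden asset reachable from an attacker's entry points is flagged.
# BFS makes the check linear in the number of assets plus interconnections.
def vulnerabilities(graph, entry_points, forbidden):
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen & forbidden  # reachable assets that policy forbids

assets = {"workstation": ["file_server"], "file_server": ["payroll_db"]}
flagged = vulnerabilities(assets, {"workstation"}, {"payroll_db"})
# flagged == {"payroll_db"}: indirect access violates the stated policy
```

Here the violation is managerial rather than technical: each individual link may be legitimate, yet their composition contradicts the policy.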
Intelligent computing systems (ICS) and knowledge-based systems (KBS) have been widely used in the detection and interpretation of EMG (electromyography) based diseases. Heuristic-based detection methods of EMG parameters for particular diseases have also been reported in the literature, but little effort has been made to combine the rule-based reasoning (RBR) and case-based reasoning (CBR) of KBS with the artificial neural networks (ANN) of ICS. Integrating the methods of KBS and ICS improves the computational and reasoning efficiency of the problem-solving strategy. We have developed an integrated model that uses CBR and RBR for generating cases and an ANN for matching cases, for the interpretation and diagnosis of neuromuscular diseases. We have hierarchically structured the neuromuscular diseases in terms of their physio-psycho (muscular, cognitive and psychological) parameters and EMG-based parameters (amplitude, duration, phase, etc.). A cumulative confidence factor is computed at each node, from the lowest to the highest level of the hierarchical structure, in the process of diagnosing the neuromuscular diseases. The diseases considered are Duchenne muscular dystrophy, polymyositis, endocrine myopathy, metabolic myopathy, neuropathy, poliomyelitis and myasthenia gravis. The basic objective of this work is to develop an integrated model of RBR, CBR and ANN in which RBR is used to hierarchically correlate the signs and symptoms of a disease and to compute the cumulative confidence factor (CCF) of the diseases, CBR is used to diagnose the neuromuscular diseases and to find the relative importance of the signs and symptoms of one disease with respect to others, and the ANN is used for the matching process in CBR.
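A cumulative confidence factor over a hierarchy could be computed along the lines below. The tree layout, evidence values, and the MYCIN-style combination rule (`a + b - a*b` for positive evidence) are assumptions for illustration; the paper's own CCF formula is not reproduced here:

```python
# Assumed combination rule for two positive confidence factors in [0, 1].
def combine(a, b):
    return a + b - a * b

# Propagate confidence bottom-up: a node's CCF accumulates its own
# evidence with the CCFs of its child parameters in the hierarchy.
def ccf(node, evidence, tree):
    cf = evidence.get(node, 0.0)
    for child in tree.get(node, []):
        cf = combine(cf, ccf(child, evidence, tree))
    return cf

# Hypothetical fragment of a disease hierarchy: one disease node with
# an EMG-based parameter and a muscular sign as children.
tree = {"myopathy": ["emg_amplitude", "muscle_weakness"]}
evidence = {"emg_amplitude": 0.6, "muscle_weakness": 0.5}
score = ccf("myopathy", evidence, tree)  # 0.6 + 0.5 - 0.3 = 0.8
```

Computing the CCF from the lowest level upward, as the abstract describes, means each disease's confidence reflects all of its supporting signs, symptoms and EMG parameters.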
Multiple data streams coming out of a complex system form the observable state of the system. The streams may correspond to various sensors attached to the system or to the outcomes of internal processes. Such stream data may consist of multiple attributes and may differ in their frequency of generation and observation. The streams may have dependencies among themselves. One has to rely on such data streams to monitor the health of the system or to take any corrective measure. Predicting the value of certain stream data is an important task that can help one take decisions and act accordingly. In this work, a simple but generic visualization of a complex system is presented, and a linear regression-based dynamic model for short-term prediction is then proposed. The model is based on the past history of the attributes of multiple streams, as suggested by domain experts, but it automatically determines the meaningful attributes and reformulates the model. The model is also re-computed if the prediction error exceeds the allowable tolerance. All this makes the model dynamic. Experiments were carried out with stock market data streams to predict the close value well in advance. It is observed that, in terms of quality of prediction and the performance metric, the proposed model is quite effective.
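The dynamic aspect, namely a regression model that is refitted whenever its prediction error exceeds a tolerance, can be sketched as below. The sliding window size, the tolerance, and the use of a single lagged value as predictor are illustrative assumptions, not the paper's exact formulation:

```python
# Least-squares fit of y = a + b*x for a single predictor.
def fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

def predict_stream(values, window=5, tol=1.0):
    """One-step-ahead prediction; refit the model when error exceeds tol."""
    preds = []
    a, b = fit(values[:window], values[1:window + 1])
    for t in range(window, len(values) - 1):
        p = a + b * values[t]
        preds.append(p)
        if abs(p - values[t + 1]) > tol:  # model has drifted: recompute
            a, b = fit(values[t - window + 1:t + 1],
                       values[t - window + 2:t + 2])
    return preds
```

On a linearly trending stream the window fit recovers the trend exactly; on noisy data such as stock close values, the tolerance check triggers the periodic re-computations that make the model dynamic.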
The fractional advection–dispersion equation (FADE) is a generalization of the classical ADE in which the first-order time derivative and the first- and second-order space derivatives are replaced by Caputo derivatives of orders 0 < α ≤ 1, 0 < β ≤ 1 and 1 < γ ≤ 2, respectively. We use the Caputo definition to avoid (i) mass balance error, (ii) hyper-singular improper integrals, (iii) non-zero derivatives of constants, and (iv) fractional derivatives involved in the initial condition, which is often ill-defined. We present an analytic algorithm to solve the FADE based on the homotopy analysis method (HAM), which has the advantage over the variational iteration method (VIM) and the homotopy perturbation method (HPM) of controlling the region and rate of convergence of the solution series via the auxiliary parameter ℏ. We find that the proposed method converges to the numerical/exact solution of the ADE as the fractional orders α, β, γ tend to their integer values. Numerical examples are given to illustrate the proposed algorithm. Example 5 describes the intermediate process between advection and dispersion via the Caputo fractional derivative.
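For reference, the Caputo derivative of order α (with n − 1 < α ≤ n) and a generic FADE of the type described can be written as follows; the velocity v and dispersion coefficient D are standard symbols assumed here, since the abstract does not fix a notation:

```latex
% Caputo fractional derivative of order \alpha, with n-1 < \alpha \le n:
D^{\alpha}_{t} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}
  \int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau .

% A fractional advection--dispersion equation of the form discussed:
\frac{\partial^{\alpha} u}{\partial t^{\alpha}}
  \;=\; -\,v\,\frac{\partial^{\beta} u}{\partial x^{\beta}}
  \;+\; D\,\frac{\partial^{\gamma} u}{\partial x^{\gamma}},
\qquad 0 < \alpha \le 1,\;\; 0 < \beta \le 1,\;\; 1 < \gamma \le 2 .
```

At α = β = 1 and γ = 2 the Caputo derivatives reduce to the classical derivatives and the classical ADE is recovered, which is the limit the convergence claim above refers to.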
A 33.7 MHz heavy-ion radio frequency quadrupole (RFQ) linear accelerator has been designed, built, and tested. It is a four-rod-type RFQ designed for the acceleration of 1.38 keV/u ions with q/A ≥ 1/16 to about 29 keV/u. Transmission efficiencies of about 85% and 80% have been measured for the unanalyzed and analyzed beams, respectively, of oxygen (¹⁶O²⁺, ¹⁶O³⁺, ¹⁶O⁴⁺), nitrogen (¹⁴N³⁺, ¹⁴N⁴⁺), and argon (⁴⁰Ar⁴⁺). The system design and measurements, along with the results of the beam acceleration tests, are presented.
Concurrent programs that embed specifications of synchronization in the bodies of their components are difficult to extend and modify. Small changes in a concurrent program, particularly changes in the interactions among components, may require the re-implementation of a large number of components; even the specifications of components cannot be reused easily. This paper presents a concurrent program composition mechanism in which both the specification and the implementation of computations and interactions are completely separated. Separating specifications from implementations facilitates the extension and modification of programs by allowing one to change the implementations of computations and interactions independently, and it also supports their reusability. The paper also describes the design and implementation of a concurrent object-oriented programming language based on this model, including a compiler for the language, and reports on the execution behavior of programs written in the language.
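The language described in the paper is not shown here, but the separation it advocates can be loosely illustrated in Python: the synchronization (interaction) policy is specified outside the component body, so swapping the interaction touches no computation code. The decorator and names below are an analogy of my own, not the paper's mechanism:

```python
import threading

# Interaction specification kept separate from any computation:
# a reusable policy that guards a method with a given lock.
def synchronized(lock):
    def wrap(fn):
        def guarded(*args, **kwargs):
            with lock:
                return fn(*args, **kwargs)
        return guarded
    return wrap

counter_lock = threading.Lock()

class Counter:
    def __init__(self):
        self.value = 0

    @synchronized(counter_lock)  # interaction attached outside the body
    def increment(self):
        self.value += 1          # computation knows nothing about locking

c = Counter()
threads = [threading.Thread(target=c.increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# c.value == 10: all increments applied despite concurrent callers
```

Changing the interaction, say replacing the lock with a reader–writer policy, would alter only the decorator argument, which mirrors the modifiability argument made above.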
Quantitative proteomics can be used for the identification of cancer biomarkers that could be used for early detection, serve as therapeutic targets, or monitor response to treatment. Several quantitative proteomics tools are currently available to study differential expression of proteins in samples ranging from cancer cell lines to tissues to body fluids. 2-DE, which was classically used for proteomic profiling, has been coupled to fluorescence labeling for differential proteomics. Isotope labeling methods such as stable isotope labeling with amino acids in cell culture (SILAC), isotope-coded affinity tagging (ICAT), isobaric tags for relative and absolute quantitation (iTRAQ), and ¹⁸O labeling have all been used in quantitative approaches for identification of cancer biomarkers. In addition, heavy isotope labeled peptides can be used to obtain absolute quantitative data. Most recently, label-free methods for quantitative proteomics, which have the potential of replacing isotope-labeling strategies, are becoming popular. Other emerging technologies such as protein microarrays have the potential for providing additional opportunities for biomarker identification. This review highlights commonly used methods for quantitative proteomic analysis and their advantages and limitations for cancer biomarker analysis.