Search results for: generalized inverse matrix approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6373

3493 Mechanical Design and Theoretical Analysis of a Four Fingered Prosthetic Hand Incorporating Embedded SMA Bundle Actuators

Authors: Kevin T. O'Toole, Mark M. McGrath

Abstract:

The psychological and physical trauma associated with the loss of a human limb can severely impact the quality of life of an amputee, rendering even the most basic of tasks very difficult. A prosthetic device can be of great benefit to the amputee in the performance of everyday human tasks. This paper outlines a proposed mechanical design of a 12 degree-of-freedom SMA-actuated artificial hand. It is proposed that the SMA wires be embedded intrinsically within the hand structure, which allows significant flexibility for use either as a prosthetic hand solution or as part of a complete lower-arm prosthetic solution. A modular approach is taken in the design, facilitating ease of manufacture and assembly and, more importantly, allowing the end user to easily replace SMA wires in the event of failure. A biomimetic approach has been taken during the design process, meaning that the artificial hand should replicate the human hand as far as possible with due regard to functional requirements. The proposed design has been exposed to appropriate loading through the use of finite element analysis (FEA) to ensure that it is structurally sound. Theoretical analysis of the mechanical framework was also carried out to establish the limits of the angular displacement and velocity of the fingertip, as well as fingertip force generation. A combination of various polymers and titanium, which are suitably lightweight, is proposed for the manufacture of the design.

Keywords: Hand prosthesis, mechanical design, shape memory alloys, wire bundle actuation.

3492 Study of Mechanical Properties of Glutarylated Jute Fiber Reinforced Epoxy Composites

Authors: V. Manush Nandan, K. Lokdeep, R. Vimal, K. Hari Hara Subramanyan, C. Aswin, V. Logeswaran

Abstract:

Natural fibers have attained a potential market in the composite industry because of the huge environmental impact caused by synthetic fibers. Among the natural fibers, jute fibers are the most abundant plant fibers, produced mainly in countries such as India. Even though there is a good motive to utilize this natural alternative, the strength of natural fiber composites is still a topic of discussion. Recently, many researchers have shown interest in the chemical modification of natural fibers to increase various mechanical and thermal properties. In the present study, jute fibers have been modified chemically using glutaric anhydride at concentrations of 5%, 10%, 20%, and 30%. The glutaric anhydride solution is prepared by dissolving different quantities of glutaric anhydride in benzene and dimethyl sulfoxide using a sodium formate catalyst. The jute fiber mats have been treated by the method of retting at time intervals of 3, 6, 12, 24, and 36 hours. The modification of the treated fibers has been confirmed by infrared spectroscopy. The degree of modification increases with retting time, but longer retting times damaged the fiber structure. The unmodified fibers and the glutarylated fibers at different retting times were reinforced into an epoxy matrix at room temperature. The tensile strength and flexural strength of the composites are analyzed in detail. The composites made with glutarylated fibers showed better mechanical properties than those made with unmodified fibers.

Keywords: Flexural properties, glutarylation, glutaric anhydride, tensile properties.

3491 Tractive Performance Prediction for Intelligent Air-Cushion Track Vehicle: Fuzzy Logic Approach

Authors: Altab Hossain, Ataur Rahman, A. K. M. Mohiuddin, Yulfian Aminanda

Abstract:

A fuzzy logic approach is used in this study to predict the tractive performance, in terms of traction force and motion resistance, of an intelligent air-cushion track vehicle operating on swamp peat. The system controls the intelligent air-cushion system by measuring the vehicle traction force (TF), motion resistance (MR), cushion clearance height (CH) and cushion pressure (CP). A sinkage measuring sensor, magnetic switch, pressure sensor, microcontroller, control valves and battery are incorporated with the fuzzy logic system (FLS) to investigate TF, MR, CH, and CP experimentally. In this study, the tractive performance of the intelligent air-cushion track vehicle predicted by the FLS is compared with the experimentally measured values. The mean relative errors between actual values and FLS predictions for traction force and total motion resistance are found to be 5.58% and 6.78%, respectively. For all parameters, the relative errors of the predicted values are within acceptable limits. The goodness of fit of the FLS predictions for TF and MR is found to be 0.90 and 0.98, respectively.
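The reported error metrics are straightforward to reproduce. Below is a minimal sketch of the mean relative error and goodness-of-fit (R²) calculations, assuming paired arrays of measured and FLS-predicted values; the numbers shown are placeholders, not the study's data.

```python
import numpy as np

def mean_relative_error(actual, predicted):
    """Mean relative error (%) between measured and predicted values."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs(actual - predicted) / np.abs(actual))

def goodness_of_fit(actual, predicted):
    """Coefficient of determination (R^2) of predictions vs. measurements."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative traction-force values (kN); not the paper's measurements.
tf_measured  = [4.1, 4.8, 5.6, 6.2, 7.0]
tf_predicted = [4.3, 4.6, 5.9, 6.0, 7.4]
print(mean_relative_error(tf_measured, tf_predicted))  # percent
print(goodness_of_fit(tf_measured, tf_predicted))
```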

Keywords: Cushion pressure, Fuzzy logic, Motion resistance, Traction force.

3490 Design of an Eddy Current Brake System for the Use of Roller Coasters Based on a Human Factors Engineering Approach

Authors: Adam L. Yanagihara, Yong Seok Park

Abstract:

The goal of this paper is to converge upon a design of a brake system that could be used for a roller coaster found at an amusement park. It was necessary to find what could be deemed a “comfortable” deceleration, so that passengers do not feel as if they are suddenly jerked and pressed against the restraining harnesses. A human factors engineering approach was taken in order to determine this deceleration. Using a previous study that tested the deceleration of transit vehicles, a -0.45 G deceleration was selected as the design requirement to build this system around. An adjustable linear eddy current brake using permanent magnets would be the ideal system to use in order to meet this design requirement. Anthropometric data were then used to determine a realistic weight and length for the roller coaster that the brake was being designed for. The weight and length data were then factored into magnetic brake force equations. These equations were used to determine how the brake system and the brake run layout would be designed. A final design for the brake was determined, and it was found that a total of 12 brakes would be needed, with a maximum braking distance of 53.6 m, in order to stop a roller coaster travelling at its top speed and loaded to maximum capacity. This design is derived from theoretical calculations, but is within the realm of feasibility.
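The brake-run sizing rests on uniform-deceleration kinematics, d = v²/(2a). A minimal sketch follows; the top speed used is a hypothetical placeholder, since the abstract does not report the coaster's actual speed or mass.

```python
G = 9.81              # m/s^2
DECEL = 0.45 * G      # design deceleration magnitude from the transit-vehicle study

def stopping_distance(v0, a=DECEL):
    """d = v0^2 / (2*a): distance to stop from speed v0 (m/s) at constant deceleration."""
    return v0 ** 2 / (2 * a)

v_top = 21.7          # m/s, illustrative top speed only (~78 km/h)
print(f"{stopping_distance(v_top):.1f} m")  # ~53.3 m under these assumptions
```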

Keywords: Eddy current brake, engineering design, human factors engineering.

3489 Damping and Stability Evaluation for the Dynamical Hunting Motion of the Bullet Train Wheel Axle Equipped with Cylindrical Wheel Treads

Authors: Barenten Suciu

Abstract:

Classical matrix calculus and the Routh-Hurwitz stability conditions, applied to the snake-like motion of a conical wheel axle, lead to the conclusion that the hunting mode is inherently unstable and that its natural frequency is a complex number. In order to solve such a complicated vibration model analytically, either the inertia terms were neglected, in the model designated as geometrical, or restrictions on the creep coefficients and yawing diameter were imposed, in the so-called dynamical model. Here, an alternative solution of the hunting mode is proposed, based on the observation that the bullet train wheel axle is equipped with cylindrical wheels. It is argued that for such wheel treads the geometrical hunting is irrelevant, since its natural frequency becomes nil, but the dynamical hunting is significant, since its natural frequency reduces to a real number. Moreover, it is illustrated that the geometrical simplification of the wheel causes the stabilization of the hunting mode, since the characteristic quartic equation derived for conical wheels reduces, for cylindrical wheels, to a quadratic equation with positive coefficients. Quite simple analytical expressions for the damping ratio and natural frequency are obtained, without imposing restrictions on the contact model. Graphs of the time-dependent hunting lateral perturbation, including the maximal and inflexion points, are presented for both critically-damped and over-damped wheel axles.
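The stability claim can be summarized generically: a quadratic characteristic polynomial with positive coefficients always has left-half-plane roots, and it yields closed-form damping parameters. The notation below is a generic sketch, not the paper's own symbols.

```latex
% Characteristic equation for cylindrical treads (the quartic collapses to a quadratic):
\[
  a_2 s^2 + a_1 s + a_0 = 0, \qquad a_0,\, a_1,\, a_2 > 0 .
\]
% In standard second-order form, the natural frequency and damping ratio follow:
\[
  s^2 + 2\zeta\omega_n s + \omega_n^2 = 0, \qquad
  \omega_n = \sqrt{a_0 / a_2}, \qquad
  \zeta = \frac{a_1}{2\sqrt{a_0 a_2}} .
\]
% Routh-Hurwitz for a quadratic: positive coefficients alone guarantee that both
% roots lie in the left half-plane, i.e., the dynamical hunting mode is damped.
```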

Keywords: Bullet train, dynamical hunting, cylindrical wheels, damping, stability, creep, vibration analysis.

3488 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique

Authors: Nishant Shrivastava, D. K. Sehgal

Abstract:

In the finite element technique, nodal stresses are calculated from the displacements at the nodes. In this process, the displacements calculated at the nodes are sufficiently accurate, but the stresses calculated at the nodes are not. The accuracy of stress computation in displacement-based FEM models is therefore a matter of concern, particularly for the computational cost of shape optimization of engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, at which good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the domain of the element, but some points on the boundary of the element also yield stresses that are fairly accurate compared to nodal values. It follows that there exist unique points within the element where the stresses have higher accuracy than at other points. The main aim is therefore to evolve a generalized procedure for determining the optimal stress locations inside the element, as well as on its boundaries, and to verify this with results from numerical experimentation. Results for quadratic 9-noded serendipity elements are presented, and the locations of distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate that the optimal stress locations, in local coordinates, are at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified through numerical experimentation. For this, five engineering problems were identified, and the numerical results of the 9-noded element were compared to those obtained using the same order of 25-noded quadratic Lagrangian elements, which are considered as standard. Root mean square errors were then plotted with respect to various locations within the elements as well as on the boundaries, and conclusions were drawn. After numerical verification, it is noted that in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. Stresses calculated along the boundary segment enclosed by the ±0.577 points are also very good, with very small error; when sampling points move away from these points, the error increases rapidly. Thus, it is established that there are unique points on the boundary of the element where the stresses are accurate, which can be utilized in solving various engineering problems and are also useful in shape optimization.
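The reported value 0.577 matches 1/√3 ≈ 0.5774, the abscissa of the two-point Gauss-Legendre rule, so the optimal locations coincide with the 2×2 Gauss points of the element; this connection to Gauss quadrature is our gloss on the numbers, not a claim quoted from the paper. A minimal check:

```python
import numpy as np

# Two-point Gauss-Legendre abscissae: +/- 1/sqrt(3) ~ +/- 0.5774
pts, wts = np.polynomial.legendre.leggauss(2)
print(pts)  # [-0.57735027  0.57735027]

# Tensor-product 2x2 sampling points of a 9-noded quadrilateral,
# in local (xi, eta) coordinates, plus the element origin:
sampling_points = [(0.0, 0.0)] + [(x, y) for x in pts for y in pts]
print(sampling_points)
```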

Keywords: Finite element, Lagrangian, optimal stress location, serendipity.

3487 Teager-Huang Analysis Applied to Sonar Target Recognition

Authors: J.-C. Cexus, A.O. Boudraa

Abstract:

In this paper, a new approach for target recognition based on the empirical mode decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited for the analysis of nonstationary signals. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called intrinsic mode functions (IMFs), with well defined instantaneous frequency (IF) and instantaneous amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager energy operator (TEO). The IF and IA, useful for describing the time-varying characteristics of the signal, are estimated using the energy separation algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where the sixteen IRs used for training are noise-free and the seven IRs used for the testing phase are corrupted with white Gaussian noise.
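The discrete Teager-Kaiser energy operator at the heart of the second step has a simple three-sample form. A minimal sketch follows, with a pure tone as a sanity check: for x(n) = A·cos(ωn) the operator returns the constant A²sin²(ω), tracking amplitude and frequency jointly.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager-Kaiser energy operator:
    Psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
    x = np.asarray(x, float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

n = np.arange(256)
x = 2.0 * np.cos(0.2 * n)          # A = 2, omega = 0.2 rad/sample
print(teager_energy(x).mean())     # ~0.1579
print((2.0 * np.sin(0.2)) ** 2)    # A^2 * sin(omega)^2, the same value
```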

Keywords: Target recognition, Empirical mode decomposition, Teager-Kaiser energy operator, Feature extraction.

3486 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code

Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader

Abstract:

In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using a special algorithm, the Hamming Code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming Code matrix to be used for EDAC using computer programs. The most effective version generated was Hamming (16, 11, 4), implemented in MATLAB; the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming Codes and Cyclic Redundancy Check (CRC), and discusses its limitations. This version of the Hamming Code guarantees single-bit error correction as well as double-bit error detection. Furthermore, it has proved to be fast, with a checking time of 5.669 nanoseconds; it has a relatively higher code rate and lower bit overhead than the other versions, and can detect a greater percentage of errors per code length than other EDAC schemes with similar capabilities. In conclusion, with proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
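The (16, 11, 4) code is the (15, 11) Hamming code extended with an overall parity bit, which is what gives single-error correction plus double-error detection. Below is a minimal Python sketch of the encoder (the paper's implementation was in MATLAB); the bit layout is the standard positional convention, with parity bits at power-of-two positions.

```python
def hamming_16_11_encode(data11):
    """11 data bits -> 16-bit SECDED codeword (Hamming(15,11) + overall parity)."""
    code = [0] * 16                                      # positions 1..15 used
    data_pos = [i for i in range(1, 16) if i & (i - 1)]  # non-powers-of-two
    for pos, bit in zip(data_pos, data11):
        code[pos] = bit
    for p in (1, 2, 4, 8):   # parity bit p covers positions whose index has bit p set
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    overall = sum(code[1:]) % 2                          # extended parity bit
    return code[1:] + [overall]

word = hamming_16_11_encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
print(word, len(word))   # 16-bit codeword
```

On receipt, recomputing the four parity checks yields a syndrome that points at any single flipped bit, while the overall parity bit distinguishes single from double errors.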

Keywords: Bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset.

3485 Requirements Driven Multiple View Paradigm for Developing Security Architecture

Authors: K. Chandra Sekaran

Abstract:

This paper describes a paradigmatic approach to developing the architecture of secure systems by describing the requirements from four different points of view: those of the owner, the administrator, the user, and the network. Deriving requirements and developing architecture imply jointly eliciting and describing both the problem and the structure of the solution. The viewpoints proposed in this paper are those of the parties we consider to be the major contributors to the design, implementation, usage and maintenance of secure systems. The dramatic growth of Internet technology and of the applications deployed on the World Wide Web has led to a situation where security has become a very important concern in systems development. Many security approaches are currently being used in organizations; in spite of the widespread use of many different security solutions, security remains a problem. It is argued that the approach described in this paper for the development of secure architecture is practical by all means. The models representing these multiple points of view are termed the requirements model (views of the owner and administrator) and the operations model (views of the user and network). In this paper, this multiple view paradigm is explained by first describing the specific requirements and/or characteristics of secure systems (particularly in the domain of networks) and then the secure architecture / system development methodology.

Keywords: Multiple view paradigms, requirements model, operations model, secure system, owner, administrator, user, network.

3484 Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network

Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon

Abstract:

In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops an assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor, based on the spectroscopy approach and a Neural Network (NN). In the experiments, Batavia pineapples were utilized, generating 100 samples. The extracted juice of each sample was used to determine the Soluble Solid Content (SSC), labeling the samples into sweet and unsweet classes. In terms of experimental equipment, the sensor cover was specifically designed to hold the sensor and light source so as to read the reflectance at a 5 mm depth from the pineapple flesh. Using the spectroscopy sensor, data on visible and near-infrared reflectance (Vis-NIR) were collected. The NN was used to classify the pineapple classes. Before the classification step, the preprocessing methods of class balancing, data shuffling, and standardization were applied. The 510 nm and 900 nm reflectance values of the middle parts of the pineapples were used as features for the NN. With a sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. Based on the above, the low-cost compact spectroscopy sensor achieved favorable results in classifying the sweetness of the two classes of pineapples.
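A minimal sketch of such a two-feature sequential classifier is shown below. The layer sizes, optimizer and epochs are illustrative assumptions, since the abstract specifies only a sequential model with ReLU activation, and the data here are random placeholders.

```python
import numpy as np
import tensorflow as tf

# X: standardized reflectance at 510 nm and 900 nm; y: 1 = sweet, 0 = unsweet.
X = np.random.rand(100, 2).astype("float32")   # placeholder data
y = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # binary sweet/unsweet
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=50, batch_size=8, validation_split=0.3, verbose=0)
print(model.evaluate(X, y, verbose=0))
```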

Keywords: Spectroscopy, soluble solid content, pineapple, neural network.

3483 A Case Study on the Numerical-Probability Approach for Deep Excavation Analysis

Authors: Komeil Valipourian

Abstract:

Urban advances and the growing need for developing infrastructure have increased the importance of deep excavations. In this study, after introducing probability analysis as an important issue, an attempt has been made to apply it to the deep excavation project of Bangkok’s Metro as a case study. For this, a numerical probability model has been developed based on the finite difference method and a Monte Carlo sampling approach. The results indicate that disregarding probability in this project would result in an inappropriate design of the retaining structure. Therefore, a probabilistic redesign of the support is proposed and carried out as one application of probability analysis. A 50% reduction in the flexural strength of the structure increases the failure probability by just 8%, within the allowable range, and helps improve economic conditions while maintaining mechanical efficiency. With regard to the lack of efficient design in most deep excavations, an attempt was made, by considering geometrical and geotechnical variability, to develop an optimum practical design standard for deep excavations based on failure probability. On this basis, a practical relationship is presented for estimating the maximum allowable horizontal displacement, which can help improve design conditions without developing the probability analysis.
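A minimal sketch of the Monte Carlo failure-probability idea follows. The distributions, the response surrogate and the displacement limit are illustrative assumptions; in the study itself each sample drives a finite difference analysis of the excavation rather than a closed-form surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Sample uncertain inputs, e.g., soil stiffness and flexural strength:
stiffness = rng.lognormal(mean=np.log(40e3), sigma=0.2, size=n)   # kPa
strength  = rng.normal(loc=300.0, scale=30.0, size=n)             # kN*m

# Surrogate response standing in for the FDM runs (purely illustrative):
displacement = 5e4 / stiffness + 20.0 * np.exp(-strength / 150.0)  # cm

allowable = 4.0   # cm, assumed maximum allowable horizontal displacement
p_failure = np.mean(displacement > allowable)
print(f"Estimated failure probability: {p_failure:.3%}")
```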

Keywords: Numerical probability modeling, deep excavation, allowable maximum displacement, finite difference method, FDM.

3482 Computer-Aided Classification of Liver Lesions Using Contrasting Features Difference

Authors: Hussein Alahmer, Amr Ahmed

Abstract:

Liver cancer is one of the common diseases that cause death. Early detection is important for diagnosis and for reducing mortality. Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. This paper presents an automated CAD system consisting of three stages: first, automatic liver segmentation and lesion detection; second, feature extraction; and finally, classification of liver lesions into benign and malignant using the novel contrasting feature-difference approach. Several types of intensity and texture features are extracted from both the lesion area and its surrounding normal liver tissue. The difference between the features of the two areas is then used as the new lesion descriptor. Machine learning classifiers are trained on the new descriptors to automatically classify liver lesions as benign or malignant. The experimental results show promising improvements. Moreover, the proposed approach can overcome the problems of varying ranges of intensity and texture between patients, demographics, and imaging devices and settings.
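A minimal sketch of the contrasting feature-difference descriptor: compute the same features inside the lesion and on the surrounding normal tissue, and keep their difference. The particular features below (mean, standard deviation, a simple gradient energy) are an illustrative choice, not the paper's full feature set.

```python
import numpy as np

def region_features(pixels):
    """Toy intensity/texture features for one region."""
    pixels = np.asarray(pixels, float)
    return np.array([pixels.mean(),
                     pixels.std(),
                     np.mean(np.abs(np.diff(pixels)))])

def contrast_descriptor(lesion_pixels, surround_pixels):
    """Lesion descriptor = lesion features - surrounding-tissue features."""
    return region_features(lesion_pixels) - region_features(surround_pixels)

lesion   = np.random.normal(90, 12, 400)    # placeholder intensities
surround = np.random.normal(120, 6, 400)
print(contrast_descriptor(lesion, surround))  # input to the trained classifier
```

Because the descriptor is relative, a global intensity shift between patients or scanners cancels out, which is the stated motivation for the approach.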

Keywords: CAD system, difference of features, fuzzy c-means, liver segmentation.

3481 Invasion of Pectinatella magnifica in Freshwater Resources of the Czech Republic

Authors: J. Pazourek, K. Šmejkal, P. Kollár, J. Rajchard, J. Šinko, Z. Balounová, E. Vlková, H. Salmonová

Abstract:

Pectinatella magnifica (Leidy, 1851) is an invasive freshwater animal that lives in colonies. A colony of Pectinatella magnifica (a gelatinous blob) can be up to several feet in diameter, and under favorable conditions it exhibits an extreme growth rate. Recently, European countries along the Elbe, Oder, Danube, Rhine and Vltava rivers have confirmed the invasion of Pectinatella magnifica, including freshwater reservoirs in South Bohemia (Czech Republic). Our project (Czech Science Foundation, GAČR P503/12/0337) is focused on the biology and chemistry of Pectinatella magnifica. We have monitored the organism's occurrence in selected South Bohemian ponds and sandpits over recent years, collecting information about the physical properties of the surrounding water and sampling the colonies for various analyses (classification, maps of secondary metabolites, toxicity tests). Because the gelatinous matrix is, during the colony's lifetime, also a host for algae, bacteria and cyanobacteria (co-habitants), in this contribution we also applied a high performance liquid chromatography (HPLC) method for the determination of potentially present cyanobacterial toxins (microcystin-LR, microcystin-RR, nodularin). Results from the last three years of monitoring show that these toxins are below the limit of detection (LOD), so they do not yet represent a danger. The final goal of our study is to assess the toxicity risks related to freshwater resources invaded by Pectinatella magnifica, and to understand the process of invasion, which may enable its control.

Keywords: Cyanobacteria, freshwater resources, Pectinatella magnifica invasion, toxicity monitoring.

3480 A Modularized Design for Multi-Drivers Off-Road Vehicle Driving-Line and its Performance Assessment

Authors: Yi Jianjun, Sun Yingce, Hu Diqing, Li Chenggang

Abstract:

A modularized design approach can facilitate the modeling of complex systems and support behavior analysis and simulation in an iterative, and thus complex, engineering process by using encapsulated submodels of components and of their interfaces. It can therefore improve design efficiency and simplify the solution of complicated problems. Multi-driver off-road vehicles are comparatively complicated. The driving-line is a core part of a vehicle and contributes significantly to its performance. Multi-driver off-road vehicles have complex driving-lines, so their performance is heavily dependent on the driving-line. A typical off-road vehicle's driving-line system consists of the torque converter, transmission, transfer case and driving-axles, which transfer the power generated by the engine and distribute it effectively to the driving wheels according to the road condition. Based on this main function, this paper puts forward a modularized approach for the design and evaluation of a vehicle's driving-line. It can be used to effectively estimate the performance of the driving-line during the concept design stage. Through appropriate analysis and assessment methods, an optimal design can be reached. This method has been applied to a practical vehicle design; it improves design efficiency and makes it convenient to assess and validate the performance of a vehicle, especially a multi-driver off-road vehicle.

Keywords: Heavy-loaded Off-road Vehicle, Power Driving-line, Modularized Design, Performance Assessment.

3479 Strategic Mine Planning: A SWOT Analysis Applied to KOV Open Pit Mine in the Democratic Republic of Congo

Authors: Patrick May Mukonki

Abstract:

The KOV pit (Kamoto Oliveira Virgule) is located 10 km from Kolwezi town, one of the mineral-rich towns in the Lualaba province of the Democratic Republic of Congo. The KOV pit currently operates under Katanga Mining Limited (KML), a Glencore-Gecamines (a state-owned company) joint venture. Recently, the mine optimization process provided a life of mine of approximately 10 years with nine pushbacks using the Datamine NPV Scheduler software. In previous KOV pit studies, we outlined the impact of the accuracy of geological information on a long-term mine plan for a big copper mine such as the KOV pit. The approach taken discussed three main scenarios and outlined some weaknesses on the geological information side. In this paper, we highlight, as an overview, those weaknesses, strengths and opportunities in a global SWOT analysis. The approach taken here is essentially descriptive in terms of the steps followed to optimize the KOV pit; at every step, we categorized the challenges we faced to reach a better tradeoff between what we called strengths and what we called weaknesses. The same logic is applied to the opportunities and threats. The SWOT analysis conducted in this paper demonstrates that, despite a generally poor ore body definition and very harsh groundwater conditions, there is room for improvement for such a high grade ore body.

Keywords: Mine planning, mine optimization, mine scheduling, SWOT analysis.

3478 Optimal Image Representation for Linear Canonical Transform Multiplexing

Authors: Navdeep Goel, Salvador Gabarda

Abstract:

Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are later encoded and handled to generate chirps, at a target rate of about two chirps per 4 × 4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
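Truncated SVD is one of the evaluated block representations, and a minimal sketch shows the coefficient-versus-error trade-off on a single 4 × 4 block; the pixel values and the chosen rank are illustrative, not taken from the paper.

```python
import numpy as np

def truncated_svd_block(block, r):
    """Best rank-r approximation of a block (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(block)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

block = np.array([[ 52.,  55.,  61.,  66.],
                  [ 70.,  61.,  64.,  73.],
                  [ 63.,  59.,  55.,  90.],
                  [109.,  85.,  69.,  72.]])

approx = truncated_svd_block(block, r=2)
mse = np.mean((block - approx) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)     # the quality metric quoted above
print(f"rank-2 PSNR: {psnr:.1f} dB")
```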

Keywords: Chirp signals, Image multiplexing, Image transformation, Linear canonical transform, Polynomial approximation.

3477 Integrated Modeling Approach for Energy Planning and Climate Change Mitigation Assessment in the State of Florida

Authors: Kuntal Thakkar, Chaouki Ghenai, Ahmed Hachicha

Abstract:

An integrated modeling approach was used in this study for energy planning and climate change mitigation assessment. The main objective was to develop various greenhouse gas (GHG) mitigation scenarios in the energy demand and supply sectors for the state of Florida. The Long-range Energy Alternatives Planning (LEAP) model was used to examine energy alternatives and GHG emissions reduction scenarios for the short and long term (2010-2050). One of the energy analysis and GHG mitigation scenarios was developed by taking into account the potential of the available renewable energy resources for power generation in the state of Florida. This helps to compare and analyze the GHG reduction measures against the “Business As Usual” and “State of Florida Policy” scenarios. Two master scenarios, “Electrification” and “Energy Efficiency and Lifestyle”, were developed through the combination of various mitigation scenarios: technological changes, and energy efficiency and conservation. The results show a net reduction of energy demand and GHG emissions under these two energy scenarios compared to business as usual.

Keywords: Integrated modeling, energy planning, climate change mitigation assessment, greenhouse gas emissions, renewable energy, energy efficiency.

3476 Multidimensional Compromise Optimization for Development Ranking of the Gulf Cooperation Council Countries and Turkey

Authors: C. Ardil

Abstract:

In this research, a multidimensional compromise optimization method is proposed for multidimensional decision making analysis in the development ranking of the Gulf Cooperation Council Countries and Turkey. The proposed approach reconciles the ranking solutions resulting from different multicriteria decision analyses, which can yield different ranking orders for the same ranking problem, consisting of a set of alternatives evaluated against numerous competing criteria, even when they are applied to the same numerical data. The multiobjective optimization decision making problem is considered in three sequential steps. In the first step, five different criteria related to the development ranking are gathered from the research field. In the second step, the identified evaluation criteria are weighted objectively using the standard deviation procedure. In the third step, a country selection problem is illustrated with a numerical example as an application of the proposed multidimensional compromise optimization model. Finally, the multidimensional compromise optimization approach is applied to rank the Gulf Cooperation Council Countries and Turkey.
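A minimal sketch of the objective weighting step: vector-normalize the decision matrix, then weight each criterion in proportion to the standard deviation of its normalized scores. The 4 × 5 matrix is placeholder data, not the countries' actual indicator values.

```python
import numpy as np

# Rows: alternatives (countries); columns: the five development criteria.
X = np.array([[0.71, 0.62, 0.55, 0.80, 0.43],
              [0.55, 0.70, 0.68, 0.62, 0.51],
              [0.63, 0.58, 0.72, 0.55, 0.66],
              [0.48, 0.66, 0.60, 0.70, 0.58]])

R = X / np.linalg.norm(X, axis=0)   # vector normalization per criterion

sigma = R.std(axis=0)               # spread of each criterion's scores
w = sigma / sigma.sum()             # objective weights, summing to one
print(w, w.sum())
```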

Keywords: Standard deviation, performance evaluation, multicriteria decision making, multidimensional compromise optimization, vector normalization, multicriteria analysis, multidimensional decision analysis.

3475 Microservices-Based Provisioning and Control of Network Services for Heterogeneous Networks

Authors: Shameemraj M. Nadaf, Sipra Behera, Hemant K. Rath, Garima Mishra, Raja Mukhopadhyay, Sumanta Patro

Abstract:

Microservices architecture has been widely embraced for the rapid, frequent, and reliable delivery of complex applications. It enables organizations to evolve their technology stack in various domains. Today, the networking domain is flooded with a plethora of devices and software solutions which address different functionalities, ranging from elementary operations, viz., switching, routing, firewalling, etc., to complex analytics- and insight-based intelligent services. In this paper, we present a microservices-based approach for the agile and adaptive delivery of network services over any underlying networking technology. We discuss the life cycle management of each individual microservice and a distributed control approach, with emphasis on dynamic provisioning, management, and orchestration in an automated fashion, which can provide seamless operations in large scale networks. We have validated the system in a lab testbed comprising traditional/legacy and software defined wireless local area networks.

Keywords: Microservices architecture, software defined wireless networks, traditional wireless networks, automation, orchestration, intelligent networks, network analytics, seamless management, single pane control, fine-grain control.

3474 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: J. Madureira, R. Lagido, I. Sousa

Abstract:

Surfing is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims to use a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides, computing the number of waves ridden in a surfing session, the starting time of each wave and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the smartphone's Inertial Measurement Unit (IMU) to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, their start times and durations. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing. In particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.
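A minimal sketch of the GPS-only approach: estimate speed from successive fixes (haversine distance over time) and detect rides with hysteresis thresholds. The threshold values are illustrative assumptions; the abstract does not report the tuned ones.

```python
import numpy as np

def speeds(lat, lon, t):
    """Approximate speed (m/s) between consecutive GPS fixes via haversine."""
    R = 6.371e6
    lat, lon = np.radians(lat), np.radians(lon)
    dlat, dlon = np.diff(lat), np.diff(lon)
    a = np.sin(dlat / 2)**2 + np.cos(lat[:-1]) * np.cos(lat[1:]) * np.sin(dlon / 2)**2
    return 2 * R * np.arcsin(np.sqrt(a)) / np.diff(t)

def wave_rides(v, t, v_on=2.5, v_off=1.5):
    """Hysteresis thresholding: a ride starts above v_on and ends below v_off.
    Returns a list of (start_time, duration) pairs."""
    rides, start = [], None
    for i, vi in enumerate(v):
        if start is None and vi > v_on:
            start = t[i]
        elif start is not None and vi < v_off:
            rides.append((start, t[i] - start))
            start = None
    return rides
```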

Keywords: Inertial Measurement Unit (IMU), Global Positioning System (GPS), smartphone, surfing performance.

3473 Latent Semantic Inference for Agriculture FAQ Retrieval

Authors: Dawei Wang, Rujing Wang, Ying Li, Baozi Wei

Abstract:

An FAQ system helps users find answers to the problems that puzzle them, but research on Chinese FAQ systems is still at the theoretical stage. This paper presents an approach to semantic inference for FAQ mining. To enhance efficiency, a small pool of candidate question-answer pairs is retrieved from the system for the follow-up work, according to the agriculture-domain concepts extracted from the user input. Input queries or questions are converted into four parts: the question word segment (QWS), the verb segment (VS), the agricultural concept segment (CS), and the auxiliary segment (AS). A semantic matching method is presented to estimate the similarity between the semantic segments of the query and those of the questions in the candidate pool. A thesaurus constructed from HowNet, a Chinese knowledge base, is adopted for the word similarity measure in the matcher. The questions are classified into eleven intention categories using predefined question stemming keywords. For FAQ mining, given a query, the question part and the answer part of each FAQ question-answer pair are matched against the input query, respectively. Finally, the probabilities estimated from these two parts are integrated and used to choose the most likely answer for the input query. These approaches are tested on an agriculture FAQ system. Experimental results indicate that the proposed approach outperforms the FAQ-Finder system in agriculture FAQ retrieval.

Keywords: FAQ, Semantic Inference, Ontology.

3472 Self-Adaptive Differential Evolution Based Power Economic Dispatch of Generators with Valve-Point Effects and Multiple Fuel Options

Authors: R.Balamurugan, S.Subramanian

Abstract:

This paper presents the solution of the power economic dispatch (PED) problem for generating units with valve-point effects and multiple fuel options using a Self-Adaptive Differential Evolution (SDE) algorithm. Obtaining the global optimal solution by mathematical approaches is difficult for realistic PED problems in power systems. The Differential Evolution (DE) algorithm has been found to be a powerful evolutionary algorithm for global optimization in many real problems. In this paper, the key control parameters of the DE algorithm, namely the crossover constant CR and the weight F applied to the random differential, are self-adapted. The PED problem formulation takes into consideration the nonsmooth fuel cost function due to valve-point effects and the multiple fuel options of generators. The proposed approach has been examined and tested on PED problems with thirteen generating units including valve-point effects, ten generating units with multiple fuel options neglecting valve-point effects, and ten generating units including both valve-point effects and multiple fuel options. The test results are promising and show the effectiveness of the proposed approach for solving PED problems.
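The nonsmooth cost model that motivates the evolutionary approach is the standard valve-point formulation: a quadratic fuel cost plus a rectified sine term. A minimal sketch follows, with illustrative coefficients rather than the paper's test-system data.

```python
import numpy as np

def fuel_cost(P, a, b, c, e, f, P_min):
    """F(P) = a + b*P + c*P^2 + |e * sin(f * (P_min - P))|  (valve-point model)."""
    return a + b * P + c * P**2 + np.abs(e * np.sin(f * (P_min - P)))

P = np.linspace(100, 500, 5)   # MW, illustrative operating range for one unit
print(fuel_cost(P, a=550.0, b=8.1, c=0.00028, e=300.0, f=0.035, P_min=100.0))
```

The rectified sine makes the cost nondifferentiable and multimodal, which is why gradient-based mathematical approaches struggle and population methods such as SDE are attractive.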

Keywords: Multiple fuels, power economic dispatch, self-adaptive differential evolution, valve-point effects.

3471 Analysis of Noodle Production Process at Yan Hu Food Manufacturing: Basis for Production Improvement

Authors: Rhadinia Tayag-Relanes, Felina C. Young

Abstract:

This study was conducted to analyze the noodle production process at Yan Hu Food Manufacturing as a basis for production improvement. The study utilized the Plan, Do, Check, Act (PDCA) approach and record review for gathering data for the calendar year 2019, specifically from August to October, focusing on the noodle products miki, canton, and misua. A causal-comparative research design was employed to establish cause-effect relationships among the variables, using descriptive statistics and correlation to analyze the data gathered. The findings indicate that miki, canton, and misua production have distinct cycle times and production outputs in each of their production processes, as well as varying levels of wastage. The company has not yet established a formal allowable rejection rate for wastage; instead, this paper used a 1% wastage limit. We recommend the following: the machines used in each process of noodle production must be consistently maintained and monitored; all production operators should be assessed by evaluating their performance statistically based on output and machine performance; a root cause analysis must be conducted to identify solutions to production issues; and an improved recording system for the input and output of the production process of each noodle product should be established to eliminate poor recording of data.

Keywords: Production, continuous improvement, process, operations, Plan, Do, Check, Act approach.

3470 Software Product Quality Evaluation Model with Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper presents a software product quality evaluation model based on the ISO/IEC 25010 quality model. The evaluation characteristics and sub-characteristics were identified from the ISO/IEC 25010 quality model. The multidimensional structure of the quality model comprises characteristics such as functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability, and their associated sub-characteristics. Random numbers are generated to establish the decision maker's importance weights for each sub-characteristic. Random numbers are also generated to establish the decision matrix of the decision maker's final scores for each software product against each sub-characteristic. Thus, objective criteria importance weights and index scores for the datasets were obtained from the random numbers. In the proposed model, five different software product quality evaluation datasets under three different weight vectors were applied to the multiple criteria decision analysis method preference analysis for reference ideal solution (PARIS), together with a comparison and a sensitivity analysis procedure. This study contributes to a better understanding of the application of MCDMA methods and the ISO/IEC 25010 quality model guidelines in the software product quality evaluation process.
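A minimal sketch of the data-generation step: random importance weights normalized to sum to one, and a random decision matrix of product scores. The final line uses a plain weighted sum as a stand-in for the PARIS ranking step, whose details are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
m, n = 5, 8                            # products x sub-characteristics (illustrative)

w = rng.random(n)
w /= w.sum()                           # decision maker's importance weights

D = rng.uniform(1, 10, size=(m, n))    # decision matrix of product scores

scores = (D / D.max(axis=0)) @ w       # normalized weighted-sum stand-in
print(np.argsort(-scores))             # product ranking, best first
```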

Keywords: ISO/IEC 25010 quality model, multiple criteria decision making, multiple criteria decision making analysis, MCDMA, PARIS, Software Product Quality Evaluation Model, Software Product Quality Evaluation, Software Evaluation, Software Selection, Software

3469 Flow Acoustics in Solid-Fluid Structures

Authors: Morten Willatzen, Mikhail Vladimirovich Deryabin

Abstract:

The governing two-dimensional equations of a heterogeneous material composed of a fluid (allowed to flow in the absence of acoustic excitations) and a crystalline piezoelectric cubic solid, stacked one-dimensionally (along the z direction), are derived, and special emphasis is given to the discussion of the acoustic group velocity of the structure as a function of the wavenumber component perpendicular to the stacking direction (the x axis). Variations in the physical parameters with y are neglected, assuming infinite material homogeneity along the y direction, and the flow velocity is assumed to be directed along the x direction. In the first part of the paper, the governing set of differential equations is derived, as well as the imposed boundary conditions, and emphasis is given to the small-frequency case. Solutions are provided using Hamilton's equations for the wavenumber vs. frequency as a function of the number and thickness of the solid and fluid layers, in cases with and without flow (the case of a position-dependent flow in the fluid layer is also considered). Boundary conditions at the bottom and top of the full structure are left unspecified in the general solution, but examples are provided for the case where these are subject to rigid-wall conditions (Neumann boundary conditions on the acoustic pressure). In the second part of the paper, emphasis is given to the general case of larger frequencies and wavenumber-frequency bandstructure formation. A wavenumber condition for an arbitrary set of consecutive solid and fluid layers, involving four propagating waves in each solid region, is obtained, again using the monodromy matrix method. Case examples are finally discussed.

Keywords: Flow, acoustics, solid-fluid structures, periodicity.

3468 Sleep Scheduling Schemes Based on Location of Mobile User in Sensor-Cloud

Authors: N. Mahendran, R. Priya

Abstract:

Mobile cloud computing (MCC) combined with wireless sensor network (WSN) technology is attracting increasing attention from research scholars because it combines the data-gathering ability of sensors with the data-processing capacity of the cloud. This approach overcomes the limitations in data storage capacity and computational ability of sensor nodes. The stored data are then sent to mobile users upon request. Most integrated sensor-cloud schemes fail to observe the following criteria: 1) mobile users request specific data from the cloud based on their present location; 2) power consumption matters, since most sensor nodes are equipped with non-rechargeable batteries and are mostly deployed in hazardous and remote areas. This paper focuses on these observations and introduces an approach known as the collaborative location-based sleep scheduling (CLSS) scheme. The awake or asleep status of each sensor node is dynamically decided by schedulers, and the scheduling is based purely on the mobile users' current location; in this manner, a large amount of energy consumption is avoided in the WSN. CLSS comprises two different methods: the CLSS1 scheme provides lower energy consumption, while CLSS2 provides scalability and robustness of the integrated WSN.

Keywords: Sleep scheduling, mobile cloud computing, wireless sensor network, integration, location, network lifetime.

3467 A Family Cars' Life Cycle Cost (LCC)-Oriented Hybrid Modelling Approach Combining ANN and CBR

Authors: Xiaochuan Chen, Jianguo Yang, Beizhi Li

Abstract:

Design for cost (DFC) is a method that reduces life cycle cost (LCC) from the designers' perspective. The multiple domain features mapping (MDFM) methodology is used in DFC: with MDFM, design features can be used to estimate the LCC. From the DFC perspective, the design features of family cars were obtained, such as overall dimensions, engine power and emission volume. At the conceptual design stage, the cars' LCC was estimated using the back propagation (BP) artificial neural network (ANN) method and case-based reasoning (CBR). Hamming space was used to measure the similarity among cases in the CBR method, while the Levenberg-Marquardt (LM) algorithm and a genetic algorithm (GA) were used in the ANN. The differences between the CBR and ANN LCC estimation models are discussed. Used separately, each method has its shortcomings; by combining ANN and CBR, improved accuracy was obtained. First, the ANN is used to select the design features that affect LCC. Second, the LCC estimation results of the ANN are used to raise the accuracy of the LCC estimation in the CBR method. Third, the ANN is used to estimate LCC errors and correct the errors in the CBR estimation results when their accuracy is insufficient. Finally, economy family cars and a sport utility vehicle (SUV) were given as LCC estimation cases using this hybrid approach combining ANN and CBR.
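A minimal sketch of case retrieval in Hamming space: design features are binarized, and the similarity between two cases is the fraction of matching bits. The feature encoding and LCC values are illustrative placeholders, not the paper's data.

```python
import numpy as np

def hamming_similarity(a, b):
    """Fraction of matching bits between two binary feature vectors."""
    return float(np.mean(np.asarray(a) == np.asarray(b)))

# Case base: binarized design features (e.g., thresholded dimensions,
# engine power, emission volume) with known LCC values.
case_base = {
    "car_A": (np.array([1, 0, 1, 1, 0, 1]), 182_000.0),
    "car_B": (np.array([0, 0, 1, 0, 1, 1]), 154_500.0),
}
query = np.array([1, 0, 1, 0, 0, 1])

best = max(case_base, key=lambda k: hamming_similarity(query, case_base[k][0]))
print(best, case_base[best][1])   # nearest case and its LCC as the estimate
```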

Keywords: case-based reasoning, life cycle cost (LCC), artificial neural networks (ANN), family cars

3466 Continuous Plug Flow and Discrete Particle Phase Coupling Using Triangular Parcels

Authors: Anders Schou Simonsen, Thomas Condra, Kim Sørensen

Abstract:

Various processes are modelled using a discrete phase, where particles are seeded from a source. Such particles can represent liquid water droplets, which affect the continuous phase by exchanging thermal energy, momentum, species, etc. Discrete phases are typically modelled using parcels, each of which represents a collection of particles sharing properties such as temperature, velocity, etc. When coupling the phases, the exchange rates are integrated over the cell in which the parcel is located, which can cause spikes and fluctuating exchange rates. This paper presents an alternative method of coupling a discrete phase and a continuous plug flow phase, using triangular parcels which span between nodes that follow the dynamics of single droplets. The triangular parcels are thus propagated using their corner nodes. At each time step, the exchange rates are spatially integrated over the surface of the triangular parcels, which yields a smooth, continuous exchange rate to the continuous phase. The results show that the method is more stable, converges slightly faster and yields smoother exchange rates compared with the steam tube approach. However, the computational requirements are about five times greater, so the applicability of the alternative method should be limited to processes where the exchange rates are important. The overall balances of the exchanged properties did not change significantly with the new approach.

Keywords: CFD, coupling, discrete phase, parcel.

3465 On Combining Support Vector Machines and Fuzzy K-Means in Vision-based Precision Agriculture

Authors: A. Tellaeche, X. P. Burgos-Artizzu, G. Pajares, A. Ribeiro

Abstract:

One important objective in Precision Agriculture is to minimize the volume of herbicides applied to fields through the use of site-specific weed management systems. In order to reach this goal, two major factors need to be considered: 1) the similar spectral signature, shape and texture of weeds and crops; 2) the irregular distribution of the weeds within the crop field. This paper outlines an automatic computer vision system for the detection and differential spraying of Avena sterilis, a noxious weed growing in cereal crops. The proposed system involves two processes: image segmentation and decision making. Image segmentation combines suitable basic image processing techniques in order to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relations between the crops and the weeds. From these attributes, a hybrid decision making approach determines whether or not a cell must be sprayed. The hybrid approach uses the Support Vector Machines and Fuzzy k-Means methods, combined through fuzzy aggregation theory; this constitutes the main contribution of this paper. The method's performance is compared against other available strategies.
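A minimal sketch of the fusion step: each cell gets a weed-likelihood score from the SVM and a membership from fuzzy k-means, and the two are merged by a fuzzy aggregation operator before thresholding. The geometric-mean operator and threshold below are illustrative choices; the abstract does not specify the exact aggregation used.

```python
import numpy as np

def spray_decision(svm_score, fkm_membership, threshold=0.5):
    """svm_score, fkm_membership: weed likelihood in [0, 1] for one cell."""
    aggregated = np.sqrt(svm_score * fkm_membership)  # geometric-mean t-norm
    return bool(aggregated > threshold)

print(spray_decision(0.9, 0.7))   # True  -> spray this cell
print(spray_decision(0.4, 0.6))   # False -> skip this cell
```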

Keywords: Fuzzy k-Means, Precision agriculture, Support Vector Machines, Weed detection.

3464 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products

Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad

Abstract:

The proposed method for the speciation, preconcentration and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on the complexation of Fe(II) with 1,10-phenanthroline (OP), a complexing reagent for Fe(II), immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. The adsorbents were then easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a proper reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplifies the operating procedure and reduces the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 was obtained for Fe(II). The detection limit and linear range of the method for iron were 1.0 ng mL−1 and 9.0-175 ng mL−1, respectively. The relative standard deviation for five replicate determinations of 30.00 ng mL−1 Fe2+ was 2.3%.

Keywords: Alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample.
