Search results for: multi-objective linear programming
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4115

2555 Critical Parameters of a Square-Well Fluid

Authors: Hamza Javar Magnier, Leslie V. Woodcock

Abstract:

We report extensive molecular dynamics (MD) computational investigations into the thermodynamic description of supercritical properties for a model fluid that is the simplest realistic representation of atoms or molecules. The pair potential is a hard-sphere repulsion of diameter σ with a very short attraction of length λσ. When λ = 1.005 the range is so short that the model atoms are referred to as “adhesive spheres”. Molecular dimers, trimers, etc., up to large clusters, or droplets, of many adhesive-sphere atoms are unambiguously defined. This then defines percolation transitions at the molecular level that bound the existence of gas and liquid phases at supercritical temperatures, and which define the existence of a supercritical mesophase. Both liquid and gas phases are seen to terminate at the loci of percolation transitions, and below a second characteristic temperature (Tc2) are separated by the supercritical mesophase. An analysis of the distribution of clusters in gas, meso- and liquid phases confirms the colloidal nature of this mesophase. The general phase behaviour is compared with both experimental properties of the water-steam supercritical region and also with the formally exact cluster theory of Mayer and Mayer. Both are found to be consistent with the present findings that in this system the supercritical mesophase narrows in density with increasing T > Tc and terminates at a higher Tc2 at a confluence of the primary percolation loci. The expanded plot of the MD data points in the mesophase of 7 critical and supercritical isotherms highlights this narrowing in density of the linear-slope region of the mesophase as temperature is increased above the critical. This linearity in the mesophase implies the existence of a linear combination rule between gas and liquid, which is an extension of the lever rule in the subcritical region and can be used to obtain critical parameters without resorting to experimental data in the two-phase region. Using this combination rule, the calculated critical parameters Tc = 0.2007 and Pc = 0.0278 are found to agree with the values reported by Largo and coworkers. The properties of this supercritical mesophase are shown to be consistent with an alternative description of the phenomenon of critical opalescence seen in the supercritical region of both molecular and colloidal-protein supercritical fluids.
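
As a hedged illustration in generic notation (not the authors' own), the linear combination rule described above takes the same form as the subcritical lever rule, with a mesophase state point of density ρ at temperature T expressed through the bounding gas and liquid percolation states:

```latex
% Lever-rule-type combination rule described in the abstract (illustrative notation).
% \rho_g(T), \rho_l(T): densities at the gas and liquid percolation loci;
% p_g(T), p_l(T): pressures at those bounds.
x_g = \frac{\rho_l(T) - \rho}{\rho_l(T) - \rho_g(T)}, \qquad
p(\rho, T) = x_g\, p_g(T) + \bigl(1 - x_g\bigr)\, p_l(T)
```

so that, as stated above, the critical parameters follow from the confluence of the two percolation loci without using data from the two-phase region.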

Keywords: critical opalescence, supercritical, square-well, percolation transition, critical parameters.

Procedia PDF Downloads 521
2554 Microarray Data Visualization and Preprocessing Using R and Bioconductor

Authors: Ruchi Yadav, Shivani Pandey, Prachi Srivastava

Abstract:

Microarrays provide a rich source of data on the molecular working of cells. Each microarray reports on the abundance of tens of thousands of mRNAs. Virtually every human disease is being studied using microarrays with the hope of finding the molecular mechanisms of disease. Bioinformatics analysis plays an important part in processing the information embedded in large-scale expression profiling studies and in laying the foundation for biological interpretation. A basic, yet challenging task in the analysis of microarray gene expression data is the identification of changes in gene expression that are associated with particular biological conditions. Careful statistical design and analysis are essential to improve the efficiency and reliability of microarray experiments throughout the data acquisition and analysis process. One of the most popular platforms for microarray analysis is Bioconductor, an open source and open development software project based on the R programming language. This paper describes specific procedures for conducting quality assessment, visualization, and preprocessing of Affymetrix GeneChip data, details the different Bioconductor packages used to analyze Affymetrix microarray data, and describes the analysis and outcome of each plot.

Keywords: microarray analysis, R language, affymetrix visualization, bioconductor

Procedia PDF Downloads 480
2553 The Effect of Leadership Styles on Employees’ Organizational Commitment at Ambo Woreda Public Organizations, Oromia Regional State, Ethiopia

Authors: Mengistu Tulu Balcha, Endale Gadisa Motuma

Abstract:

The purpose of this study was to assess the effect of leadership styles on employees’ organizational commitment in Ambo Woreda public organizations. The study employed a descriptive survey and correlational research design within a quantitative approach. Using simple random sampling, 80 employees, and using purposive sampling, 32 leaders were drawn from five purposely selected Woreda public organizations, with no non-response. Two separate instruments adopted from previous studies, namely the Multifactor Leadership Questionnaire (MLQ), which has 36 items, and the Organizational Commitment Questionnaire (OCQ), which has 12 items, were used as data collection instruments. The items were rated on a five-point Likert scale. The survey data were processed using SPSS (version 27). Descriptive statistics (means and standard deviations) were used to determine the leadership styles dominantly practiced as perceived by leaders and employees, independent-samples comparisons were made between leaders’ and employees’ MLQ responses, and multiple linear regression was used to estimate the effect of leadership styles on organizational commitment. The findings show that the leadership style dominantly practiced in Ambo Woreda public organizations was more transactional than transformational, followed by laissez-faire. Among the dimensions of employees’ organizational commitment, continuance commitment had the highest mean score, followed by normative commitment and then affective commitment. There is a strong, positive, and significant relationship between the leadership style dimensions and employees’ organizational commitment. Leadership styles were statistically significant predictors of employee commitment, and there was a significant linear relationship between the independent and dependent variables. Of the three leadership variables, the transactional leadership style made the highest contribution, followed by the transformational leadership style, whereas the laissez-faire leadership style made the least contribution in predicting employees’ organizational commitment. Finally, the researcher forwarded possible recommendations for Ambo Woreda public organizational leaders and employees to work collaboratively on improving leadership styles and employees’ commitment.
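
A minimal sketch of the multiple linear regression step described above, assuming one overall commitment score and three MLQ style scores per respondent (variable names and the data below are illustrative placeholders, not the study's survey responses):

```python
# Hedged sketch: regress an overall commitment score on the three leadership-style
# scores, as in the multiple linear regression step described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 112                                             # e.g. 80 employees + 32 leaders
df = pd.DataFrame({
    "transformational": rng.uniform(1, 5, n),       # MLQ style scores (1-5 Likert)
    "transactional":    rng.uniform(1, 5, n),
    "laissez_faire":    rng.uniform(1, 5, n),
})
df["commitment"] = (0.2 * df.transformational + 0.5 * df.transactional
                    - 0.1 * df.laissez_faire + rng.normal(0, 0.5, n))

X = sm.add_constant(df[["transformational", "transactional", "laissez_faire"]])
model = sm.OLS(df["commitment"], X).fit()
print(model.summary())        # coefficients, p-values, R-squared per predictor
```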

Keywords: organizations, employee, relations, commitments, style

Procedia PDF Downloads 26
2552 Development of Basic Patternmaking Using Parametric Modelling and AutoLISP

Authors: Haziyah Hussin, Syazwan Abdul Samad, Rosnani Jusoh

Abstract:

This study is aimed at the automation of basic patternmaking for traditional clothes for the purpose of mass production, using AutoCAD with its AutoLISP facility through the Hazi Attire software. Standard dress forms (industrial forms) in small (S), medium (M), and large (L) sizes are measured using a full-body scanning machine. The patterns for the clothes are then designed parametrically based on the measured dress forms. The Hazi Attire program is used within the framework of AutoCAD to generate the basic pattern blocks (slopers) of the front bodice, back bodice, front skirt, back skirt, and sleeve. Pattern generation is driven by parameters input by the user, which in this study were determined from the measured sizes of the dress forms. The finalized pattern parameters show that the patterns fit the dress forms well. Since a pattern is generated almost instantly, this demonstrates that, using AutoLISP programming, the manufacturing lead time for the mass production of traditional clothes can be decreased.

Keywords: apparel, AutoLISP, Malay traditional clothes, pattern generation

Procedia PDF Downloads 256
2551 Lung Function, Urinary Heavy Metals and Its Other Influencing Factors among a Community in Klang Valley

Authors: Ammar Amsyar Abdul Haddi, Mohd Hasni Jaafar

Abstract:

Heavy metals are elements naturally present in the environment that can cause adverse health effects. However, little literature was found on their effects on lung function, whose impairment may lead to various lung diseases. The objective of the study is to explore lung function impairment, urinary heavy metal levels, and their associated factors among the community in Klang Valley, Malaysia. Sampling was done in suburban public and housing areas of Kuala Lumpur during community events from March 2019 to October 2019. Respondents who gave consent answered a questionnaire and then underwent a lung function test. Urine samples were obtained at the end of the session and sent for inductively coupled plasma mass spectrometry (ICP-MS) analysis of heavy metal cadmium (Cd) and lead (Pb) concentrations. A total of 200 samples were analysed; 52% of respondents were male, with ages ranging from 18 to 74 years and a mean age of 38.44 years. Urine samples showed that 12% of respondents (n=22) had Cd levels above average, and 1.5% of respondents (n=3) had urinary Pb above the normal level. Bivariate analysis showed a positive correlation between urinary Cd and urinary Pb (r=0.309; p<0.001). Furthermore, there was a negative correlation between urinary Cd level and forced vital capacity (FVC) (r=-0.202, p=0.004), forced expiratory volume in 1 second (FEV1) (r=-0.225, p=0.001), and forced expiratory flow between 25% and 75% of FVC (FEF25-75%) (r=-0.187, p=0.008). However, urinary Pb did not show any association with FVC, FEV1, FEV1/FVC, or FEF25-75%. Multiple linear regression analysis showed that urinary Cd remained significant and negatively affected FVC% (p=0.025) and FEV1% (p=0.004) of predicted values. In addition, other factors such as education level (p=0.013) and duration of smoking (p=0.003) may influence both urinary Cd and lung function performance, suggesting Cd as a potential mediating factor between smoking and lung function impairment. However, no interaction was detected between the heavy metals or other influencing factors in this study. In short, a negative linear relationship was detected between urinary Cd and lung function, and urinary Cd is likely to affect lung function in a restrictive pattern. Since smoking is also an influencing factor for urinary Cd and lung function impairment, it is strongly suggested that smokers be screened for lung function and urinary Cd levels in the future for early disease prevention.

Keywords: lung function, heavy metals, community

Procedia PDF Downloads 156
2550 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning

Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

This study proposes a method for estimating the stress distribution of beam structures based on TLS (terrestrial laser scanning). The main components of the method are the creation of lattices from the raw TLS data that satisfy a suitable condition and the application of CSSI (cubic smoothing spline interpolation) for estimating the stress distribution. Estimation of the stress distribution of a structural member or of the whole structure is one of the important factors in the safety evaluation of a structure. Existing sensors, including ESGs (electric strain gauges) and LVDTs (linear variable differential transformers), are contact-type sensors that must be installed on the structural members, and they have various limitations such as the need for separate space where network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS form of LiDAR (light detection and ranging), which can measure the displacement of a target at long range without the influence of the surrounding environment and can also capture the whole shape of the structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds containing many points with local coordinates. Point clouds are not linearly distributed but dispersed, so interpolation is essential for their analysis. Through the formation of averaged lattices and CSSI applied to the raw data, a method for estimating the displacement of a simple beam was developed. The method can be extended to calculate the strain and is finally applicable to estimating the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
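
A minimal sketch of the CSSI step described above, under the assumption that averaged TLS lattices give noisy deflections along a simple beam (all values and beam properties below are synthetic, illustrative only):

```python
# Hedged sketch: fit a cubic smoothing spline to averaged TLS deflection lattices
# along a simple beam, then estimate bending stress from the curvature
# (second derivative), following small-deflection beam theory.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 6.0, 61)                         # position along the beam [m]
w_true = -1e-3 * np.sin(np.pi * x / 6.0)              # idealised deflection [m]
w_meas = w_true + rng.normal(0.0, 2e-5, x.size)       # TLS-like measurement noise

# Cubic smoothing spline interpolation (CSSI) of the noisy deflection lattice.
spline = UnivariateSpline(x, w_meas, k=3, s=x.size * (2e-5) ** 2)

# Strain/stress from beam theory: sigma = -E * c * w''(x).
E, c = 200e9, 0.15                                    # Young's modulus [Pa], half-depth [m]
curvature = spline.derivative(n=2)(x)
stress = -E * c * curvature                           # estimated bending stress [Pa]
print(stress.max() / 1e6, "MPa (peak estimated bending stress)")
```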

Keywords: structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation

Procedia PDF Downloads 433
2549 Evaluation of the Adsorption Adaptability of Activated Carbon Using Dispersion Force

Authors: Masao Fujisawa, Hirohito Ikeda, Tomonori Ohata, Miho Yukawa, Hatsumi Aki, Takayoshi Kimura

Abstract:

We attempted to predict adsorption coefficients by utilizing dispersion energies. We performed liquid-phase free energy calculations based on gas-phase geometries of organic compounds using DFT and studied the relationship between the adsorption of organic compounds by activated carbon and the dispersion energies of the organic compounds. A linear correlation between adsorption coefficients and dispersion energies was observed.

Keywords: activated carbon, adsorption, prediction, dispersion energy

Procedia PDF Downloads 233
2548 Protection of Patients and Staff in External Beam Radiotherapy Using Linac in Kenya

Authors: Calvince Okome Odeny

Abstract:

There is an ongoing effort to expand radiotherapy services in Kenya. The national government of Kenya, in collaboration with the county governments, has embarked on building radiotherapy centers in all 47 counties of the country. As these new centers are established, it has to be ensured that minimum radiation safety standards are in place prior to operation. For full implementation, it is imperative that more research and training on radiation protection and safety be provided for regulators, and that the national regulatory infrastructure be geared towards ensuring radiation protection and safety in all aspects of the use of external radiotherapy. The present work aims at reviewing the level of protection and safety for patients and staff during external beam radiotherapy using a linac in Kenya and provides relevant guidance to improve protection and safety. A retrospective evaluation was done to verify whether occupationally exposed workers and patients are adequately protected from the harmful effects of radiation exposure during treatment procedures using a linac. The project was experimental research, also including an analysis of resource documents obtained from the literature and from international organizations. The critical findings of the work revealed the key elements of protection of occupationally exposed workers and patients: a comprehensive quality management system governing all planned activities from siting, safety assessment, and design of the facility through construction, acceptance testing, commissioning, operation, and decommissioning; government empowerment of the regulatory authority to license medical linear accelerator facilities and to enforce the applicable regulations to ensure adequate protection; a comprehensive radiation protection and safety program established to ensure adequate safety and protection of workers and patients during treatment planning and treatment delivery; and education and training of all categories of staff associated with the facility so that they perform professionally with a commitment to a sound safety culture. Relevant recommendations from the findings are shared with the medical linear accelerator facilities and the regulatory authority to guide continuous improvement of protection, safety, and regulatory oversight.

Keywords: oncology, radiotherapy, protection, staff

Procedia PDF Downloads 75
2547 Modelling Asymmetric Magnetic Recording Heads with an Underlayer Using Superposition

Authors: Ammar Edress Mohamed, Mustafa Aziz, David Wright

Abstract:

This paper analyses and calculates the head fields of asymmetrical 2D magnetic recording heads when a soft underlayer is present, using the appropriate Green's function to derive the surface potential/field from the surface potential of the asymmetrical head without an underlayer. Compared with the fields calculated by the finite-element method, the results follow closely near the corners, while the gap region shows a linear behaviour for d/g < 0.5.

Keywords: magnetic recording, finite elements, asymmetrical magnetic heads, superposition, Laplace's equation

Procedia PDF Downloads 392
2546 Age Estimation from Upper Anterior Teeth by Pulp/Tooth Ratio Using Peri-Apical X-Rays among Egyptians

Authors: Fatma Mohamed Magdy Badr El Dine, Amr Mohamed Abd Allah

Abstract:

Introduction: Age estimation of individuals is one of the crucial steps in forensic practice. Different traditional methods rely on the length of the diaphysis of the long bones of the limbs, epiphyseal-diaphyseal union, fusion of the primary ossification centers, as well as dental eruption. However, there is a growing need for the development of precise and reliable methods to estimate age, especially in cases where dismembered corpses, burnt bodies, or putrefied or fragmented parts are recovered. Teeth are the hardest and most indestructible structures in the human body. In recent years, assessment of the pulp/tooth area ratio, as an indirect quantification of secondary dentine deposition, has received considerable attention. However, little work has been done in Egypt on the applicability of the pulp/tooth ratio for age estimation. Aim of the Work: The present work was designed to assess Cameriere’s method for age estimation from the pulp/tooth ratio of maxillary canines, central and lateral incisors in a sample from the Egyptian population, and to formulate regression equations to be used as population-based standards for age determination. Material and Methods: The present study was conducted on 270 peri-apical X-rays of maxillary canines, central and lateral incisors (collected from 131 males and 139 females aged between 19 and 52 years). The pulp and tooth areas were measured using the Adobe Photoshop software program, and the pulp/tooth area ratio was computed. Linear regression equations were determined separately for canines, central and lateral incisors. Results: A significant correlation was recorded between the pulp/tooth area ratio and the chronological age. The linear regression analysis revealed coefficients of determination of R² = 0.824 for canine, 0.588 for central incisor, and 0.737 for lateral incisor teeth. Three regression equations were derived. Conclusion: In conclusion, the pulp/tooth ratio is a useful technique for estimating age among Egyptians. Additionally, the regression equation derived from canines gave better results than those from the incisors.

Keywords: age determination, canines, central incisors, Egypt, lateral incisors, pulp/tooth ratio

Procedia PDF Downloads 184
2545 Coupling Time-Domain Analysis for Dynamic Positioning during S-Lay Installation

Authors: Sun Li-Ping, Zhu Jian-Xun, Liu Sheng-Nan

Abstract:

In order to study the performance of the dynamic positioning system during S-lay operations, the dynamic positioning system is simulated with the hull-stinger-pipe coupling effect. The stinger rollers are simulated by generalized elastic contact theory, the stinger is composed of Morison members, and the force on the pipe is calculated by the lumped-mass method. The fully coupled barge model is analyzed in the time domain in combination with a PID controller, a Kalman filter, and thrust allocation using the sequential quadratic programming (SQP) method. The effect of hull wave-frequency motion on the pipe-stinger coupling force and on the dynamic positioning system is also analyzed, as is how S-lay operations affect the dynamic positioning accuracy. The simulation results are validated by checking the pipe stress against the API criterion. The effect of heave and yaw motion on the hull-stinger-pipe coupling force and the dynamic positioning system cannot be ignored. It is important to decrease the barge's pitch motion and to lay pipe in head seas in order to improve the safety of the S-lay installation and dynamic positioning.
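
A minimal sketch of the thrust-allocation step mentioned above, assuming a simplified planar layout of four thrusters; positions, limits, and the commanded force/moment are illustrative, not taken from the paper:

```python
# Hedged sketch: allocate thruster forces so their resultant matches a commanded
# surge/sway force and yaw moment, minimising total effort, solved with a
# sequential quadratic programming (SLSQP) routine.
import numpy as np
from scipy.optimize import minimize

pos = np.array([[40., 10.], [40., -10.], [-40., 10.], [-40., -10.]])  # thruster x, y [m]
tau_cmd = np.array([5.0e5, 2.0e5, 1.0e6])      # demanded Fx [N], Fy [N], Mz [N*m]

def resultant(u):
    fx, fy = u[0::2], u[1::2]                  # per-thruster force components
    mz = pos[:, 0] * fy - pos[:, 1] * fx       # yaw moment about midships
    return np.array([fx.sum(), fy.sum(), mz.sum()])

objective = lambda u: np.sum(u ** 2)           # minimise control effort
constraints = {"type": "eq", "fun": lambda u: resultant(u) - tau_cmd}
bounds = [(-3.0e5, 3.0e5)] * 8                 # force limit per component [N]

sol = minimize(objective, np.zeros(8), method="SLSQP",
               bounds=bounds, constraints=constraints)
print(sol.x.reshape(4, 2))                     # allocated (Fx, Fy) per thruster
```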

Keywords: S-lay operation, dynamic positioning, coupling motion, time domain, allocation of thrust

Procedia PDF Downloads 465
2544 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors. These sensors are optical scanners equipped with linear arrays of CCDs and have been deployed on most Earth observation satellites. In addition, the LROC is equipped with two push-broom NACs that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired by linear-array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy systems with the developed model. We start by defining an image reference coordinate system to unify the image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t), where t is the time of a certain epoch at a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in different situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to those of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process is much more cost- and effort-consuming.
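
As an illustration of the collinearity condition referred to above (standard push-broom form, generic symbols rather than the authors' notation), each point on the image line acquired at epoch t satisfies

```latex
% Standard push-broom collinearity equations (generic notation, for illustration).
% (x, y): image reference coordinates on the line imaged at epoch t, f: focal length,
% (X, Y, Z): ground point, (X_S(t), Y_S(t), Z_S(t)): exposure centre at epoch t,
% r_{ij}(t): elements of the rotation matrix built from the epoch-t attitude angles.
x = -f\,
\frac{r_{11}(t)(X - X_S(t)) + r_{12}(t)(Y - Y_S(t)) + r_{13}(t)(Z - Z_S(t))}
     {r_{31}(t)(X - X_S(t)) + r_{32}(t)(Y - Y_S(t)) + r_{33}(t)(Z - Z_S(t))},
\qquad
y = -f\,
\frac{r_{21}(t)(X - X_S(t)) + r_{22}(t)(Y - Y_S(t)) + r_{23}(t)(Z - Z_S(t))}
     {r_{31}(t)(X - X_S(t)) + r_{32}(t)(Y - Y_S(t)) + r_{33}(t)(Z - Z_S(t))}
```

with the exposure-station coordinates and the attitude angles behind the rotation elements r_ij(t) modeled as low-order polynomials in t, whose coefficients are refined in the bundle adjustment.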

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 63
2543 Failure Mechanism of Slip-Critical Connections on Curved Surface

Authors: Bae Doobyong, Yoo Jaejun, Park Ilgyu, Choi Seowon, Oh Chang Kook

Abstract:

This paper presents the results of analytical investigations of the variation of the slip coefficient in slip-critical bolted connections of curved plates. The slip coefficient may depend on the contact stress distribution at the interface and on the flexibility of the spliced plate. Non-linear FEM analyses have been made to simulate the behavior of bolted connections of curved plates with various radii of curvature and thicknesses.

Keywords: slip coefficient, curved plates, slip-critical bolted connection, radius of curvature

Procedia PDF Downloads 517
2542 A Study of Anthropometric Correlation between Upper and Lower Limb Dimensions in Sudanese Population

Authors: Altayeb Abdalla Ahmed

Abstract:

Skeletal phenotype is a product of a balanced interaction between genetics and environmental factors throughout different life stages. Therefore, interlimb proportions are variable between populations. Although interlimb proportion indices have been used in anthropology to assess the influence of various environmental factors on the limbs, an extensive literature review revealed a paucity of published research assessing correlations between limb parts and the possibility of reconstruction. Hence, this study aims to assess the relationships between upper and lower limb parts and to develop regression formulae to reconstruct the parts from one another. The left upper arm length, ulnar length, wrist breadth, hand length, hand breadth, tibial length, bimalleolar breadth, foot length, and foot breadth of 376 right-handed subjects, comprising 187 males and 189 females (aged 25-35 years), were measured. Initially, the data were analyzed using basic univariate analysis and independent t-tests; then sex-specific simple and multiple linear regression models were used to estimate upper limb parts from lower limb parts and vice-versa. The results of this study indicated significant sexual dimorphism for all variables. The results also indicated a significant correlation between the upper and lower limb parts (p < 0.01). Linear and multiple (stepwise) regression equations were developed to reconstruct the limb parts in the presence of a single dimension or multiple dimensions from the other limb. Multiple stepwise regression equations generated better reconstructions than simple equations. These results are significant in forensics, as they can aid in the identification of multiple isolated limb parts, particularly during mass disasters and criminal dismemberment. Although DNA analysis is the most reliable tool for identification, its usage has multiple limitations in undeveloped countries, e.g., cost, facility availability, and trained personnel. Furthermore, the findings have important implications in plastic and orthopedic reconstructive surgeries. This is the only reported study assessing the correlation and prediction capabilities between many of the upper and lower limb dimensions. The present study demonstrates a significant correlation between the interlimb parts in both sexes, which indicates the possibility of reconstruction using regression equations.

Keywords: anthropometry, correlation, limb, Sudanese

Procedia PDF Downloads 295
2541 Mathematical and Numerical Analysis of a Nonlinear Cross Diffusion System

Authors: Hassan Al Salman

Abstract:

We consider a nonlinear parabolic cross diffusion model arising in applied mathematics. A fully practical piecewise linear finite element approximation of the model is studied. By using entropy-type inequalities and compactness arguments, the existence of a global weak solution is proved. Assuming further regularity of the solution of the model, some uniqueness results and error estimates are established. Finally, some numerical experiments are performed.

Keywords: cross diffusion model, entropy-type inequality, finite element approximation, numerical analysis

Procedia PDF Downloads 383
2540 Sharing Experience in Authentic Learning for Mobile Security

Authors: Kai Qian, Lixin Tao

Abstract:

Mobile devices such as smartphones are getting more and more popular in our daily lives. Security vulnerabilities and threat attacks have become an emerging and important research and education topic in the computing security discipline. There is a need for an innovative mobile security hands-on laboratory that provides students with real-world relevant experience in mobile threat analysis and protection. This paper presents an authentic teaching and learning approach to mobile security with smartphone devices which covers the most important mobile threats across most aspects of mobile security. Each lab focuses on one type of mobile threat, such as the mobile messaging threat, and conveys the threat analysis and protection in multiple ways, including lectures and tutorials, multimedia or app-based demonstrations for threat analysis, and mobile app development for threat protection. This authentic learning approach is affordable and easily adoptable, and it immerses students in a real-world relevant learning environment with real devices. The approach can also be applied to many other mobile-related courses, such as mobile Java programming, database, and network courses, and any security-relevant course, so that students can learn concepts and principles better with the hands-on authentic learning experience.

Keywords: mobile computing, Android, network, security, labware

Procedia PDF Downloads 406
2539 Presenting Internals of Networks Using Bare Machine Technology

Authors: Joel Weymouth, Ramesh K. Karne, Alexander L. Wijesinha

Abstract:

Bare Machine Internet is part of the Bare Machine Computing (BMC) paradigm. It is used to program applications that run directly on a device: software that runs directly against the hardware using the CPU, memory, and I/O, without an operating system and without resident mass storage. As an important part of the BMC paradigm, the Bare Machine Internet utilizes an application development model in which software interfaces directly with the hardware of a network server and file server. Because it is “bare,” it is a powerful teaching and research tool that can readily display the internals of the network protocols, software, and hardware of the applications running on the bare server. It was also demonstrated that the bare server was accessible from a laptop and from an Android smartphone. The purpose was to show the further practicality of the Bare Internet in computer engineering and computer science education and research, and to show that an undergraduate student can take advantage of a bare server with any device and any browser at any release version connected to the internet. This paper presents the Bare Web Server as an educational tool. We will discuss possible applications of this paradigm.

Keywords: bare machine computing, online research, network technology, visualizing network internals

Procedia PDF Downloads 172
2538 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join

Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel

Abstract:

MapReduce is a programming model used to handle and process massive data sets. Rapidly increasing data sizes and big data make the analysis of such data one of the most important issues today. MapReduce is used to analyze data and extract more helpful information by means of two simple functions, map and reduce, which alone are written by the programmer, and it provides load balancing, fault tolerance, and high scalability. One of the most important operations in data analysis is the join, but MapReduce does not support joins directly. This paper explains two two-way map-reduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, that uses a hash table to increase performance by eliminating unused records as early as possible and by applying the join through a hash table rather than using the map function to match the join key with the other data table in the second phase. Using hash tables does not significantly affect memory size because only the matched records from the second table are saved. Our experimental results show that the hash semi-join algorithm has higher performance than the other two algorithms as the data size increases from 10 million records to 500 million, with running time increasing according to the number of joined records between the two tables.
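
A minimal sketch of the hash semi-join idea described above, written as plain Python rather than a Hadoop job; table layouts and key names are illustrative:

```python
# Hedged sketch of a hash semi-join: build a hash table of join keys from the
# smaller table, filter the larger table as early as possible, then join only
# the matching records via the hash table.
def hash_semi_join(small_table, big_table, key):
    # Phase 1: hash the join keys (and rows) of the smaller table.
    hash_table = {}
    for row in small_table:
        hash_table.setdefault(row[key], []).append(row)

    # Phase 2: stream the larger table, dropping unmatched records early,
    # and join matches through the hash table (no map-side key matching).
    for row in big_table:
        for match in hash_table.get(row[key], ()):   # skip unused records
            yield {**match, **row}

# Illustrative usage with tiny in-memory "tables".
customers = [{"cid": 1, "name": "Ada"}, {"cid": 2, "name": "Bob"}]
orders = [{"cid": 1, "total": 30}, {"cid": 3, "total": 12}, {"cid": 1, "total": 7}]
print(list(hash_semi_join(customers, orders, "cid")))
```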

Keywords: map reduce, hadoop, semi join, two way join

Procedia PDF Downloads 513
2537 Number of Parameters of Anantharam's Model with Single-Input Single-Output Case

Authors: Kazuyoshi Mori

Abstract:

In this paper, we consider the parametrization of Anantharam's model within the framework of the factorization approach. In the parametrization, we investigate the number of parameters required by Anantharam's model, restricting attention to single-input single-output systems. The investigation identifies three cases: (1) there exist plants which require only one parameter, (2) there exist plants which require two parameters, and (3) the number of parameters is at most three.

Keywords: linear systems, parametrization, coprime factorization, number of parameters

Procedia PDF Downloads 214
2536 Heinz-Type Inequalities in Hilbert Spaces

Authors: Jin Liang, Guanghua Shi

Abstract:

In this paper, we are concerned with the further refinements of the Heinz operator inequalities in Hilbert spaces. Our purpose is to derive several new Heinz-type operator inequalities. First, with the help of the Taylor series of some hyperbolic functions, we obtain some refinements of the ordering relations among Heinz means defined by Bhatia with different parameters, which would be more suitable in obtaining the corresponding operator inequalities. Second, we present some generalizations of Heinz operator inequalities. Finally, we give a matrix version of the Heinz inequality for the Hilbert-Schmidt norm.
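
For reference, the standard Heinz mean and the classical Heinz inequality that the refinements above sharpen can be written as follows (scalar form shown; the operator and Hilbert-Schmidt norm versions are analogous):

```latex
% Standard Heinz mean with parameter \nu and the classical ordering it satisfies.
H_{\nu}(a,b) = \frac{a^{\nu} b^{1-\nu} + a^{1-\nu} b^{\nu}}{2},
\qquad 0 \le \nu \le 1,
\qquad
\sqrt{ab} \;\le\; H_{\nu}(a,b) \;\le\; \frac{a+b}{2}.
```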

Keywords: Hilbert space, means inequality, norm inequality, positive linear operator

Procedia PDF Downloads 271
2535 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland

Authors: Raptis Sotirios

Abstract:

Health and social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure, and they also suffer from unplanned spending that is negatively impacted by the global financial crisis. Data-driven methods can help to improve policies and to plan and design service provision schedules, using algorithms to assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses packing services together using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of a demand (its class) using ML methods such as CART, random forests (RF), and logistic regression (LGR). The Chi-squared and Student's t significance tests are used on data over a 39-year span for which HSc services data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that take, as the null hypothesis, that the target service's demands are statistically dependent on other demands; this linkage can be confirmed or not by the data. Complementarily, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. Statistical tests confirm the ML couplings, making the prediction also statistically meaningful and proving that a target service can be matched reliably to other services, and ML shows that these indicated relationships can also be linear ones. Zero padding was used for missing year records and better illustrated such relationships both for limited years and over the entire span, offering long-term data visualizations, while limited-year groups explained how well patient numbers can be related over short periods or can change over time, as opposed to behaviors across more years. The prediction performance of the associations is measured using Receiver Operating Characteristic (ROC) AUC and ACC metrics, as well as the Chi-squared and Student's t statistical tests. Co-plots and comparison tables for RF, CART, and LGR, as well as p-values and Information Exchange (IE), are provided, showing the specific behavior of the ML methods and of the statistical tests, and their behavior at different learning ratios. The impact of initial groupings by k-NN, cross-correlation, and C-Means is also studied over limited years and the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC of 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding, data irregularities, or outliers. On average, three linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and the treatment of old people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be packed well into plans of limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
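
A minimal sketch of the two-step idea described above, a statistical association test followed by a linear ML predictor scored by ROC AUC and ACC, using synthetic data and illustrative names rather than the Scottish HSc records:

```python
# Hedged sketch: test the association between a target service's demand class
# and another service's demand (chi-squared), then predict the target class
# from candidate demands with logistic regression, scored by ROC AUC and ACC.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
other = rng.normal(size=(500, 3))                    # candidate services' demands
target = (other @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=500) > 0).astype(int)

# Association test on binned demand (high/low) versus the target class.
binned = (other[:, 0] > other[:, 0].mean()).astype(int)
table = np.array([[np.sum((binned == i) & (target == j)) for j in (0, 1)] for i in (0, 1)])
chi2, p_value, _, _ = chi2_contingency(table)

# Linear prediction of the target demand class from the candidate demands.
X_tr, X_te, y_tr, y_te = train_test_split(other, target, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print(p_value, roc_auc_score(y_te, scores), accuracy_score(y_te, clf.predict(X_te)))
```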

Keywords: class, cohorts, data frames, grouping, prediction, probability, services

Procedia PDF Downloads 234
2534 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units

Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani

Abstract:

There are many computationally demanding applications in science and engineering which need efficient algorithms implemented on high-performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention compared to traditional CPU-based hardware and have opened up new improvement venues in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models via both CPU and GPU are discussed and compared. Test problems include an Eulerian model of a two-dimensional incompressible laminar flow case and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a C++ serial code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude speed-up is observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited for parallel computations on the GPU, although the Eulerian formulation shows significant speed-up too.

Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation

Procedia PDF Downloads 417
2533 Analytical Investigation of Modeling and Simulation of Different Combinations of Sinusoidal Supplied Autotransformer under Linear Loading Conditions

Authors: M. Salih Taci, N. Tayebi, I. Bozkır

Abstract:

This paper investigates the operation of a sinusoidally supplied autotransformer for the different states of magnetic polarity of the primary and secondary terminals, covering four step-up and step-down analytical conditions. A new analytical model and equations for dot-marked and polarity-based step-up and step-down autotransformers are presented. These models are validated by simulation of the current and voltage waveforms for each state. The PSpice environment was used for the simulations.

Keywords: autotransformer modeling, autotransformer simulation, step-up autotransformer, step-down autotransformer, polarity

Procedia PDF Downloads 319
2532 Skills Needed Amongst Secondary School Students for Artificial Intelligence Development in Southeast Nigeria

Authors: Chukwuma Mgboji

Abstract:

Since the advent of artificial intelligence, robots have become a mainstay in developing societies. Robots are deployed in education, health, food, and other spheres of life. Nigeria, a country in West Africa, has a very low profile in the advancement of artificial intelligence, especially at the grassroots level. The benefits of artificial intelligence are not fully maximised and harnessed, and advances in artificial intelligence are perceived as impossible or regarded as irrelevant. This study seeks to ascertain the skills needed for the development of artificial intelligence amongst secondary school students in Nigeria. The study focused on South East Nigeria, covering five states, namely Imo, Abia, Ebonyi, Anambra, and Enugu. The sample size is 1,000 students drawn from five government-owned universities offering Computer Science, Computer Education, and Electronics Engineering across the five South East states. A survey method was used to solicit responses from respondents. The findings of the study identified mathematical skills, analytical skills, problem-solving skills, computing skills, programming skills, and algorithm skills, amongst others. The results of this study will, to the best of the author's knowledge, be highly beneficial to all stakeholders involved in the advancement and development of artificial intelligence.

Keywords: artificial intelligence, secondary school, robotics, skills

Procedia PDF Downloads 155
2531 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)

Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira

Abstract:

Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome the lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demand of the population, taking into account their habits and the risks that these foods may cause. Wheat and its by-products, such as semolina, have been strongly indicated as food vehicles since they are widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Methods for the analysis and quantification of these kinds of components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as near-infrared spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements and yields large amounts of data. Therefore, NIR spectroscopy requires calibration with mathematical and statistical tools (chemometrics) to extract analytical information from the corresponding spectra, such as principal component analysis (PCA) and linear discriminant analysis (LDA). PCA is well suited for NIR, since it can handle many spectra at a time and be used for unsupervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectra to a smaller number of latent variables for further interpretation. On the other hand, LDA is a supervised method that searches for the canonical variables (CV) with the maximum separation among different categories; in LDA, the first CV is the direction of maximum ratio between inter- and intra-class variances. The present work used a portable near-infrared (NIR) spectrometer for the identification and classification of pure and fiber-fortified semolina samples. The fiber was added to semolina in two different concentrations, and after spectra acquisition, the data were used for PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA ranged between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
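
A minimal sketch of the chemometric pipeline described above, with PCA scores for unsupervised inspection and LDA for supervised classification under cross-validation; the spectra matrix and class labels below are synthetic placeholders:

```python
# Hedged sketch: PCA on NIR spectra for unsupervised inspection, then LDA to
# discriminate pure vs. fiber-fortified classes, scored by cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
spectra = rng.normal(size=(90, 300))       # placeholder absorbances (samples x wavelengths)
labels = np.repeat([0, 1, 2], 30)          # pure, fiber level 1, fiber level 2
spectra += labels[:, None] * 0.05          # synthetic class-dependent offset

scores = PCA(n_components=2).fit_transform(spectra)     # PCA score-plot coordinates

lda = make_pipeline(StandardScaler(), PCA(n_components=10),
                    LinearDiscriminantAnalysis())
cv_acc = cross_val_score(lda, spectra, labels, cv=5)    # cross-validated accuracy
print(scores[:3], cv_acc.mean())
```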

Keywords: Chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina

Procedia PDF Downloads 212
2530 Quantified Metabolomics for the Determination of Phenotypes and Biomarkers across Species in Health and Disease

Authors: Miroslava Cuperlovic-Culf, Lipu Wang, Ketty Boyle, Nadine Makley, Ian Burton, Anissa Belkaid, Mohamed Touaibia, Marc E. Surrette

Abstract:

Metabolic changes are one of the major factors in the development of a variety of diseases in various species. The metabolism of agricultural plants is altered following infection with pathogens, sometimes contributing to resistance; at the same time, pathogens use metabolites for infection and progression. In humans, altered metabolism is a hallmark of cancer development, for example. Quantified metabolomics data combined with other omics or clinical data and analyzed using various unsupervised and supervised methods can lead to better diagnosis and prognosis. They can also provide information about resistance as well as knowledge of compounds significant for disease progression or prevention. In this work, different methods for metabolomics quantification and analysis from nuclear magnetic resonance (NMR) measurements, used for the investigation of disease development in wheat and in human cells, are presented. One-dimensional 1H NMR spectra are used extensively for metabolic profiling due to their high reliability, wide range of applicability, speed, trivial sample preparation, and low cost. This presentation describes a new method for metabolite quantification from NMR data that combines alignment of standard spectra to sample spectra followed by multivariate linear regression optimization of the spectra of assigned metabolites against the samples' spectra. Several different alignment methods were tested, and the multivariate linear regression result was compared with other quantification methods. Quantified metabolomics data can be analyzed in a variety of ways, and we present different clustering methods used for phenotype determination, network analysis providing knowledge about the relationships between metabolites through the metabolic network, and biomarker selection providing novel markers. These analysis methods have been utilized for the investigation of Fusarium head blight resistance in wheat cultivars as well as for the analysis of the effect of estrogen receptor and carbonic anhydrase activation and inhibition on breast cancer cell metabolism. Metabolic changes in spikelets of the wheat cultivars FL62R1, Stettler, MuchMore, and Sumai3 following Fusarium graminearum infection were explored. Extensive 1D 1H and 2D NMR measurements provided information for detailed metabolite assignment and quantification, leading to possible metabolic markers discriminating the resistance level in wheat subtypes. The quantification data are compared with results obtained using other published methods. Fusarium-infection-induced metabolic changes in different wheat varieties are discussed in the context of the metabolic network and resistance. Quantitative metabolomics has also been used for the investigation of the effect of targeted enzyme inhibition in cancer. In this work, the effect of 17β-estradiol and ferulic acid on the metabolism of ER+ breast cancer cells has been compared to their effect on ER- control cells, and the effect of carbonic anhydrase inhibitors on the observed metabolic changes resulting from ER activation has also been determined. Metabolic profiles were studied using 1D and 2D metabolomic NMR experiments, combined with the identification and quantification of metabolites, and the annotation of the results is provided in the context of biochemical pathways.
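
A minimal sketch of the regression-based quantification idea described above, assuming aligned reference spectra for the assigned metabolites and a sample spectrum on a common chemical-shift grid (all arrays below are synthetic):

```python
# Hedged sketch: express a sample 1H NMR spectrum as a non-negative linear
# combination of aligned metabolite reference spectra; the fitted coefficients
# act as relative concentrations (the multivariate linear regression step).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_points, n_metabolites = 2000, 5
references = np.abs(rng.normal(size=(n_points, n_metabolites)))  # aligned reference spectra
true_conc = np.array([1.0, 0.3, 0.0, 2.0, 0.5])
sample = references @ true_conc + rng.normal(0, 0.01, n_points)  # synthetic sample spectrum

conc, residual = nnls(references, sample)     # non-negative least-squares fit
print(conc)                                   # estimated relative concentrations
```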

Keywords: metabolic biomarkers, metabolic network, metabolomics, multivariate linear regression, NMR quantification, quantified metabolomics, spectral alignment

Procedia PDF Downloads 338
2529 Calculation of the Thermal Stresses in an Elastoplastic Plate Heated by Local Heat Source

Authors: M. Khaing, A. V. Tkacheva

Abstract:

The work is devoted to solving the problem of temperature stresses caused by point heating of a round plate. The plate is made of an elastoplastic material, so the Prandtl-Reuss model is used. A piecewise-linear Ishlinsky-Ivlev flow condition, in which the yield stress depends on the temperature, is taken as the loading surface. Piecewise-linear conditions (Tresca or Ishlinsky-Ivlev), in contrast to the Mises condition, make it possible to obtain solutions of the equilibrium equation in analytical form. In the problem under consideration, it is impossible to obtain a solution using the Tresca conditions, because the equilibrium equation ceases to be satisfied when two Tresca conditions are fulfilled at once. Using the Ishlinsky-Ivlev plastic flow conditions allows one to solve the problem. At the same time, there is no solution on the edge of the Ishlinsky-Ivlev hexagon in the plane-stress state; therefore, the authors propose to jump from edge to edge of the yield hexagon, which makes it possible to obtain an analytical solution. The paper compares solutions of the problem of plate thermal deformation. One of the solutions was obtained under the condition that the elastic moduli (Young's modulus, Poisson's ratio) depend on temperature; the yield point is assumed to be parabolically temperature dependent. The main results of the comparison are that the region of irreversible deformation is larger in the calculations obtained for the problem with constant elastic moduli, and there is no repeated plastic flow in the solution of the problem with temperature-dependent elastic moduli. The absolute value of the irreversible deformations is higher for the solution of the problem in which the elastic moduli are constant; there are also insignificant differences in the distribution of the residual stresses.

Keywords: temperature stresses, elasticity, plasticity, Ishlinsky-Ivlev condition, plate, annular heating, elastic moduli

Procedia PDF Downloads 142
2528 Fatigue Analysis of Spread Mooring Line

Authors: Chanhoe Kang, Changhyun Lee, Seock-Hee Jun, Yeong-Tae Oh

Abstract:

An offshore floating structure maintains a fixed position under various environmental conditions by means of a mooring system. Environmental conditions, vessel motions, and mooring loads are applied to the mooring lines as dynamic tension. Because the global responses of a mooring system in deep water consist of wave-frequency and low-frequency responses, they should be calculated by time-domain analysis due to their non-linear dynamic characteristics. Taking into account all mooring loads, environmental conditions, and added mass and damping terms at each time step requires considerable computation time and capacity. Thus, under the premise that reliable fatigue damage can be derived through a reasonable analysis method, it is necessary to reduce the number of analysis cases through sensitivity studies and appropriate assumptions. In this paper, fatigue effects are studied for a spread mooring system connected to an oil FPSO positioned in deep water offshore West Africa. The target FPSO, with 2 Mbbls of storage, has 16 spread mooring lines (4 bundles x 4 lines). Various sensitivity studies are performed for environmental loads, type of response, vessel offsets, mooring position, loading conditions, and riser behavior, and the effect of each parameter on fatigue damage is investigated through fatigue analysis. Based on the sensitivity studies, the following results are presented: wave loads are more dominant in terms of fatigue than other environmental conditions; the wave-frequency response causes higher fatigue damage than the low-frequency response; a larger vessel offset increases the mean tension and so results in increased fatigue damage; the external line of each bundle shows the highest fatigue damage, governed by vessel pitch motion due to swell wave conditions; among the three loading conditions, the ballast condition has the highest fatigue damage due to higher tension; and the riser damping arising from riser behavior tends to reduce the fatigue damage. The various analysis results obtained from these sensitivity studies can be used as a reference for a simplified fatigue analysis of spread mooring lines.

Keywords: mooring system, fatigue analysis, time domain, non-linear dynamic characteristics

Procedia PDF Downloads 334
2527 Density Functional Theory (DFT) Study of the Structural and Phase Transition Properties of ThC and ThN: LDA vs. GGA Computations

Authors: Hamza Rekab Djabri, Salah Daoud

Abstract:

The present paper deals with the computation of the structural and electronic properties of the ThC and ThN compounds using density functional theory within the generalized gradient approximation (GGA) and the local density approximation (LDA). We employ the full-potential linear muffin-tin orbital (FP-LMTO) method as implemented in the Lmtart code. The structural parameters are examined in eight different structures: NaCl (B1), CsCl (B2), zinc blende (B3), NiAs (B8), PbO (B10), wurtzite (B4), HCP (A3), and β-Sn (A5). The equilibrium lattice parameter, the bulk modulus, and its pressure derivative are presented for all calculated phases. The calculated ground-state properties are in good agreement with available experimental and theoretical results.
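
Equilibrium lattice parameters, bulk moduli, and pressure derivatives of this kind are typically extracted by fitting total energy versus volume data to an equation of state; a minimal sketch of such a fit (third-order Birch-Murnaghan form, with synthetic E(V) points rather than the paper's data) is:

```python
# Hedged sketch: fit E(V) points to the 3rd-order Birch-Murnaghan equation of
# state to extract V0 (equilibrium volume), B0 (bulk modulus) and B0' (its
# pressure derivative). The data below are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, B0p):
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

rng = np.random.default_rng(0)
V = np.linspace(30.0, 50.0, 11)                          # cell volumes [A^3]
E = birch_murnaghan(V, -10.0, 40.0, 1.2, 4.0) + rng.normal(0.0, 1e-4, V.size)

popt, _ = curve_fit(birch_murnaghan, V, E, p0=(E.min(), V[np.argmin(E)], 1.0, 4.0))
E0, V0, B0, B0p = popt
print(V0, B0 * 160.2177, B0p)                            # B0 converted from eV/A^3 to GPa
```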

Keywords: DFT, GGA, LDA, structural properties, ThC, ThN

Procedia PDF Downloads 98
2526 Bias in the Estimation of Covariance Matrices and Optimality Criteria

Authors: Juan M. Rodriguez-Diaz

Abstract:

The precision of parameter estimators in the Gaussian linear model is traditionally accounted for by the variance-covariance matrix of the asymptotic distribution. However, this measure can underestimate the true variance, especially for small samples. Traditionally, optimal design theory pays attention to this variance through its relationship with the model's information matrix. For this reason it seems convenient, at least in some cases, to adapt the optimality criteria in order to obtain the best designs for the actual variance structure; otherwise the loss in efficiency of the designs obtained with the traditional approach may be very important.

Keywords: correlated observations, information matrix, optimality criteria, variance-covariance matrix

Procedia PDF Downloads 443