Search results for: psychomotor performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12856

8446 Evaluation of the Energy Performance and Emissions of an Aircraft Engine: J69 Using Fuel Blends of Jet A1 and Biodiesel

Authors: Gabriel Fernando Talero Rojas, Vladimir Silva Leal, Camilo Bayona-Roa, Juan Pava, Mauricio Lopez Gomez

Abstract:

The substitution of conventional aviation fuels with biomass-derived alternative fuels is an emerging field of study in aviation transport, mainly because of the sector's energy consumption, its contribution to global greenhouse gas (GHG) emissions, and fossil fuel price fluctuations. Nevertheless, several challenges remain, such as the biofuel production cost and its degradative effect on fuel systems, which alters operating safety. Moreover, experimentation on full-scale aeronautic turbines is expensive and complex, so most research has been limited to testing small-size turbojets, leaving a major gap in information regarding the effects on energy performance and emissions. The main purpose of the current study is to present the results of experimentation on a full-scale military turbojet engine J69-T-25A (presented in Fig. 1) with a 640 kW power rating, using blends of Jet A1 with oil palm biodiesel. The main findings concern the thrust specific fuel consumption (TSFC), the engine global efficiency (η), the air/fuel ratio (AFR), and the volume fractions of O2, CO2, CO, and HC. Two fuels are used in the present study: a commercial Jet A1 and a Colombian palm oil biodiesel. The experimental plan is conducted using biodiesel volume contents (w_BD) from 0 % (B0) to 50 % (B50). The engine operating regimes are set to Idle, Cruise, and Take-off conditions. The turbojet engine J69 is used by the Colombian Air Force, and it is installed on a testing bench with the instrumentation specified in the technical manual of the engine. Increasing w_BD from 0 % to 50 % reduces η by nearly 3.3 % and the thrust force by 26.6 % at the Idle regime. These variations are related to the reduction of the heating value (HHV_ad) of the fuel blend. The evolved CO and HC tend to be reduced in all operating conditions when w_BD increases. Furthermore, a reduction of the atomization angle is presented in Fig. 2, indicating poor atomization in the fuel nozzle injectors at higher biodiesel contents as the viscosity of the fuel blend increases. An evolution of cloudiness is also observed during the shutdown procedure, as presented in Fig. 3a, particularly above 20 % biodiesel content in the fuel blend. This promotes the contamination of some components of the combustion chamber of the J69 engine with soot and unburned matter (Fig. 3). Thus, biodiesel contents above 20 % are not recommended, in order to avoid a significant decrease of η and the thrust force. A more detailed examination of the mechanical wear of the main components of the engine is advised for further studies.
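
For readers unfamiliar with the reported metrics, the sketch below illustrates how TSFC, AFR, and the blend heating value are typically computed from test-bench measurements; the input values and the mass-weighted blend formula are simplifying placeholder assumptions, not data or definitions taken from this study.

```python
# Illustrative calculation of the reported performance metrics from test-bench
# measurements. All input values are placeholders, not data from this study.

def blend_hhv(w_bd, hhv_jet_a1=46.0e6, hhv_biodiesel=40.0e6):
    """Approximate heating value of the blend (J/kg), assuming a simple
    mass-weighted average of the neat-fuel heating values."""
    return (1.0 - w_bd) * hhv_jet_a1 + w_bd * hhv_biodiesel

def performance(thrust_n, m_dot_fuel, m_dot_air, w_bd):
    tsfc = m_dot_fuel / thrust_n      # thrust specific fuel consumption, kg/(N*s)
    afr = m_dot_air / m_dot_fuel      # air/fuel ratio
    hhv = blend_hhv(w_bd)             # heating value drops as w_bd rises, consistent
                                      # with the reduced efficiency and thrust reported above
    return tsfc, afr, hhv

print(performance(thrust_n=4000.0, m_dot_fuel=0.12, m_dot_air=8.0, w_bd=0.2))
```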

Keywords: aviation, air to fuel ratio, biodiesel, energy performance, fuel atomization, gas turbine

Procedia PDF Downloads 109
8445 Design and Analysis of a Piezoelectric Linear Motor Based on Rigid Clamping

Authors: Chao Yi, Cunyue Lu, Lingwei Quan

Abstract:

Piezoelectric linear motors have the characteristics of great electromagnetic compatibility, high positioning accuracy, compact structure and no deceleration mechanism, which make them promising for micro-miniature precision drive systems. However, most piezoelectric motors employ flexible clamping, which has insufficient rigidity and is difficult to use for rapid positioning. Another problem is that this clamping method seriously affects the vibration efficiency of the vibrating unit. In order to solve these problems, this paper proposes a piezoelectric stack linear motor based on double-end rigid clamping. First, a piezoelectric linear motor with a length of only 35.5 mm is designed. This motor is mainly composed of a motor stator, a driving foot, a ceramic friction strip, a linear guide, a pre-tightening mechanism and a base. This structure is much simpler and smaller than most similar motors, and it is easy to assemble as well as to realize precise control. In addition, the properties of the piezoelectric stack are reviewed, and in order to obtain the elliptic motion trajectory of the driving head, a driving scheme based on a longitudinal-shear composite stack is proposed. Finally, impedance analysis and speed performance testing were performed on the piezoelectric linear motor prototype. The motor reaches a measured speed of up to 25.5 mm/s under the excitation of a signal voltage of 120 V and a frequency of 390 Hz. The results show that the proposed piezoelectric stacked linear motor achieves good performance. It can run smoothly over a large speed range, which makes it suitable for various precision control applications in medical imaging, aerospace, precision machinery and many other fields.

Keywords: piezoelectric stack, linear motor, rigid clamping, elliptical trajectory

Procedia PDF Downloads 153
8444 Deprivation of Visual Information Affects Differently the Gait Cycle in Children with Different Level of Motor Competence

Authors: Miriam Palomo-Nieto, Adrian Agricola, Rudolf Psotta, Reza Abdollahipour, Ludvik Valtr

Abstract:

The importance of vision and the visual control of movement have been highlighted in the literature on motor control, and many studies have demonstrated that children with low motor competence may rely more heavily on vision to perform movements than their typically developing peers. The aim of the study was to highlight the effects of different visual conditions on motor performance during walking in children with different levels of motor coordination. Participants (n = 32, mean age = 8.5 years, SD = 0.5) were divided into two groups, typical development (TD) and low motor coordination (LMC), based on the scores of the Movement Assessment Battery for Children (MABC-2). They were asked to walk along a 10-meter walkway where the Optojump-Next instrument was installed in a portable laboratory (15 x 3 m), ensuring that all participants had the same visual information. They walked at a self-selected speed under four visual conditions: full vision (FV), limited vision 100 ms (LV-100), limited vision 150 ms (LV-150) and non-vision (NV). For visual occlusion, participants were equipped with PLATO goggles that shut for 100 and 150 ms, respectively, within each 2 s. Data were analyzed in a two-way mixed-effect ANOVA, 2 (TD vs. LMC) x 4 (FV, LV-100, LV-150 & NV), with repeated measures on the last factor (p ≤ .05). Results indicated that TD children walked faster and with longer normalized step lengths and strides than LMC children. For TD children, the percentages of single support and swing time were higher than for LMC children. However, the percentages of loading response and pre-swing were higher in the LMC children than in the TD children. These findings indicate that walking parameters can be used to identify different levels of motor coordination in children. Likewise, LMC children showed shorter percentages in the parameters involving single-leg support, supporting the idea of balance problems.
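
As a rough illustration of the 2 x 4 mixed design described above, the sketch below runs a mixed ANOVA with the pingouin library; the dependent variable, column names, and toy data are assumptions, not the study's dataset.

```python
# Minimal sketch of the 2 (group) x 4 (visual condition) mixed ANOVA,
# with repeated measures on the visual-condition factor. Toy data only.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
subjects = [f"s{i}" for i in range(32)]
groups = ["TD"] * 16 + ["LMC"] * 16
conditions = ["FV", "LV-100", "LV-150", "NV"]

rows = []
for subj, grp in zip(subjects, groups):
    base = 1.2 if grp == "TD" else 1.0          # hypothetical walking speed (m/s)
    for cond in conditions:
        rows.append({"subject": subj, "group": grp, "condition": cond,
                     "speed": base + rng.normal(scale=0.1)})
df = pd.DataFrame(rows)

# Between-subjects factor: group; within-subjects factor: condition.
aov = pg.mixed_anova(data=df, dv="speed", within="condition",
                     subject="subject", between="group")
print(aov.round(3))
```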

Keywords: visual information, motor performance, walking pattern, optojump

Procedia PDF Downloads 574
8443 Purification of Eicosapentaenoic Acid (EPA) and Docosahexaenoic Acid (DHA) from Fish Oil Using HPLC Method and Investigation of Their Antibacterial Effects on Some Pathogenic Bacteria

Authors: Yılmaz Uçar, Fatih Ozogul, Mustafa Durmuş, Yesim Ozogul, Ali Rıza Köşker, Esmeray Kuley Boğa, Deniz Ayas

Abstract:

The aim of this study was to purify eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), essential fatty acids, from trout oil using a high-performance liquid chromatography (HPLC) method, and to bioconvert the EPA and DHA into bioconverted EPA (bEPA) and bioconverted DHA (bDHA) extracts by P. aeruginosa PR3. Moreover, the in vitro antibacterial activity of bEPA and bDHA was investigated using the disc diffusion method and the minimum inhibitory concentration (MIC). EPA and DHA concentrations of 11.1% and 15.9% in trout oil increased to 58.64% and 40.33%, respectively, after HPLC optimisation. In this study, EPA- and DHA-enriched products were obtained, which can be used as valuable supplements for food and pharmaceutical purposes. The bioconverted EPA and DHA exhibited antibacterial activities against two Gram-positive bacteria (Listeria monocytogenes ATCC 7677 and Staphylococcus aureus ATCC 29213) and six Gram-negative bacteria (Pseudomonas aeruginosa ATCC 27853, Escherichia coli ATCC 25922, Klebsiella pneumoniae ATCC 700603, Enterococcus faecalis ATCC 29212, Aeromonas hydrophila NCIMB 1135, and Salmonella Paratyphi A NCTC 13). Inhibition zones and MIC values of bEPA and bDHA against the bacterial strains ranged from 7 to 12 mm and from 350 to 2350 μg/mL, respectively. Our results suggest that the crude extracts from the bioconversion of EPA and DHA by P. aeruginosa PR3 can be considered promising antimicrobials for improving food safety by controlling foodborne pathogens.

Keywords: High-Performance Liquid Chromatography (HPLC), docosahexaenoic acid, DHA, eicosapentaenoic acid, EPA, minimum inhibitory concentration, MIC, Pseudomonas aeruginosa PR3

Procedia PDF Downloads 498
8442 On-Chip Ku-Band Bandpass Filter with Compact Size and Wide Stopband

Authors: Jyh Sheen, Yang-Hung Cheng

Abstract:

This paper presents the design of a microstrip bandpass filter with compact size and wide stopband using a 0.15-μm GaAs pHEMT process. The wide stopband is achieved by suppressing the first and second harmonic resonance frequencies. A slow-wave coupled stepped-impedance resonator with a cross-coupled structure is adopted to design the bandpass filter. A two-resonator filter was fabricated with a 13.5 GHz center frequency, and an 11% bandwidth was achieved. The devices are simulated using the ADS design software. This device shows a compact size and a very low insertion loss of 2.6 dB. Microstrip planar bandpass filters have been widely adopted in various communication applications due to the attractive features of compact size and ease of fabrication. Various planar resonator structures have been suggested. In order to achieve a wide stopband and reduce interference outside the passband, various planar resonator designs have also been proposed to suppress the higher-order harmonics of the designed center frequency. Various modifications of the traditional hairpin structure have been introduced to reduce the large design area of hairpin designs. The stepped-impedance, slow-wave open-loop, and cross-coupled resonator structures have been studied to miniaturize hairpin resonators. In this study, to suppress the spurious harmonic bands and further reduce the filter size, a modified hairpin-line bandpass filter with a cross-coupled structure is proposed by introducing the stepped-impedance resonator design as well as the slow-wave open-loop resonator structure. In this way, a very compact circuit size as well as a very wide upper stopband can be achieved and realized on a Rogers 4003C substrate. On the other hand, filters constructed with integrated circuit technology become more attractive because they enable the integration of the microwave system on a single chip (SoC). To examine the performance of this design structure in an integrated circuit, the filter is fabricated with the 0.15-μm GaAs pHEMT integrated circuit process. This pHEMT process also provides much better circuit performance for high-frequency designs than those made on a PCB. The design example was implemented in GaAs with a center frequency of 13.5 GHz to examine the performance at higher frequency in detail. The occupied area is only about 1.09 × 0.97 mm². The ADS software is used to design the modified filters to suppress the first and second harmonics.

Keywords: microstrip resonator, bandpass filter, harmonic suppression, GaAs

Procedia PDF Downloads 326
8441 The Role of Privatization on the Formulation of Productive Supply Chain: The Case of Ethiopian Firms

Authors: Merhawit Fisseha Gebremariam, Yohannes Yebabe Tesfay

Abstract:

This study focuses on the formulation of a sustainable, effective, and efficient supply chain strategy framework for Ethiopian privatized firms. The study examined the role of privatization in productive sourcing, production, and delivery in relation to Ethiopian firms' performance. To analyze our hypotheses, the authors applied the concepts of Key Performance Indicators (KPI), strategic outsourcing, purchasing portfolio analysis, and Porter's marketing analysis. The authors selected ten privatized companies and compared their financial, market expansion, and sustainability performances. The Chi-Square test showed that, at the 5% level of significance, privatization and outsourcing activities can assist the business performance of Ethiopian firms in terms of product promotion and new market expansion. At the 5% level of significance, the independent t-test showed that firms privatized by Ethiopian investors had stronger financial performance than those privatized by foreign investors. Furthermore, Ethiopian firms would do better to apply both cost leadership and differentiation strategies to thrive in their business areas. Ethiopian firms need to implement the supply chain operations reference (SCOR) model as a framework that supports communication, links the supply chain partners, and enhances productivity. The government of Ethiopia should be aware that the privatization of firms by Ethiopian investors will strengthen the economy. Otherwise, the privatization process will be risky for the country, and therefore the government of Ethiopia should halt such activities.
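
A minimal sketch of the two tests named above (a Chi-Square test of association and an independent t-test) using scipy.stats is shown below; the contingency table and performance figures are invented placeholders, not the study's data.

```python
# Illustrative use of the Chi-Square test and the independent t-test mentioned
# above. All numbers are placeholders, not data from the study.
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: outsourcing (yes/no) vs. new-market expansion (yes/no)
table = np.array([[12, 3],
                  [4, 11]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, p = {p_chi2:.3f}")

# Hypothetical financial-performance scores for firms privatized by
# Ethiopian vs. foreign investors
ethiopian = [7.1, 6.8, 7.5, 6.9, 7.3]
foreign = [6.2, 5.9, 6.5, 6.1, 6.0]
t, p_t = stats.ttest_ind(ethiopian, foreign, equal_var=False)
print(f"t = {t:.2f}, p = {p_t:.3f} (5% significance level)")
```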

Keywords: correlation analysis, market strategies, KPIs, privatization, risk and Ethiopia

Procedia PDF Downloads 68
8440 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning

Authors: Joseph George, Anne Kotteswara Roa

Abstract:

Skin disease is one of the most common health issues faced by people today. Skin cancer (SC) is one of them, and its detection relies on skin biopsy results and the expertise of doctors, but this consumes considerable time and can yield inaccurate results. Detecting skin cancer at an early stage is a challenging task, yet the disease easily spreads to the whole body and leads to an increase in the mortality rate. Skin cancer is curable when it is detected at an early stage. In order to classify skin cancer correctly and accurately, the critical task is skin cancer identification and classification, which is largely based on disease features such as shape, size, color, and symmetry. Many skin diseases share similar characteristics; hence, it is challenging to select important features from skin cancer image datasets. The diagnostic accuracy for skin cancer can therefore be improved by an automated detection and classification framework, which also addresses the scarcity of human experts. Recently, deep learning techniques such as convolutional neural networks (CNN), deep belief networks (DBN), artificial neural networks (ANN), recurrent neural networks (RNN), and long short-term memory networks (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measures are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases while computational complexity and time consumption are mitigated.
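
The evaluation metrics listed above can be computed directly from a model's predictions; the sketch below does so with scikit-learn on toy binary labels (the malignant/benign framing is an illustrative assumption, not a claim about any surveyed paper).

```python
# Computing the survey's evaluation metrics (accuracy, precision, recall/sensitivity,
# specificity, F-measure) from predicted vs. true labels. Toy labels only.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # 1 = malignant, 0 = benign (illustrative)
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   :", accuracy_score(y_true, y_pred))
print("precision  :", precision_score(y_true, y_pred))
print("recall     :", recall_score(y_true, y_pred))       # = sensitivity
print("specificity:", tn / (tn + fp))
print("F-measure  :", f1_score(y_true, y_pred))
```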

Keywords: skin cancer, deep learning, performance measures, accuracy, datasets

Procedia PDF Downloads 129
8439 Emerging Dimensions of Intrinsic Motivation for Effective Performance

Authors: Prachi Bhatt

Abstract:

A motivated workforce is an important asset of an organisation. Intrinsic motivation is one of the key aspects of people operations and performance. Research has emphasized the significance of internal factors in individuals' motivation. In the changing business scenario, it is a challenge for organizations' leaders to inspire and motivate their workforce. The present study deals with the intrinsic motivational potential of an individual, which governs the innate capability driving him or her to behave or perform amid changing work environments, tasks, and teams. Differences at the individual level significantly influence differences in levels of motivation. In this context, the present research attempts to explore the behavioral trait dimensions that influence the motivational potential of an individual. The present research emphasizes the significance of intrinsic motivational potential and of exploring the differences in intrinsic motivational potential levels of individuals at the workplace. Thus, this paper empirically tests a framework of behavioral traits that affect the motivational potential of an individual. In two studies, Study 1 and Study 2, exploratory factor analysis and confirmatory factor analysis, respectively, indicated a reliable measure assessing the intrinsic motivational potential of an individual. Given the variety of challenges in motivating the contemporary workforce, and with the increasing importance of intrinsic motivation, the paper discusses the relevance of the findings and of the measure assessing intrinsic motivational potential. Assessment of such behavioral traits would assist in the effective realization of the intrinsic motivational potential of individuals. Additionally, the paper discusses the practical implications and outlines the scope for future research.

Keywords: behavioral traits, individual differences, intrinsic motivational potential, intrinsic motivation, motivation, workplace motivation

Procedia PDF Downloads 196
8438 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images

Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn

Abstract:

The detection and segmentation of mitochondria from fluorescence microscopy images are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make the task of detection and segmentation challenging. In the literature, a number of open-source software tools and artificial intelligence (AI) methods have been described for analyzing mitochondrial images, achieving remarkable classification and quantitation results. However, the combined expertise in the medical field and in AI required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the low-cost, open-source Python and OpenCV libraries, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated using a mitochondrial fluorescence dataset. Ground truth labels generated using a Lab kit were also used to evaluate the performance of our detection and segmentation model. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology from the fluorescence mitochondrial dataset. A discussion on the methods and on future perspectives of fully automated frameworks concludes the paper.
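
A rough sketch of what the three stages named above (pre-processing, binarization, coarse-to-fine segmentation) could look like with OpenCV is given below; the file name, CLAHE settings, and area threshold are illustrative assumptions rather than the authors' parameters.

```python
# Rough sketch of a pre-processing -> binarization -> segmentation pipeline
# with OpenCV, in the spirit of the stages described above.
import cv2
import numpy as np

img = cv2.imread("mitochondria.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# 1. Pre-processing: contrast enhancement (CLAHE) and denoising
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)
blurred = cv2.GaussianBlur(enhanced, (5, 5), 0)

# 2. Binarization: Otsu threshold separates foreground from background
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3. Coarse-to-fine segmentation: connected components, then filter small blobs
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
mask = np.zeros_like(binary)
for i in range(1, n_labels):                      # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] >= 30:          # assumed minimum object area (px)
        mask[labels == i] = 255

cv2.imwrite("mitochondria_mask.png", mask)
```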

Keywords: 2D, binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation

Procedia PDF Downloads 357
8437 Study of the Effect of the Contra-Rotating Component on the Performance of the Centrifugal Compressor

Authors: Van Thang Nguyen, Amelie Danlos, Richard Paridaens, Farid Bakir

Abstract:

This article presents a study of the effect of a contra-rotating component on the efficiency of centrifugal compressors. A contra-rotating centrifugal compressor (CRCC) is constructed using two independent rotors rotating in opposite directions, replacing the single rotor of a conventional centrifugal compressor (REF). To respect the geometrical parameters of the REF, the two rotors of the CRCC are designed, based on a single-rotor geometry, using the hub and shroud length ratio parameter of the meridional contour. First, the first rotor is designed by choosing a value of the length ratio. Then, the second rotor is calculated to be adapted to the fluid flow of the first rotor according to aerodynamic principles. In this study, four values of the length ratio, 0.3, 0.4, 0.5, and 0.6, are used to create four configurations, CF1, CF2, CF3, and CF4, respectively. For comparison purposes, the circumferential velocity at the outlet of the REF and the CRCC is preserved, which means that the single rotor of the REF and the second rotor of the CRCC rotate at the same speed of 16,000 rpm. The speed of the first rotor in this case is chosen to be equal to the speed of the second rotor. CFD simulations are conducted to compare the performance of the CRCC and the REF under the same boundary conditions. The results show that the configuration with a higher length ratio gives a higher pressure rise; however, its efficiency is lower. An investigation over the entire operating range shows that CF1 is the best configuration in this case. In addition, the CRCC can improve the pressure rise as well as the efficiency by changing the speed of each rotor independently. The results of changing the first rotor speed show that, with a 130% speed increase, the pressure ratio rises by 8.7% while the efficiency remains stable at the flow rate of the design operating point.

Keywords: centrifugal compressor, contra-rotating, interaction rotor, vacuum

Procedia PDF Downloads 134
8436 Assessment on the Conduct of Arnis Competition in Pasuc National Olympics 2015: Basis for Improvement of Rules in Competition

Authors: Paulo O. Motita

Abstract:

The Philippine Association of State Universities and Colleges (PASUC) is an association of state-owned and operated higher learning institutions in the Philippines; it spearheads the conduct of the annual national athletic competitions for state colleges and universities, and Arnis is one of the regular sports. In 2009, Republic Act 9850 declared Arnis the national martial art and sport of the Philippines. Arnis, a traditional Filipino martial art, is a major sport in the annual Palarong Pambansa and other school-based sports events. The researcher, a Filipino martial arts master and former athlete, sought to determine the extent of acceptability of the Arnis rules in competition, which serves as the basis for developing those rules further. The study aimed to assess the conduct of the Arnis competition in the PASUC Olympics 2015 in Tuguegarao City, Cagayan, Philippines, covering the rules and the conduct itself as perceived by officiating officials, coaches, and athletes during the competition held on February 7-15, 2015. The descriptive method of research was used, and the survey questionnaire used as the data-gathering instrument was validated. The respondents were composed of 12 officiating officials, 19 coaches, and 138 athletes representing the different regions. Their responses were treated using means, percentages, and one-way analysis of variance. The study revealed that the conduct of the Arnis competition in the PASUC Olympics 2015 was rated at a low to moderate extent by the three groups of respondents in terms of officiating, scoring, and giving violations. Furthermore, there was no significant difference among the three groups of respondents in the assessment of Anyo and Labanan. Considering the findings of the study, the following conclusions were drawn: 1) there is a need to identify the criteria for judging in Anyo and to scrutinize the rules of the game for Labanan; 2) the three groups of respondents share the view that there were no clear technical guidelines for judging the performance of the Anyo event; 3) the three groups of respondents share the view that there were no clear technical guidelines for the majority rule of giving scores in Labanan; 4) Anyo performance should be rated according to the effectiveness of the techniques and the performance of the weapon(s) being used; and 5) regarding other issues and concerns, the rules of competition for Labanan should be improved, focusing on the application of majority rules for scoring; players should be given rest intervals; and clear guidelines and standard qualifications for officiating officials should be set.

Keywords: PASUC Olympics 2015, Arnis rules of competition, Anyo, Labanan, officiating

Procedia PDF Downloads 458
8435 Genotypic Variation in the Germination Performance and Seed Vigor of Safflower (Carthamus tinctorius L.)

Authors: Mehmet Demir Kaya, Engin Gökhan Kulan, Onur İleri, Süleyman Avcı

Abstract:

Due to variation in the seed size, shape, and oil content of safflower cultivars, germination and emergence performance are strongly influenced by seed characteristics. This study aimed to determine genotypic variation among safflower genotypes in thousand-seed weight, oil content, germination, and seed vigor using electrical conductivity (EC) and cold tests. In the study, the safflower lines ES37-5, ES38-4, ES43-11, ES55-14 and ES58-11, which were developed by the single-seed selection method, were used along with Dinçer and Remzibey-05 as standard varieties. The genotypes were grown under rainfed conditions in Eskişehir, Turkey, with four replications. The seeds of each genotype were subjected to standard germination and emergence tests at 25°C for 10 days with four replications and 50 seeds per replicate. The electrical conductivity test was performed at 25°C for 24 h to assess seed vigor. A cold test was also applied to each safflower genotype at 10°C for 4 days and then 25°C for 6 days. Results showed that the oil contents of the safflower genotypes differed. The highest oil content was determined in ES43-11 with 36.6%, while the lowest was 25.9% in ES38-4. The highest germination and emergence rates were obtained from ES55-14, with 96.5% and 73.0%, respectively. There was no significant difference among the safflower genotypes in EC values. The cold test showed that ES43-11 and ES55-14 gave the maximum germination percentages. It was concluded that genotypic factors, apart from soil and climatic conditions, play an important role in determining seed vigor, because safflower genotypes grown under the same conditions produced different seed vigor values.

Keywords: Carthamus tinctorius L., germination, emergence, cold test, electrical conductivity

Procedia PDF Downloads 370
8434 Recognizing Human Actions by Multi-Layer Growing Grid Architecture

Authors: Z. Gharaee

Abstract:

Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with others in a proper way. We perceive an action by observing the kinematics of the motions involved in the performance, and we use our experience and concepts to recognize the action correctly. Although building action concepts is a life-long process, which is repeated throughout life, we are very efficient in applying our learned concepts when analyzing motions and recognizing actions. Experiments on subjects observing actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture is proposed using growing grid layers. The first-layer growing grid receives pre-processed data of consecutive 3D postures of joint positions and applies heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. An ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. The time-invariant vectors are sent to the second-layer growing grid for categorization; this grid creates the clusters representing the actions. Finally, a one-layer neural network trained with a delta rule labels the action categories in the last layer. System performance has been evaluated in an experiment with the publicly available MSR-Action3D dataset. The dataset contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, and Pick Up and Throw. The growing grid architecture was trained over several random selections of generalization test data, with on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison between the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid architecture is superior to the SOM architecture in the action recognition task. The SOM architecture completes learning the same dataset of actions in around 150 epochs for each training of the first-layer SOM, while it takes 1200 epochs for each training of the second-layer SOM, and it achieves an average recognition accuracy of 90% on the generalization test data. In summary, the growing grid network preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, unsupervised learning, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids, the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is high representational demand.
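
To make the growth mechanism concrete, the following is a highly simplified, schematic sketch of one growing-grid layer: SOM-style neighbourhood updates plus insertion of a new row of neurons next to the unit with the highest win count. It is not the authors' implementation and omits the ordered vector representation, the second layer, and the labeling network.

```python
# Simplified sketch of a single growing-grid layer: SOM-style updates plus
# insertion of a row where representational demand (win count) is highest.
import numpy as np

rng = np.random.default_rng(0)

def train_growing_grid(data, init_shape=(2, 2), grow_steps=3, epochs_per_step=20,
                       lr=0.1, sigma=1.0):
    rows, cols = init_shape
    dim = data.shape[1]
    weights = rng.normal(size=(rows, cols, dim))
    for _ in range(grow_steps):
        wins = np.zeros((rows, cols))
        for _ in range(epochs_per_step):
            for x in data:
                # winner = best-matching unit
                dists = np.linalg.norm(weights - x, axis=2)
                r, c = np.unravel_index(np.argmin(dists), dists.shape)
                wins[r, c] += 1
                # neighbourhood-weighted update of the whole grid
                gr, gc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
                h = np.exp(-((gr - r) ** 2 + (gc - c) ** 2) / (2 * sigma ** 2))
                weights += lr * h[..., None] * (x - weights)
        # grow: insert a new row between the most-demanded unit and a neighbour
        r_max, _ = np.unravel_index(np.argmax(wins), wins.shape)
        r_nb = r_max + 1 if r_max + 1 < rows else r_max - 1
        new_row = 0.5 * (weights[r_max] + weights[r_nb])
        weights = np.insert(weights, max(r_max, r_nb), new_row, axis=0)
        rows += 1
    return weights

grid = train_growing_grid(rng.normal(size=(200, 3)))   # toy 3-D "posture" data
print(grid.shape)
```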

Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance

Procedia PDF Downloads 157
8433 Atomic Layer Deposition of Metal Oxides on Si/C Materials for the Improved Cycling Stability of High-Capacity Lithium-Ion Batteries

Authors: Philipp Stehle, Dragoljub Vrankovic, Montaha Anjass

Abstract:

Due to its high availability and extremely high specific capacity, silicon (Si) is the most promising anode material for next-generation lithium-ion batteries (LIBs). However, Si anodes suffer from large volume changes during cycling, causing an unstable solid-electrolyte interface (SEI). One approach for mitigating these effects is to embed Si particles into a carbon matrix to create silicon/carbon composites (Si/C). These typically show more stable electrochemical performance than bare silicon materials; nevertheless, the same failure mechanisms appear in a less pronounced form. In this work, we further improved the cycling performance of two commercially available Si/C materials by coating thin metal oxide films of different thicknesses onto the powders via atomic layer deposition (ALD). The coated powders were analyzed via ICP-OES and AFM measurements. Si/C-graphite anodes with automotive-relevant loadings (~3.5 mAh/cm²) were prepared from the materials and tested in half coin cells (HCCs) and full pouch cells (FPCs). During long-term cycling in FPCs, a significant improvement was observed for some of the ALD-coated materials. After 500 cycles, the capacity retention was already up to 10% higher compared to the pristine materials. Cycling of the FPCs continued until they reached a state of health (SOH) of 80%; by that point, up to three times as many cycles were achieved by ALD-coated anodes compared to pristine ones. Post-mortem analysis via various methods was carried out to evaluate the differences in SEI formation and thickness.

Keywords: silicon anodes, li-ion batteries, atomic layer deposition, silicon-carbon composites, surface coatings

Procedia PDF Downloads 121
8432 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings

Authors: Jude K. Safo

Abstract:

Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited in scope to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for a linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures. We demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
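
For context on the baselines mentioned (TransE and its relatives), the sketch below shows the standard TransE scoring idea, in which a triple (head, relation, tail) is scored by the distance between head + relation and tail; the entities, relations, and embeddings are toy placeholders.

```python
# Standard TransE-style scoring of knowledge-graph triples: a triple (h, r, t)
# is plausible when head + relation lies close to tail in embedding space.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entities = {name: rng.normal(size=dim) for name in ["supplier_A", "part_X", "warehouse_B"]}
relations = {name: rng.normal(size=dim) for name in ["ships", "stores"]}

def transe_score(head, relation, tail, norm=1):
    """Lower is better: distance between (head + relation) and tail."""
    return np.linalg.norm(entities[head] + relations[relation] - entities[tail], ord=norm)

# Rank candidate tails for (supplier_A, ships, ?) -- the link-prediction setting
candidates = sorted(entities, key=lambda t: transe_score("supplier_A", "ships", t))
print(candidates)
```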

Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics

Procedia PDF Downloads 68
8431 Numerical Performance Evaluation of a Savonius Wind Turbine Using Resistive Torque Modeling

Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines

Abstract:

The Savonius vertical-axis wind turbine is characterized by sufficient starting torque at low wind speeds and a simple design, and it does not require orientation to the wind direction; however, the developed power is lower than that of other types of wind turbines such as the Darrieus. To improve this performance, several studies have been carried out, for example optimizing the blade shape, using passive controls, and minimizing sources of power loss such as the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing a User Defined Function (UDF) that accounts for the resisting torque into the CFD model. This User Defined Function is developed to simulate the action of the wind on the rotor; it receives the moment coefficient as an input and computes the rotational velocity that should be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied to the turbine and on the resisting torque, which is considered a linear function. Linking the implemented User Defined Function with the CFD solver allows simulating the real operation of the Savonius turbine exposed to wind. It is observed that the wind turbine takes a while to reach the stationary regime where the rotational velocity becomes invariable; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one. The obtained results are in agreement with the available experimental results.
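
To make the torque balance concrete, a minimal stand-in for the UDF-solver coupling is sketched below: the rotor speed is integrated from the aerodynamic torque (built from a moment coefficient) minus a linear resisting torque. The inertia, friction coefficients, and the constant moment coefficient are placeholder assumptions; in the actual model the moment coefficient is supplied by the CFD solver at every time step.

```python
# Schematic stand-in for the UDF / CFD-solver coupling described above.
# Placeholder numbers; in the real model Cm is returned by the solver each step.
rho, R, H, V = 1.225, 0.5, 1.0, 7.0    # air density, rotor radius/height (m), wind speed (m/s)
A = 2 * R * H                          # projected area of the Savonius rotor (m^2)
J = 0.8                                # rotor moment of inertia (kg*m^2), assumed
c0, c1 = 0.05, 0.5                     # assumed linear resisting torque: T_res = c0 + c1*omega

def aero_torque(omega):
    cm = 0.25                          # placeholder moment coefficient (CFD would supply this)
    return 0.5 * rho * A * R * V**2 * cm

omega, dt = 0.0, 1e-3
for _ in range(200_000):               # march in time until the stationary regime
    domega = (aero_torque(omega) - (c0 + c1 * omega)) / J
    omega += domega * dt

tsr = omega * R / V                    # tip speed ratio at the stationary regime
print(f"omega = {omega:.2f} rad/s, tip speed ratio = {tsr:.2f}")
```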

Keywords: resistant torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performances

Procedia PDF Downloads 155
8430 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies

Authors: Jaballah Jamil

Abstract:

KLD (Peter Kinder, Steve Lydenberg and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not perform explicit monitoring of the rated firms, which suggests that KLD ratings may not include private information. Moreover, the incapacity of KLD to accurately predict the extra-financial rating of Enron casts doubt on the reliability of KLD ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates a number of equally weighted portfolio returns, excess stock returns, and the book-to-market ratio to different dimensions of KLD social responsibility ratings. We first find that, over the last two decades, KLD rating changes have a significant and negative influence on the stock returns and book-to-market ratios of rated firms. This finding suggests that a rise in the corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the Enron scandal. We find that after the Enron scandal this significant effect disappears. This finding supports the view that the Enron scandal annihilated KLD's effect on socially responsible investors. Therefore, our findings may call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.

Keywords: KLD social rating agency, investors' perception, investment decision, financial performance

Procedia PDF Downloads 439
8429 Enhancing Employee Innovative Behaviours Through Human Resource Wellbeing Practices

Authors: Jarrod Haar, David Brougham

Abstract:

The present study explores the links between supporting employee well-being and the potential benefits to employee performance. We focus on employee innovative work behaviors (IWBs), which have three stages: (1) development, (2) adoption, and (3) implementation of new ideas and work methods. We explore the role of organizational support focused on employee well-being via High-Performance Work Systems (HPWS). HPWS are HR practices designed to enhance employees' skills, commitment, and ultimately productivity. HPWS influence employee performance by building employees' skills, knowledge, and abilities; there is meta-analytic support for firm-level HPWS influencing firm performance, but less attention has been paid to employee outcomes, especially innovation. We examine the HPWS-wellbeing practices offered (e.g., EAPs, a well-being app, etc.) to capture organizational commitment to employee well-being. Under social exchange theory, workers should reciprocate their firm's offering of HPWS-wellbeing with greater effort towards IWBs. Further, we explore playful work design as a mediator; it represents employees proactively creating work conditions that foster enjoyment and challenge but do not require any design change to the job itself. We suggest HPWS-wellbeing can encourage employees to become more playful and, ultimately, more innovative. Finally, beyond direct effects, we examine whether these relations are similar by gender and ultimately test a moderated mediation model. Using N = 1135 New Zealand employees, we established measures with confirmatory factor analysis (CFA), and all measures had good psychometric properties (α > .80). We controlled for age, tenure, education, and hours worked, and analyzed the data using the PROCESS macro (version 4.2), specifically model 8 (moderated mediation). We analyzed overall IWB and then each of the three stages. Overall, we find HPWS-wellbeing is significantly related to overall IWBs and to each of the three stages (development, adoption, and implementation) individually. Similarly, HPWS-wellbeing shapes playful work design, and playful work design predicts overall IWBs and the three stages individually. Playful work design only partially mediates the effects of HPWS-wellbeing, which retains a significant indirect effect. Moderation effects are supported, with males reporting a stronger effect of HPWS-wellbeing on playful work design, but not on IWB (or any of the three stages), than females. Females report higher playful work design when HPWS-wellbeing is low, but the effect is reversed when HPWS-wellbeing is high (males higher). Thus, males respond more strongly under social exchange theory to HPWS-wellbeing, at least in expressing playful work design. Finally, evidence of moderated mediation effects is found for overall IWBs and the three stages: males report a significant indirect effect from HPWS-wellbeing on IWB (through playful work design), while female employees report no significant indirect effect; for females, the benefits of playful work design fully account for their IWBs. The models account for a small amount of variance in playful work design (12%) but more for IWBs (26%). The study highlights a gap in the literature on HPWS-wellbeing and provides empirical evidence of its importance for worker innovation. Further, the gendered effects suggest these benefits might not be equal. The findings provide useful insights for organizations on how HR practices that support employee well-being matter, although how they work for different genders needs further exploration.

Keywords: human resource practices, wellbeing, innovation, playful work design

Procedia PDF Downloads 81
8428 Designing the Maturity Model of Smart Digital Transformation through the Foundation Data Method

Authors: Mohammad Reza Fazeli

Abstract:

Nowadays, the fourth industrial revolution, known as the digital transformation of industries, is seen as one of the most significant developments in the history of structural change, leading to the high-tech and tactical dominance of organizations. Despite these benefits, the undefined and non-transparent nature of the after-effects of investing in digital transformation has deterred many organizations from attempting to enter this area. One of the important frameworks for understanding digital transformation in organizations is the digital transformation maturity model. This model includes two main parts: digital transformation maturity dimensions and digital transformation maturity stages. Mediating factors between digital maturity and organizational performance at the individual level (e.g., motivations, attitudes) and at the organizational level (e.g., organizational culture) should be considered. For successful technology adoption processes, organizational development and human resources must go hand in hand and be supported by a sound communication strategy. Maturity models are developed to help organizations by providing broad guidance and a roadmap for improvement. However, a systematic review and analysis of the literature shows that none of the 18 maturity models in the field of digital transformation fully meets all the criteria of appropriateness, completeness, clarity, and objectivity. A maturity assessment framework potentially helps systematize assessment processes, creating opportunities for change in processes and organizations enabled by digital initiatives and for long-term improvements at the project portfolio level. Cultural characteristics reflecting digital culture are not systematically integrated, and specific digital maturity models for the service sector are less clearly presented. It is also evident that research on the maturity of digital transformation as a holistic concept is scarce and needs more attention in future research.

Keywords: digital transformation, organizational performance, maturity models, maturity assessment

Procedia PDF Downloads 107
8427 Subcontractor Development Practices and Processes: A Conceptual Model for LEED Projects

Authors: Andrea N. Ofori-Boadu

Abstract:

The purpose is to develop a conceptual model of subcontractor development practices and processes that strengthen the integration of subcontractors into construction supply chain systems for improved subcontractor performance on Leadership in Energy and Environmental Design (LEED) certified building projects. The construction management of a LEED project has the important objective of meeting sustainability certification requirements, in addition to the typical project management objectives of cost, time, quality, and safety for traditional projects, and this increases the complexity of LEED projects. Considering that construction management organizations rely heavily on subcontractors, poor performance on complex projects such as LEED projects has been largely attributed to the unsatisfactory preparation of subcontractors. Furthermore, the extensive use of unique and non-repetitive short-term contracts limits the full integration of subcontractors into construction supply chains and hinders the long-term cooperation and benefits that could enhance performance on construction projects. Improved subcontractor development practices are needed to better prepare and manage subcontractors, so that complex objectives can be met or exceeded. While supplier development and supply chain theories and practices in the manufacturing sector have been extensively investigated to address similar challenges, investigations in the construction sector are far less evident. Consequently, the objective of this research is to investigate effective subcontractor development practices and processes to guide construction management organizations in developing a strong network of high-performing subcontractors. Drawing from foundational supply chain and supplier development theories in the manufacturing sector, a mixed interpretivist and empirical methodology is utilized to assess the body of knowledge in the literature for conceptual model development. A self-reporting survey with five-point Likert-scale items and open-ended questions is administered to 30 construction professionals to estimate their perceptions of the effectiveness of 37 practices, classified into five subcontractor development categories. Data analysis includes descriptive statistics, weighted means, and t-tests that guide the effectiveness ranking of practices and categories. The results inform the proposed three-phased LEED subcontractor development program model, which focuses on preparation, development and implementation, and monitoring. Highly ranked LEED subcontractor pre-qualification, commitment, incentive, evaluation, and feedback practices are perceived as more effective when compared to practices requiring more direct involvement and linkages between subcontractors and construction management organizations. This is attributed to unfamiliarity, conflicting interests, lack of trust, and resource-sharing challenges. With strategic modifications, the recommended practices can be extended to other non-LEED complex projects. Additional research is needed to guide the development of subcontractor development programs that strengthen direct involvement between construction management organizations and their networks of high-performing subcontractors. Insights from the present research strengthen theoretical foundations to support future research towards more integrated construction supply chains. In the long term, this would lead to increased performance, profits, and client satisfaction.

Keywords: construction management, general contractor, supply chain, sustainable construction

Procedia PDF Downloads 110
8426 Series Network-Structured Inverse Models of Data Envelopment Analysis: Pitfalls and Solutions

Authors: Zohreh Moghaddas, Morteza Yazdani, Farhad Hosseinzadeh

Abstract:

Nowadays, data envelopment analysis (DEA) models featuring network structures have gained widespread use for evaluating the performance of production systems and activities (Decision-Making Units, DMUs) across diverse fields. By examining the relationships between the internal stages of the network, these models offer valuable insights to managers and decision-makers regarding the performance of each stage and its impact on the overall network. To further empower system decision-makers, the inverse data envelopment analysis (IDEA) model has been introduced. This model allows the estimation of crucial parameters while keeping the efficiency score unchanged or improved, enabling analysis of the sensitivity of system inputs or outputs according to managers' preferences. This empowers managers to apply their preferences and policies to resources, such as inputs and outputs, and to analyze various aspects such as production, resource allocation processes, and resource efficiency enhancement within the system. The results obtained can be instrumental in making informed decisions in the future. The main result of this study is an analysis of the infeasibility and incorrect estimation that may arise in the theory and application of inverse data envelopment analysis models with network structures. To address these pitfalls, novel protocols are proposed that circumvent these shortcomings effectively. Subsequently, several theoretical and applied problems are examined and resolved through insightful case studies.
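
For readers new to DEA, the sketch below computes efficiency scores with the classical single-stage, input-oriented CCR envelopment model solved as a linear program; this is only the standard building block, not the network or inverse formulation developed in this paper, and the input/output data are invented.

```python
# Classical input-oriented CCR DEA efficiency scores via linear programming.
# Data are invented for illustration.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0],      # inputs:  rows = input types, cols = DMUs
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 2.0, 2.0]])     # outputs: rows = output types, cols = DMUs

m, n = X.shape          # m inputs, n DMUs
s = Y.shape[0]          # s outputs

def ccr_efficiency(o):
    """Efficiency of DMU o: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o, lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                        # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                  # X@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])          # -Y@lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```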

Keywords: inverse models of data envelopment analysis, series network, estimation of inputs and outputs, efficiency, resource allocation, sensitivity analysis, infeasibility

Procedia PDF Downloads 51
8425 Modelling and Simulating CO2 Electro-Reduction to Formic Acid Using Microfluidic Electrolytic Cells: The Influence of Bi-Sn Catalyst and 1-Ethyl-3-Methyl Imidazolium Tetra-Fluoroborate Electrolyte on Cell Performance

Authors: Akan C. Offong, E. J. Anthony, Vasilije Manovic

Abstract:

A modified steady-state numerical model is developed for the electrochemical reduction of CO2 to formic acid. The numerical model achieves a current density (CD) of ~60 mA/cm², a faradaic efficiency (FE) of ~98%, and a conversion of ~80% for CO2 electro-reduction to formic acid in a microfluidic cell. The model integrates charge and species transport, mass conservation, and momentum with electrochemistry. Specifically, the influences of a Bi-Sn based nanoparticle catalyst (on the cathode surface) at different mole fractions, and of the 1-ethyl-3-methyl imidazolium tetrafluoroborate ([EMIM][BF4]) electrolyte, on CD, FE, and CO2 conversion to formic acid are studied. The reaction is carried out at a constant electrolyte concentration (85% v/v [EMIM][BF4]). Based on the mass transfer analysis (concentration contours), the 0.5:0.5 mole-ratio Bi-Sn catalyst displays the highest CO2 mole consumption in the cathode gas channel. After validation against experimental data (polarisation curves) from the literature, extensive simulations reveal the performance measures: CD, FE, and CO2 conversion. Increasing the negative cathode potential increases the current densities for both formic acid and H2 formation. However, H2 formation is minimal as a result of insufficient hydrogen ions in the ionic liquid electrolyte. Moreover, the limited hydrogen ions have a negative effect on the formic acid CD. As the CO2 flow rate increases, CD, FE, and CO2 conversion increase.

Keywords: carbon dioxide, electro-chemical reduction, ionic liquids, microfluidics, modelling

Procedia PDF Downloads 146
8424 The Research of Hand-Grip Strength for Adults with Intellectual Disability

Authors: Haiu-Lan Chin, Yu-Fen Hsiao, Hua-Ying Chuang, Wei Lee

Abstract:

Adults with intellectual disability generally have insufficient physical activity, which is an important factor leading to premature weakness. Studies on frailty syndrome in recent years have accumulated substantial data about indicators of human aging, including unintentional weight loss, self-reported exhaustion, weakness, slow walking speed, and low physical activity. Of these indicators, hand-grip strength can be seen as a predictor of mortality, disability, complications, and increased length of hospital stay; hand-grip strength in fact provides a comprehensive overview of one's vitality. This research investigates the hand-grip strength of adults with intellectual disabilities in facilities, institutions, and workshops. The participants are 197 male adults (M = 39.09 ± 12.85 years old) and 114 female adults (M = 35.80 ± 8.2 years old) so far. The aim of the study is to characterize their hand-grip strength performance and to initiate hand-grip strength training in their daily life, which should slow the weakening of their physical condition. Test items include weight, bone density, basal metabolic rate (BMR), and static body balance, in addition to hand-grip strength. Hand-grip strength was measured with a hand dynamometer and classified into a normal group (≥ 30 kg for males and ≥ 20 kg for females) and a weak group (< 30 kg for males, < 20 kg for females). The analysis includes descriptive statistics and indicators of grip strength for the adults with intellectual disability. Though the research is still ongoing and the number of participants is increasing, the data indicate: (1) Hand-grip strength correlates with the degree of intellectual disability (p ≤ .001), basal metabolic rate (p ≤ .001), and static body balance (p ≤ .01); nevertheless, in the current data there is no significant correlation between grip strength and basal metabolic rate, which had previously shown a significant correlation with hand-grip strength. (2) The difference between male and female subjects in hand-grip strength is significant: the hand-grip strength of male subjects (25.70 ± 12.81 kg) is much higher than that of female subjects (16.30 ± 8.89 kg). Male participants also show greater individual differences than their female counterparts, and the proportion of weakness differs between male and female subjects. (3) Regression indicates that the main factors related to grip strength performance are, in order, degree of intellectual disability, height, static body balance, training, and weight. (4) There is a significant difference in both hand-grip strength and static body balance between participants in facilities and those in workshops. The study supports known sex and gender differences in health. Nevertheless, the average hand-grip strength of the left hand is higher than that of the right hand in both male and female subjects. Moreover, 71.3% of male subjects and 64.2% of female subjects perform better with their left hand-grip, which is a distinctive feature, especially at a low degree of intellectual disability.

Keywords: adult with intellectual disability, frailty syndrome, grip strength, physical condition

Procedia PDF Downloads 179
8423 Evaluation of Ensemble Classifiers for Intrusion Detection

Authors: M. Govindarajan

Abstract:

One of the major developments in machine learning in the past decade is the ensemble method, which builds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach lies in three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to the performance of other standard homogeneous and heterogeneous ensemble methods; the standard homogeneous ensemble methods include error-correcting output codes (ECOC) and Dagging, and the heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Heterogeneous models also exhibit better results than homogeneous models on standard intrusion detection datasets.
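
As a rough illustration of the two ensemble styles compared above, the sketch below builds a homogeneous bagged ensemble of RBF-kernel SVMs and a heterogeneous voting ensemble in scikit-learn; the synthetic data stands in for an intrusion detection dataset, the paper's RBF network is approximated by an MLP, and arcing is approximated by simple voting, so this is not the authors' exact configuration.

```python
# Illustrative homogeneous (bagging) vs. heterogeneous (voting) ensembles.
# Synthetic data stands in for an intrusion detection dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Homogeneous ensemble: bagged RBF-kernel SVMs
bagged_svm = BaggingClassifier(SVC(kernel="rbf"), n_estimators=10, random_state=0)
bagged_svm.fit(X_tr, y_tr)

# Heterogeneous ensemble: majority voting over different base learners
hybrid = VotingClassifier(estimators=[("svm", SVC(kernel="rbf")),
                                      ("mlp", MLPClassifier(max_iter=500, random_state=0))],
                          voting="hard")
hybrid.fit(X_tr, y_tr)

print("bagged SVM accuracy :", bagged_svm.score(X_te, y_te))
print("hybrid accuracy     :", hybrid.score(X_te, y_te))
```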

Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy

Procedia PDF Downloads 248
8422 Differences in Assessing Hand-Written and Typed Student Exams: A Corpus-Linguistic Study

Authors: Jutta Ransmayr

Abstract:

The digital age has long since arrived at Austrian schools, and both society and educationalists demand that digital means be integrated accordingly into day-to-day school routines. The Austrian school-leaving exam (A-levels) can therefore now be written either by hand or on a computer. However, the choice of writing medium (pen and paper or computer) for these written examination papers, which are considered 'high-stakes' exams, raises a number of questions that have not yet been adequately investigated, such as: What effects do the different conditions of text production in the written German A-levels have on normative linguistic accuracy? How do the spelling skills in German A-level papers written with a pen differ from those in papers written on the computer? How is the teachers' assessment related to this? And which practical desiderata for German didactics can be derived from this? These questions were investigated in a trilateral pilot project of the Austrian Center for Digital Humanities (ACDH) of the Austrian Academy of Sciences and the University of Vienna, in cooperation with the Austrian Ministry of Education and the Council for German Orthography. A representative Austrian learner corpus consisting of around 530 German A-level papers from all over Austria (pen- and computer-written) was compiled and subjected to a quantitative (corpus-linguistic and statistical) and qualitative investigation of the spelling and punctuation performance of the high-school graduates and of the differences between pen- and computer-written papers and their assessments. Relevant studies are currently available mainly from the Anglophone world. These have shown that writing on the computer increases the motivation to write and has positive effects on text length and, in some cases, also on text quality. Depending on the writing situation and other technical aids, better results in terms of spelling and punctuation have also been found in computer-written texts compared with handwritten ones. Studies also point towards a tendency among teachers to rate handwritten texts better than computer-written texts. In this paper, the first comparable results from the German-speaking area are presented. The results show, on the one hand, that there are significant differences between handwritten and computer-written papers with regard to performance in orthography and punctuation. On the other hand, the corpus-linguistic investigation and the subsequent statistical analysis made it clear that not only do the teachers' assessments of the students' spelling performance vary enormously, but so do the overall assessments of the exam papers; the production medium (pen and paper or computer) also seems to play a decisive role.
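
The statistical comparison between the two production media could, in its simplest form, look like the hedged sketch below (not the project's actual analysis pipeline): per-paper error rates and teacher grades for pen- and computer-written papers are compared with a non-parametric test; all numbers are hypothetical.

```python
# Minimal sketch: compare orthography error rates and teacher grades between
# pen-and-paper and computer-written papers.  Data below are hypothetical placeholders.
from scipy import stats

# Errors per 1000 tokens for each paper in the two production-medium groups.
errors_pen      = [4.1, 6.3, 2.8, 5.5, 7.0, 3.9]
errors_computer = [3.2, 2.9, 4.8, 2.1, 3.6, 2.5]

# Non-parametric test, since error rates are typically skewed.
u, p = stats.mannwhitneyu(errors_pen, errors_computer, alternative="two-sided")
print(f"error rates: U = {u:.1f}, p = {p:.3f}")

# Teacher grades (1 = best, 5 = worst), to check whether the production medium
# is associated with the overall assessment of the paper.
grades_pen      = [2, 1, 2, 3, 2, 1]
grades_computer = [3, 2, 3, 4, 2, 3]
u_g, p_g = stats.mannwhitneyu(grades_pen, grades_computer, alternative="two-sided")
print(f"grades:      U = {u_g:.1f}, p = {p_g:.3f}")
```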

Keywords: exam paper assessment, pen and paper or computer, learner corpora, linguistics

Procedia PDF Downloads 170
8421 Estimation of Hysteretic Damping in Steel Dual Systems with Buckling Restrained Brace and Moment Resisting Frame

Authors: Seyed Saeid Tabaee, Omid Bahar

Abstract:

Energy dissipation devices are nowadays commonly used in structures. The benefit of such devices is a high rate of energy absorption during earthquakes, which reduces damage to structural elements, particularly columns. The hysteretic damping capacity of energy dissipation devices is their key property, yet it may considerably complicate the analysis and design of such structures. This effect is generally represented by an equivalent viscous damping, which may be obtained from the expected hysteretic behavior under the design or maximum considered displacement of a structure. In this paper, the hysteretic damping coefficient of a steel moment resisting frame (MRF) whose performance is enhanced by a buckling restrained brace (BRB) system is evaluated. Knowing in advance how the damping is apportioned between the BRB and the MRF is essential for seismic design procedures such as the Direct Displacement-Based Design (DDBD) method. This paper presents an approach to calculating the damping fraction of such systems by carrying out dynamic nonlinear time history analysis (NTHA) under harmonic loading tuned to the natural frequency of the system. Two steel moment frame structures, one equipped with a BRB and the other without, are studied simultaneously. The extensive analysis shows that the damping fraction of each subsystem may be calculated from its share of the story shear. In this way, the contribution of each BRB in the floors, and their overall contribution to the structural performance, may be clearly recognized in advance.
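
A common way to express the hysteretic damping described above is the equivalent viscous damping ratio ξ_eq = E_D / (4π E_S0), where E_D is the energy dissipated per cycle (the area of the hysteresis loop) and E_S0 is the peak elastic strain energy. The sketch below is not the authors' code; it applies this relation to a synthetic harmonic cycle with a known damping ratio, the kind of post-processing one could apply to a steady-state NTHA cycle.

```python
# Hedged sketch: equivalent viscous damping from one closed force-displacement cycle.
import numpy as np

def equivalent_viscous_damping(disp, force):
    """Equivalent damping ratio xi_eq = E_D / (4*pi*E_S0) for one closed cycle."""
    # Energy dissipated per cycle = area enclosed by the loop (trapezoid rule on F dD).
    e_dissipated = abs(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(disp)))
    # Peak elastic strain energy, using the force at the point of maximum displacement.
    i_peak = np.argmax(np.abs(disp))
    e_strain = 0.5 * abs(force[i_peak] * disp[i_peak])
    return e_dissipated / (4.0 * np.pi * e_strain)

# Synthetic steady-state cycle of a viscously damped oscillator with known zeta = 0.12,
# standing in for one NTHA cycle under harmonic loading at the natural frequency.
k, zeta, d0 = 2.0e4, 0.12, 0.05                 # stiffness [N/m], damping ratio, amplitude [m]
omega = np.sqrt(k)                              # natural frequency for unit mass [rad/s]
c = 2.0 * zeta * np.sqrt(k)                     # viscous damping coefficient for unit mass
t = np.linspace(0.0, 2.0 * np.pi / omega, 2001)
d = d0 * np.sin(omega * t)
f = k * d + c * d0 * omega * np.cos(omega * t)

print(f"recovered equivalent damping ratio: {equivalent_viscous_damping(d, f):.3f}")
```

Because the synthetic loop comes from a linear viscous oscillator, the function should recover a value close to the prescribed 0.12, which serves as a quick check of the post-processing step.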

Keywords: buckling restrained brace, direct displacement based design, dual systems, hysteretic damping, moment resisting frames

Procedia PDF Downloads 434
8420 Performance of the New Laboratory-Based Algorithm for HIV Diagnosis in Southwestern China

Authors: Yanhua Zhao, Chenli Rao, Dongdong Li, Chuanmin Tao

Abstract:

The Chinese Centers for Disease Control and Prevention (CCDC) issued a new laboratory-based algorithm for HIV diagnosis in April 2016, which initially screens with a combination HIV-1/HIV-2 antigen/antibody fourth-generation immunoassay (IA) followed, when reactive, by an HIV-1/HIV-2 undifferentiated antibody IA in duplicate. Reactive specimens with concordant results undergo supplemental testing with western blot or HIV-1 nucleic acid tests (NATs), while specimens with non-reactive or discordant results receive HIV-1 NATs, p24 antigen tests, or follow-up tests after 2-4 weeks. However, little data evaluating the application of the new algorithm have been reported to date. This study evaluated the performance of the new laboratory-based HIV diagnostic algorithm in an inpatient population of Southwest China over the initial six months, compared with the old algorithm. Plasma specimens collected from inpatients between May 1, 2016, and October 31, 2016, were submitted to the laboratory and screened for HIV infection with both the new testing algorithm and the old version. The sensitivity and specificity of the two algorithms and the difference in the numbers of specimens in each category were calculated. Under the new algorithm, 170 of the total 52 749 plasma specimens were confirmed as HIV-infected (0.32%). The sensitivity and specificity of the new algorithm were 100% (170/170) and 100% (52 579/52 579), respectively, while the old algorithm identified 167 HIV-1 positive specimens, with a sensitivity of 98.24% (167/170) and a specificity of 100% (52 579/52 579). Three acute HIV-1 infections (AHIs) and two early HIV-1 infections (EHIs) were identified by the new algorithm; the acute infections were missed by the old procedure. Compared with the old version, the new algorithm produced fewer WB-indeterminate results (2 vs. 16, p = 0.001), which led to fewer follow-up tests. The new HIV testing algorithm is therefore more sensitive for detecting acute HIV-1 infections, maintains the ability to verify established HIV-1 infections, and dramatically decreases the number of WB-indeterminate specimens.
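
The decision flow described in the abstract can be summarized in a few lines of code. The sketch below is a simplified illustration, not CCDC's official implementation; the function and field names are assumptions made for readability.

```python
# Simplified sketch of the new algorithm's disposition logic as described in the abstract:
# a reactive 4th-generation antigen/antibody screen is followed by a duplicate antibody IA;
# concordant reactive duplicates go to supplemental WB / HIV-1 NAT, all other reactive
# screens go to HIV-1 NAT, p24 antigen testing, or 2-4 week follow-up.

def new_algorithm_disposition(screen_reactive: bool,
                              duplicate_results: tuple) -> str:
    """duplicate_results: (reactive?, reactive?) from the duplicate antibody IA."""
    if not screen_reactive:
        return "report HIV-negative"
    if all(duplicate_results):
        return "supplemental test: western blot or HIV-1 NAT"
    return "HIV-1 NAT, p24 antigen test, or follow-up in 2-4 weeks"

# Example specimens: (4th-gen screen reactive?, duplicate antibody IA results)
specimens = [(False, (False, False)),   # typical non-reactive screen
             (True,  (True,  True)),    # concordant reactive duplicates
             (True,  (True,  False))]   # discordant duplicates, e.g. acute-infection window

for s in specimens:
    print(new_algorithm_disposition(*s))
```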

Keywords: algorithm, diagnosis, HIV, laboratory

Procedia PDF Downloads 401
8419 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing

Authors: Kedar Hardikar, Joe Varghese

Abstract:

Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly to protecting electronic components by forming a “Faraday cage.” The reliability requirements for a conductive adhesive vary widely depending on the application and the expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the adhesive, which in turn depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly is to subject it to standard environmental test conditions such as high temperature and high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to project test data and observed failures onto field performance, the systematic development of an acceleration factor between test conditions and field conditions is crucial. Common acceleration factor models such as the Arrhenius model are based on rate kinetics and typically rely on an assumption of degradation that is linear in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in the electronic circuit of a capacitive sensor, whose degradation in a high-temperature, high-humidity environment is quantified by the capacitance values. Under such conditions, using established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, the degradation is nonlinear in time and exhibits a square-root-of-time dependence. It is also shown that, for the mechanism of interest, the presence of moisture is essential and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed that incorporates the nonlinear degradation of the conductive adhesive into the development of an acceleration factor. The method can be extended to applications where the nonlinearity of the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the conventional linear degradation approach can overestimate or underestimate the field performance. This work provides guidelines on the suitability of the linear degradation approximation for such varied applications.
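
The consequence of the square-root-of-time dependence can be made concrete with a short calculation. The sketch below is a hedged illustration of the idea, not the authors' fitted model: if degradation grows as D(t) = A(T, RH)·√t and failure occurs at a fixed threshold, the time-based acceleration factor becomes the square of the rate-based (Peck-type) factor rather than being equal to it, as a linear-degradation assumption would imply. The activation energy, humidity exponent, and stress conditions used here are illustrative placeholders.

```python
# Hedged sketch: effect of sqrt(t) degradation on the time acceleration factor.
import math

BOLTZMANN_EV = 8.617e-5  # eV/K

def peck_rate_af(t_use_c, rh_use, t_test_c, rh_test, ea_ev=0.7, n=2.7):
    """Peck-style ratio of degradation-rate constants, test vs. field (illustrative Ea, n)."""
    t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
    return (rh_test / rh_use) ** n * math.exp(ea_ev / BOLTZMANN_EV * (1 / t_use - 1 / t_test))

def time_af(rate_af, exponent):
    """Time acceleration factor when degradation grows as t**exponent (0.5 for sqrt(t))."""
    return rate_af ** (1.0 / exponent)

# Illustrative field (30 C / 60 %RH) and test (85 C / 85 %RH) conditions.
rate_af = peck_rate_af(t_use_c=30, rh_use=60, t_test_c=85, rh_test=85)
print(f"rate AF:                {rate_af:10.1f}")
print(f"time AF, linear model:  {time_af(rate_af, 1.0):10.1f}")
print(f"time AF, sqrt(t) model: {time_af(rate_af, 0.5):10.1f}")
```

Because the two time acceleration factors differ substantially, projecting test durations to the field with the linear assumption can misstate the expected lifetime, which is the point the abstract makes.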

Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model

Procedia PDF Downloads 135
8418 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, and it has become more important to understand customers’ needs in this highly competitive market, especially the needs of customers who are considering switching service providers. Churn prediction is therefore a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish it. Churn prediction has become a very important machine learning classification topic in the telecommunications industry, and understanding the factors of customer churn and how customers behave is essential to building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers’ churn based on their past service usage history. To this end, the study makes use of feature selection, normalization, and feature engineering. The study then compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Compared with existing models, the proposed approach produces better results: Gradient Boosting with the feature selection technique outperformed the other algorithms in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
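
A minimal sketch of the normalization, feature selection, and Gradient Boosting steps described above is given below. It is not the authors' exact pipeline: a synthetic imbalanced dataset stands in for the Orange telecom data, so the scores will not match the reported 99%.

```python
# Illustrative churn-prediction pipeline: scaling, univariate feature selection,
# Gradient Boosting, evaluated with F1 and ROC-AUC.  Data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=5000, n_features=30, n_informative=10,
                           weights=[0.85, 0.15], random_state=42)   # churners = minority class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=42)

pipe = Pipeline([("scale",  StandardScaler()),
                 ("select", SelectKBest(f_classif, k=15)),
                 ("model",  GradientBoostingClassifier(random_state=42))])
pipe.fit(X_tr, y_tr)

proba = pipe.predict_proba(X_te)[:, 1]
print("F1     :", round(f1_score(y_te, pipe.predict(X_te)), 3))
print("ROC-AUC:", round(roc_auc_score(y_te, proba), 3))
```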

Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score

Procedia PDF Downloads 134
8417 Assessing the Impact of Autonomous Vehicles on Supply Chain Performance – A Case Study of Agri-Food Supply Chain

Authors: Nitish Suvarna, Anjali Awasthi

Abstract:

In an era marked by rapid technological advancements, the integration of autonomous vehicles into supply chain networks represents a transformative shift, promising to redefine the paradigms of logistics and transportation. This thesis presents a comprehensive assessment of the impact of autonomous vehicles on supply chain performance, with a particular focus on network design, operational efficiency, and environmental sustainability. Employing the simulation capabilities of anyLogistix (ALX), the study constructs a digital twin of a conventional supply chain network encompassing suppliers, production facilities, distribution centers, and customer endpoints. The research then integrates autonomous vehicles into this network, aiming to unravel their effects on transportation logistics, including transit times, cost-efficiency, and sustainability. Through simulation and scenario analysis, the study scrutinizes the operational resilience and adaptability of supply chains in the face of dynamic market conditions and disruptive technologies such as autonomous vehicles. Furthermore, the thesis undertakes a carbon footprint analysis, quantifying the environmental benefits and challenges associated with the adoption of autonomous vehicles in supply chain operations. The insights from this research are intended to offer a strategic framework for industry stakeholders, guiding the adoption of autonomous vehicles to foster a more efficient, responsive, and sustainable supply chain ecosystem, and to serve as a basis for future research and practical implementations in intelligent transportation and supply chain management.
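
A back-of-the-envelope sketch of the kind of carbon-footprint comparison performed inside the simulation is shown below. It is not anyLogistix code (ALX is commercial software, so no API calls are shown); all lane distances, shipment weights, and emission factors are hypothetical placeholders, not results from the thesis.

```python
# Sketch only: tonne-kilometre based CO2e comparison for a toy agri-food network.
# Emission factors in kg CO2e per tonne-kilometre (illustrative values only).
EMISSION_FACTOR = {"conventional_truck": 0.105, "autonomous_truck": 0.090}

# (origin, destination, distance_km, shipment_tonnes) for a toy network.
lanes = [("supplier", "plant", 320, 18.0),
         ("plant", "distribution_center", 540, 22.5),
         ("distribution_center", "customer", 150, 8.0)]

def network_emissions(mode: str) -> float:
    """Total emissions for shipping every lane once with the given vehicle mode."""
    factor = EMISSION_FACTOR[mode]
    return sum(distance * tonnes * factor for _, _, distance, tonnes in lanes)

for mode in EMISSION_FACTOR:
    print(f"{mode:20s}: {network_emissions(mode):8.1f} kg CO2e")
```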

Keywords: autonomous vehicle, agri-food supply chain, ALX simulation, anyLogistix

Procedia PDF Downloads 75