Search results for: energy performance certificate EPBD
10549 Atomic Layer Deposition of Metal Oxides on Si/C Materials for the Improved Cycling Stability of High-Capacity Lithium-Ion Batteries
Authors: Philipp Stehle, Dragoljub Vrankovic, Montaha Anjass
Abstract:
Due to its high availability and extremely high specific capacity, silicon (Si) is the most promising anode material for next-generation lithium-ion batteries (LIBs). However, Si anodes suffer from large volume changes during cycling, which cause an unstable solid-electrolyte interface (SEI). One approach to mitigating these effects is to embed Si particles in a carbon matrix to create silicon/carbon composites (Si/C), which typically show more stable electrochemical performance than bare silicon materials. Nevertheless, the same failure mechanisms still appear, albeit in a less pronounced form. In this work, we further improved the cycling performance of two commercially available Si/C materials by coating the powders with thin metal oxide films of different thicknesses via Atomic Layer Deposition (ALD). The coated powders were analyzed via ICP-OES and AFM measurements. Si/C-graphite anodes with automotive-relevant loadings (~3.5 mAh/cm²) were processed from these materials and tested in half coin cells (HCCs) and full pouch cells (FPCs). During long-term cycling in FPCs, a significant improvement was observed for some of the ALD-coated materials: after 500 cycles, the capacity retention was already up to 10% higher than that of the pristine materials. Cycling of the FPCs continued until they reached a state of health (SOH) of 80%; by this point, ALD-coated anodes had achieved up to three times as many cycles as pristine ones. Post-mortem analysis via various methods was carried out to evaluate the differences in SEI formation and thickness.
Keywords: silicon anodes, li-ion batteries, atomic layer deposition, silicon-carbon composites, surface coatings
Procedia PDF Downloads 127
10548 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we also observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we report the performance this model achieves on the WN18 benchmark. The model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
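The TransE baseline mentioned above can be sketched in a few lines. This is a generic illustration of the published TransE scoring idea, not the paper's topological framework, and the embedding vectors are toy values:

```python
# Minimal sketch of the TransE scoring rule: a triple (head, relation, tail)
# is scored by how close head + relation lands to tail in embedding space,
# score = -||h + r - t||. All vectors below are toy values for illustration.

def transe_score(h, r, t):
    """Negative Euclidean distance; scores closer to 0 are more plausible."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

h = [0.1, 0.3]       # toy embedding of a head entity
r = [0.2, -0.1]      # toy embedding of a relation
t_good = [0.3, 0.2]  # h + r lands (almost) exactly here
t_bad = [0.9, 0.9]   # a distant, implausible tail
# transe_score(h, r, t_good) is ~0 and beats transe_score(h, r, t_bad)
```

Link prediction with such a model amounts to ranking candidate tails by this score, which is exactly the "next node/link prediction" scope the abstract criticises.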
Procedia PDF Downloads 72
10547 Saudi Arabia's Struggle for a Post-Rentier Regional Order
Authors: Omair Anas
Abstract:
The Persian Gulf has long been in turmoil, ever since the colonial administration handed over its role to small and weak kings and emirs who were assured of protection in return for many economic and security promises. The regional order Saudi Arabia evolved was a rentier one, secured by an expansion of the rentier economy and by shouldering much of the expense of the regional order on behalf of relatively poor countries. The two oil booms helped the Saudi state expand this rentier-driven stability and bring countries like Egypt, Jordan, Syria, and Palestine under its tutelage. The disruptive misadventure, however, came with Iran's proclamation of the Islamic Revolution in 1979, which it wanted exported to its 'un-Islamic and American puppet' Arab neighbours. For Saudi Arabia, even the challenge presented by socialist-nationalist Arab dictators like Gamal Abdel Nasser and Hafez al-Assad had not been as threatening to its then-defensive realism. In the Arab uprisings, the Gulf monarchies saw a wave of insecurity, and Iran found an opportune moment to complete the revolutionary process it could not finish after 1979. An alliance of convenience and ideology between Iran and Islamist groups had the real potential to challenge both Saudi Arabia's own security and its leadership in the region. This disruptive threat appeared at a time when the Saudi state had already sensed an impending crisis originating from shifts in the energy markets. Low energy prices, declining global demand, and huge investments in alternative energy resources required Saudi Arabia to rationalize its economy in line with the changing global political economy. Domestic Saudi reforms remained gradual until the death of King Abdullah in 2015.
What is happening now in the region, the Qatar crisis, the Lebanon crisis, and the Saudi-Iranian proxy war in Iraq, Syria, and Yemen, combines several immediate objectives: rationalising the Saudi economy and, most importantly, resetting royal power for Saudi Arabia's longest-serving future king, Mohammad bin Salman. The Saudi king perhaps has no time to wait and watch the power vacuum appearing because of Iran's expansionist foreign policy. The Saudis appear to be employing offensive realism, advancing a proactive regional policy to counter Iran's threatening influence amid a disappearing Western security presence in the region. As the Syrian civil war comes to a compromised end, ceding much ground to Iran-controlled militias, Hezbollah and al-Hashd, the Saudi state has lost considerable ground in recent years, and the threat from Iranian proxies is more than a reality, most clearly in Bahrain, Iraq, Syria, and Yemen. This paper attempts to analyse the changing Saudi behaviour in the region, which, the author argues, is shaped by an offensive-realist approach towards finding a favourable security environment for a Saudi-led regional order, a post-rentier order perhaps.
Keywords: terrorism, Saudi Arabia, Rentier State, gulf crisis
Procedia PDF Downloads 140
10546 Numerical Performance Evaluation of a Savonius Wind Turbine Using Resistive Torque Modeling
Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines
Abstract:
The Savonius vertical-axis wind turbine is characterized by sufficient starting torque at low wind speeds and a simple design, and it does not require orientation to the wind direction; however, the power it develops is lower than that of other wind turbine types such as the Darrieus. To improve its performance, numerous studies have been carried out, for example optimizing blade shape, using passive controls, and minimizing sources of power loss such as the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing a User Defined Function (UDF) into the CFD model to account for the resisting torque. The UDF simulates the action of the wind on the rotor: it receives the moment coefficient as input and computes the rotational velocity to be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied to the turbine and on the resisting torque, which is modeled as a linear function. Linking the implemented UDF with the CFD solver allows simulation of the real operation of a Savonius turbine exposed to wind. The turbine takes some time to reach the stationary regime, where the rotational velocity becomes invariable; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one; the obtained results are in agreement with the available experimental data.
Keywords: resisting torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performance
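The velocity update the UDF performs can be sketched as a simple explicit time march. This is an illustrative stand-in that assumes a constant aerodynamic moment and invented inertia and torque coefficients; in the actual model, the moment comes from the CFD solution at every step:

```python
# Sketch of the UDF's per-step logic: the rotor accelerates under the
# aerodynamic moment minus a resisting torque that is linear in rotational
# velocity, I * domega/dt = M_aero - (a + b * omega). The moment, inertia,
# and coefficients a, b are illustrative placeholders, not the study's values.

def step_rotor(omega, m_aero, inertia, a=0.05, b=0.01, dt=1e-3):
    """Advance the rotor speed by one explicit Euler time step."""
    domega_dt = (m_aero - (a + b * omega)) / inertia
    return omega + domega_dt * dt

omega = 0.0  # rotor starts at rest
for _ in range(250_000):  # march until the stationary regime is reached
    omega = step_rotor(omega, m_aero=0.5, inertia=0.1)

# At steady state M_aero = a + b*omega, so omega -> (0.5 - 0.05) / 0.01 = 45
```

The stationary regime mentioned in the abstract corresponds to the fixed point where the aerodynamic moment exactly balances the resisting torque, at which point the tip speed ratio and power coefficient are read off.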
Procedia PDF Downloads 161
10545 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies
Authors: Jaballah Jamil
Abstract:
KLD (Peter Kinder, Steve Lydenberg, and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firms, which suggests that KLD ratings may not incorporate private information. Moreover, KLD's failure to accurately predict the extra-financial rating of Enron casts doubt on the reliability of its ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates a number of equally weighted portfolio returns, excess stock returns, and book-to-market ratios to different dimensions of KLD social responsibility ratings. We first find that, over the last two decades, KLD rating changes significantly and negatively influence the stock returns and book-to-market ratios of rated firms. This finding suggests that a rise in corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the scandal and find that this significant effect disappears afterwards. This finding supports the view that the Enron scandal annihilated KLD's effect on socially responsible investors. Our findings may therefore call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.
Keywords: KLD social rating agency, investors' perception, investment decision, financial performance
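The before/after comparison at the heart of this design can be sketched with simulated data. A plain OLS slope stands in for the paper's (unspecified) portfolio estimator, and the -0.02 effect size is invented purely to reproduce the reported pattern, an effect that is present pre-Enron and vanishes afterwards:

```python
import random

# Sketch of the sub-period comparison: relate excess returns to a KLD
# rating-change indicator before and after the scandal and compare slopes.
# Data are simulated; the effect sizes are invented for illustration.

def ols_slope(changes, returns):
    """OLS slope of returns on rating changes (covariance over variance)."""
    mc = sum(changes) / len(changes)
    mr = sum(returns) / len(returns)
    cov = sum((c - mc) * (r - mr) for c, r in zip(changes, returns))
    var = sum((c - mc) ** 2 for c in changes)
    return cov / var

rng = random.Random(42)
changes = [rng.choice([-1.0, 0.0, 1.0]) for _ in range(2000)]  # down/none/up
pre = [-0.02 * c + rng.gauss(0, 0.01) for c in changes]   # pre-Enron: effect present
post = [0.0 * c + rng.gauss(0, 0.01) for c in changes]    # post-Enron: effect gone

b_pre = ols_slope(changes, pre)    # recovers roughly -0.02
b_post = ols_slope(changes, post)  # roughly 0: the effect has disappeared
```

A negative pre-period slope matches the finding that rating upgrades lower returns (lower perceived risk), while a post-period slope near zero mirrors the annihilated effect.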
Procedia PDF Downloads 443
10544 Enhancing Employee Innovative Behaviours Through Human Resource Wellbeing Practices
Authors: Jarrod Haar, David Brougham
Abstract:
The present study explores the links between supporting employee well-being and the potential benefits to employee performance. We focus on employee innovative work behaviors (IWBs), which have three stages: (1) development, (2) adoption, and (3) implementation of new ideas and work methods. We explore the role of organizational support for employee well-being via High-Performance Work Systems (HPWS). HPWS are HR practices designed to enhance employees' skills, commitment, and, ultimately, productivity. HPWS influence employee performance by building skills, knowledge, and abilities, and there is meta-analytic support for firm-level HPWS influencing firm performance, but less attention has been paid to employee outcomes, especially innovation. We examine the HPWS-wellbeing practices offered (e.g., EAPs, a well-being app, etc.) to capture organizational commitment to employee well-being. Under social exchange theory, workers should reciprocate their firm's offering of HPWS-wellbeing with greater effort towards IWBs. Further, we explore playful work design as a mediator: employees proactively creating work conditions that foster enjoyment and challenge without requiring any design change to the job itself. We suggest HPWS-wellbeing can encourage employees to become more playful and, ultimately, more innovative. Finally, beyond direct effects, we examine whether these relations differ by gender and ultimately test a moderated mediation model. Using N = 1,135 New Zealand employees, we established measures with confirmatory factor analysis (CFA); all measures had good psychometric properties (α > .80). We controlled for age, tenure, education, and hours worked, and analyzed the data using the PROCESS macro (version 4.2), specifically model 8 (moderated mediation). We analyzed overall IWB and then each of the three stages.
Overall, we find HPWS-wellbeing is significantly related to overall IWBs and to each of the three stages (development, adoption, and implementation). Similarly, HPWS-wellbeing shapes playful work design, and playful work design predicts overall IWBs and the three stages individually. Playful work design only partially mediates the effects of HPWS-wellbeing, which retains a significant direct effect. Moderation effects are supported: males report a stronger effect of HPWS-wellbeing on playful work design, but not on IWB (or any of the three stages), than females. Females report higher playful work design when HPWS-wellbeing is low, but the effect reverses when HPWS-wellbeing is high (males higher). Thus, males respond more strongly to HPWS-wellbeing under social exchange theory, at least in expressing playful work design. Finally, evidence of moderated mediation is found for overall IWBs and the three stages: males report a significant indirect effect of HPWS-wellbeing on IWB (through playful work design), while female employees report no significant indirect effect; for them, the benefits of playful work design fully account for their IWBs. The models explain a small amount of variance in playful work design (12%) and more for IWBs (26%). The study highlights a gap in the literature on HPWS-wellbeing and provides empirical evidence of its importance for worker innovation. Further, the gendered effects suggest these benefits might not be equal. The findings offer useful insights for organizations on why HR practices that support employee well-being matter, although how they work for different genders needs further exploration.
Keywords: human resource practices, wellbeing, innovation, playful work design
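The moderated mediation logic of "model 8" can be illustrated with a toy simulation: HPWS-wellbeing (X) drives playful work design (M), which drives innovative work behaviour (Y), with the X-to-M path allowed to differ by gender. All data below are simulated, and the path values are invented to mirror the reported pattern (an indirect effect for males, none for females), not the study's estimates:

```python
import random

# Toy moderated mediation: X -> M -> Y, with the a-path (X -> M) moderated
# by gender. The indirect effect for each group is a_group * b.
# Simulated data; invented coefficients chosen to mimic the reported pattern.

def ols_slope(xs, ys):
    """Simple OLS slope (covariance over variance)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
           sum((a - mx) ** 2 for a in xs)

rng = random.Random(1)
n = 4000
male = [rng.choice([0, 1]) for _ in range(n)]                  # hypothetical coding
X = [rng.gauss(0, 1) for _ in range(n)]                        # HPWS-wellbeing
M = [0.6 * x * g + rng.gauss(0, 1) for x, g in zip(X, male)]   # a-path: males only
Y = [0.5 * m + rng.gauss(0, 1) for m in M]                     # b-path

b_path = ols_slope(M, Y)
a_male = ols_slope([x for x, g in zip(X, male) if g],
                   [m for m, g in zip(M, male) if g])
a_female = ols_slope([x for x, g in zip(X, male) if not g],
                     [m for m, g in zip(M, male) if not g])

indirect_male = a_male * b_path      # near 0.6 * 0.5 = 0.3
indirect_female = a_female * b_path  # near 0: mediation only for males
```

In practice PROCESS estimates these paths jointly and bootstraps confidence intervals for the conditional indirect effects; the sketch only shows why the product of a gender-specific a-path and a common b-path yields a gendered indirect effect.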
Procedia PDF Downloads 84
10543 Designing the Maturity Model of Smart Digital Transformation through the Foundation Data Method
Authors: Mohammad Reza Fazeli
Abstract:
Nowadays, the fourth industrial revolution, known as the digital transformation of industries, is seen as one of the top subjects in the history of structural revolutions, one that has led to the high-tech and tactical dominance of organizations. Despite these benefits, the undefined and non-transparent nature of the after-effects of investing in digital transformation has deterred many organizations from attempting it. One important framework for understanding digital transformation in organizations is the digital transformation maturity model, which comprises two main parts: maturity dimensions and maturity stages. Mediating factors between digital maturity and organizational performance at the individual level (e.g., motivations, attitudes) and at the organizational level (e.g., organizational culture) should be considered. For technology adoption processes to succeed, organizational development and human resources must go hand in hand and be supported by a sound communication strategy. Maturity models are developed to help organizations by providing broad guidance and a roadmap for improvement. However, a systematic review and analysis of the literature showed that none of the 18 maturity models in the field of digital transformation fully meets all the criteria of appropriateness, completeness, clarity, and objectivity. A maturity assessment framework can help systematize assessment processes, creating opportunities for change in processes and organizations enabled by digital initiatives and for long-term improvements at the project portfolio level. Cultural characteristics reflecting digital culture are not systematically integrated, and specific digital maturity models for the service sector are less clearly presented.
It is also clearly evident that research on the maturity of digital transformation as a holistic concept is scarce and needs more attention in future research.
Keywords: digital transformation, organizational performance, maturity models, maturity assessment
Procedia PDF Downloads 113
10542 Series Network-Structured Inverse Models of Data Envelopment Analysis: Pitfalls and Solutions
Authors: Zohreh Moghaddas, Morteza Yazdani, Farhad Hosseinzadeh
Abstract:
Nowadays, data envelopment analysis (DEA) models featuring network structures are widely used to evaluate the performance of production systems and activities (Decision-Making Units, DMUs) across diverse fields. By examining the relationships between the internal stages of the network, these models offer managers and decision-makers valuable insights into the performance of each stage and its impact on the overall network. To further empower decision-makers, the inverse data envelopment analysis (IDEA) model has been introduced. This model allows crucial parameters to be estimated while keeping the efficiency score unchanged or improved, enabling analysis of the sensitivity of system inputs or outputs according to managers' preferences. Managers can thus apply their preferences and policies to resources such as inputs and outputs, and analyze aspects like production, resource allocation processes, and resource efficiency enhancement within the system; the results can inform future decisions. The principal result of this study is an analysis of the infeasibility and incorrect estimation that may arise in the theory and application of inverse DEA models with network structures. To address these pitfalls, novel protocols are proposed that circumvent the shortcomings effectively. Several theoretical and applied problems are then examined and resolved through insightful case studies.
Keywords: inverse models of data envelopment analysis, series network, estimation of inputs and outputs, efficiency, resource allocation, sensitivity analysis, infeasibility
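The full series-network IDEA models are well beyond an abstract, but the core idea can be sketched in the degenerate single-input, single-output case, where the CCR efficiency score reduces to each unit's output/input ratio normalised by the best observed ratio, and the inverse question becomes: if a unit's output target rises, what input keeps its efficiency unchanged? All data below are invented:

```python
# Degenerate single-input, single-output illustration of DEA and its inverse.
# With one input and one output, the CCR envelopment LP collapses to
# normalising each DMU's productivity ratio by the frontier ratio.

def ccr_efficiency(inputs, outputs):
    """Return (efficiency scores, frontier ratio) for each DMU."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios], best

x = [2.0, 4.0, 5.0]    # single input per DMU (e.g. labour hours, invented)
y = [4.0, 6.0, 10.0]   # single output per DMU (e.g. units produced, invented)
scores, best = ccr_efficiency(x, y)   # ratios 2.0, 1.5, 2.0 -> [1.0, 0.75, 1.0]

# Inverse-DEA-style estimate: DMU 2 raises its output target from 6 to 9;
# keeping its efficiency fixed at 0.75 requires input 9 / (0.75 * 2) = 6.
y_new = 9.0
x_new = y_new / (scores[1] * best)
```

The real network models replace this closed form with linear programs over the stage-wise inputs, intermediate measures, and outputs, which is exactly where the infeasibility pitfalls the study analyses can arise.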
Procedia PDF Downloads 57
10541 Modelling and Simulating CO2 Electro-Reduction to Formic Acid Using Microfluidic Electrolytic Cells: The Influence of Bi-Sn Catalyst and 1-Ethyl-3-Methyl Imidazolium Tetra-Fluoroborate Electrolyte on Cell Performance
Authors: Akan C. Offong, E. J. Anthony, Vasilije Manovic
Abstract:
A modified steady-state numerical model is developed for the electrochemical reduction of CO2 to formic acid. The model achieves a current density (CD) of ~60 mA/cm², a faradaic efficiency (FE) of ~98%, and a conversion of ~80% for CO2 electro-reduction to formic acid in a microfluidic cell. It integrates charge and species transport, mass conservation, and momentum with electrochemistry. Specifically, the influences of a Bi-Sn based nanoparticle catalyst (on the cathode surface) at different mole fractions, and of the 1-ethyl-3-methyl imidazolium tetrafluoroborate ([EMIM][BF4]) electrolyte, on CD, FE, and CO2 conversion to formic acid are studied. The reaction is carried out at a constant electrolyte concentration (85% v/v [EMIM][BF4]). Based on the mass transfer characteristics (concentration contours), the 0.5:0.5 mole-ratio Bi-Sn catalyst displays the highest CO2 consumption in the cathode gas channel. After validation against experimental polarisation curves from the literature, extensive simulations reveal the performance measures CD, FE, and CO2 conversion. Increasing the negative cathode potential increases the current densities for both formic acid and H2 formation. However, H2 formation is minimal as a result of insufficient hydrogen ions in the ionic liquid electrolyte; moreover, the limited supply of hydrogen ions has a negative effect on the formic acid CD. As the CO2 flow rate increases, CD, FE, and CO2 conversion increase.
Keywords: carbon dioxide, electro-chemical reduction, ionic liquids, microfluidics, modelling
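The faradaic efficiency quoted above follows directly from the charge balance for the two-electron reduction of CO2 to formate, FE = zFn/Q. The charge and product amount in this sketch are invented to illustrate the formula, not outputs of the model:

```python
# Quick consistency check of a faradaic efficiency: CO2 -> formate transfers
# z = 2 electrons per molecule, so FE = z * F * n_product / Q.
# The product amount below is an invented illustration value.

F = 96485.0  # C/mol, Faraday constant

def faradaic_efficiency(n_product_mol, charge_c, z=2):
    """Fraction of the passed charge captured in the desired product."""
    return z * F * n_product_mol / charge_c

fe = faradaic_efficiency(5.08e-6, 1.0)  # 5.08e-6 mol formate per coulomb passed
# fe comes out near 0.98, i.e. on the order of the ~98% FE quoted above
```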
Procedia PDF Downloads 150
10540 Anticorrosive Performance of “Methyl Ester Sulfonate” Biodegradable Anionic Synthesized Surfactants on Carbon Steel X70 in Oilfields
Authors: Asselah Amel, Affif Chaouche M'yassa, Toudji Amira, Tazerouti Amel
Abstract:
This study covers two aspects: the biodegradability and the corrosion-inhibition performance of a series of synthesized surfactants, namely Φ-sodium methyl ester sulfonates (Φ-MES): C₁₂-MES, C₁₄-MES, and C₁₆-MES. The biodegradability of these organic compounds was studied using the respirometric method of standard ISO 9408. Degradation was followed by analysis of dissolved oxygen with a dissolved-oxygen meter over 28 days, and the results were compared with those for sodium dodecyl sulphate (SDS). The inoculum consisted of activated sludge taken from the aeration basin of the biological wastewater treatment plant in the city of Boumerdes, Algeria. In addition, the anticorrosive performance of the Φ-MES surfactants on "X70" carbon steel was evaluated in injection water from a well of the Hassi R'mel region, Algeria, known as Barremian water, and compared with sodium dodecyl sulphate. Two techniques, weight loss and linear polarization resistance (LPR) corrosion rate, were used, allowing investigation of the relationships between the concentrations of these synthesized surfactants and their surface properties, surface coverage, and inhibition efficiency. Various adsorption isotherm models were used to characterize the nature of adsorption and explain its mechanism. The results show that the MES anionic surfactants are readily biodegradable, degrading faster than SDS: about 88% for C₁₂-MES compared with 66% for SDS. The length of the carbon chain affects biodegradability; the longer the chain, the lower the biodegradability. The inhibition efficiency of these surfactants is around 78.4% for C₁₂-MES, 76.6% for C₁₄-MES, and 98.19% for C₁₆-MES; it increases with concentration and reaches a maximum around the critical micelle concentration (CMC).
Scanning electron microscopy coupled with energy-dispersive X-ray spectroscopy allowed visualization of the good adhesion of the protective film formed by the surfactants on the steel surface. The studied surfactants show Langmuirian behavior, from which the thermodynamic parameters, the adsorption constant (Kads) and the standard free energy of adsorption (ΔG°ads), are determined. The interaction of the surfactants with the steel surface involves physisorption.
Keywords: corrosion, surfactants, adsorption, adsorption isotherms
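The route from Langmuirian behaviour to the thermodynamic parameters can be sketched numerically. In linear form the Langmuir isotherm reads C/θ = 1/Kads + C, so a line of C/θ against C with slope near 1 confirms Langmuir adsorption and its intercept gives 1/Kads; ΔG°ads then follows from ΔG°ads = -RT ln(55.5 Kads). The concentrations, temperature, and Kads value below are assumptions for illustration, not the study's measurements:

```python
import math

# Extract Langmuir parameters from synthetic coverage data, then compute the
# standard adsorption free energy. All numerical inputs are assumed values.

R = 8.314  # J/(mol K)

def langmuir_fit(C, theta):
    """Least-squares line of C/theta vs C; returns (slope, K_ads)."""
    y = [c / t for c, t in zip(C, theta)]
    n = len(C)
    mc, my = sum(C) / n, sum(y) / n
    slope = sum((c - mc) * (v - my) for c, v in zip(C, y)) / \
            sum((c - mc) ** 2 for c in C)
    intercept = my - slope * mc
    return slope, 1.0 / intercept

def delta_g_ads(k_ads, T=298.15):
    """Delta G_ads = -R*T*ln(55.5*K_ads); 55.5 mol/L is water's molarity."""
    return -R * T * math.log(55.5 * k_ads)

K_true = 1000.0                                      # L/mol, assumed
C = [1e-4, 5e-4, 1e-3, 5e-3]                         # mol/L, synthetic
theta = [K_true * c / (1 + K_true * c) for c in C]   # surface coverage

slope, K_fit = langmuir_fit(C, theta)  # slope ~1 signals Langmuirian behaviour
dG = delta_g_ads(K_fit)
# dG lands around -27 kJ/mol, a magnitude consistent with physisorption
```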
Procedia PDF Downloads 100
10539 The Research of Hand-Grip Strength for Adults with Intellectual Disability
Authors: Haiu-Lan Chin, Yu-Fen Hsiao, Hua-Ying Chuang, Wei Lee
Abstract:
Adults with intellectual disability generally have insufficient physical activity, an important factor leading to premature weakness. Studies on frailty syndrome in recent years have accumulated substantial data on indicators of human aging, including unintentional weight loss, self-reported exhaustion, weakness, slow walking speed, and low physical activity. Of these indicators, hand-grip strength can be seen as a predictor of mortality, disability, complications, and increased length of hospital stay; it in fact provides a comprehensive overview of one's vitality. This research investigates the hand-grip strength of adults with intellectual disabilities in facilities, institutions, and workshops. The participants to date are 197 male adults (M = 39.09 ± 12.85 years old) and 114 female adults (M = 35.80 ± 8.2 years old). The aim of the study is to characterize their hand-grip strength performance and to inform the introduction of grip-strength training into their daily lives, in order to slow the decline of their physical condition. Test items include weight, bone density, basal metabolic rate (BMR), and static body balance, in addition to hand-grip strength. Hand-grip strength was measured with a hand dynamometer and classified into a normal group (≥ 30 kg for males, ≥ 20 kg for females) and a weak group (< 30 kg for males, < 20 kg for females). The analysis includes descriptive statistics and indicators of grip strength for adults with intellectual disability. Though the research is still ongoing and the number of participants is increasing, the data so far indicate: (1) Hand-grip strength correlates with degree of intellectual disability (p ≤ .001), basal metabolic rate (p ≤ .001), and static body balance (p ≤ .01). Nevertheless, in part of the sample, no significant correlation was found between grip strength and basal metabolic rate, which had previously shown a significant correlation with hand-grip strength.
(2) The difference between male and female subjects in hand-grip strength is significant: the hand-grip strength of male subjects (25.70 ± 12.81 kg) is much higher than that of female subjects (16.30 ± 8.89 kg). Male participants also show greater individual differences than their female counterparts, and the proportion of weakness differs between male and female subjects. (3) Regression indicates that the main factors related to grip strength performance are, in order, degree of intellectual disability, height, static body balance, training, and weight. (4) There is a significant difference in both hand-grip strength and static body balance between participants in facilities and those in workshops. The study supports the literature on sex and gender differences in health. Notably, the average hand-grip strength of the left hand is higher than that of the right hand in both male and female subjects; moreover, 71.3% of male subjects and 64.2% of female subjects perform better with the left hand, a distinctive feature especially among those with a low degree of intellectual disability.
Keywords: adult with intellectual disability, frailty syndrome, grip strength, physical condition
Procedia PDF Downloads 183
10538 Evaluation of Ensemble Classifiers for Intrusion Detection
Authors: M. Govindarajan
Abstract:
One of the major developments in machine learning in the past decade is the ensemble method, which builds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. A classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) as base classifiers. The feasibility and benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach lies in its three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared with that of other standard ensemble methods: the homogeneous baselines include error-correcting output codes (ECOC) and Dagging, and the heterogeneous baselines include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared with individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Heterogeneous models also exhibit better results than homogeneous models on standard intrusion detection datasets.
Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy
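The bagging-plus-majority-vote combining scheme can be sketched with a toy base learner. The 1-D threshold "stump" below is an illustrative stand-in for the RBF/SVM base classifiers, and the data and model counts are invented:

```python
import random

# Sketch of bagging: train one base learner on many bootstrap resamples of
# the data, then combine predictions by majority vote. The threshold stump
# is a toy stand-in for the study's RBF/SVM base classifiers.

def bootstrap(data, rng):
    """Resample with replacement, ensuring both classes are represented."""
    while True:
        sample = [rng.choice(data) for _ in data]
        if len({label for _, label in sample}) == 2:
            return sample

def train_stump(data):
    """Base learner: threshold halfway between the two class means."""
    xs0 = [x for x, y in data if y == 0]
    xs1 = [x for x, y in data if y == 1]
    t = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
    return lambda x: 0 if x < t else 1

def bagged_classifier(data, n_models=25, seed=0):
    rng = random.Random(seed)
    models = [train_stump(bootstrap(data, rng)) for _ in range(n_models)]
    return lambda x: 1 if 2 * sum(m(x) for m in models) > n_models else 0

# Tiny labelled set: class 0 clusters near 0, class 1 near 2 (invented)
data = [(0.1, 0), (0.3, 0), (0.4, 0), (1.6, 1), (1.8, 1), (2.0, 1)]
clf = bagged_classifier(data)
# clf(0.2) votes to class 0 and clf(1.9) to class 1
```

A heterogeneous ensemble differs only in the combining phase: instead of bootstrap replicates of one learner, different learner types (e.g., RBF and SVM) each cast a vote, or a meta-learner stacks their outputs.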
Procedia PDF Downloads 252
10537 Seismic Isolation of Existing Masonry Buildings: Recent Case Studies in Italy
Authors: Stefano Barone
Abstract:
Seismic retrofit of buildings through base isolation is a well-established protection strategy against earthquakes. It consists of decoupling the motion of the structure from the ground motion by introducing anti-seismic devices at the base of the building, characterized by high horizontal flexibility and medium-to-high dissipative capacity. This protects structural elements and limits damage to non-structural ones; for these reasons, full functionality is guaranteed after an earthquake. Base isolation is applied extensively to both new and existing buildings. For the latter, it usually requires neither interruption of the building's use nor evacuation of its occupants, a special advantage for strategic buildings such as schools, hospitals, and military buildings. This paper describes the application of seismic isolation to three existing masonry buildings in Italy: Villa “La Maddalena” in Macerata (Marche region) and the “Giacomo Matteotti” and “Plinio Il Giovane” school buildings in Perugia (Umbria region). The seismic hazard of the sites is characterized by a peak ground acceleration (PGA) of 0.213g-0.287g for the Life Safety Limit State and 0.271g-0.359g for the Collapse Limit State. All the buildings are isolated with a combination of TETRON® CD free sliders with confined elastomeric disk and ISOSISM® HDRB anti-seismic rubber isolators, arranged to reduce the eccentricity between the center of mass and the center of stiffness and thus limit torsional effects during a seismic event. The isolation systems are designed to lengthen the original period of vibration (i.e., without isolators) by at least three times and to guarantee medium-to-high energy dissipation capacity (equivalent viscous damping between 12.5% and 16%). This allows the structures to resist 100% of the seismic design action. This article presents the performance of the supplied anti-seismic devices, with particular attention to their experimental dynamic response.
Finally, a special focus is given to the main site activities required to isolate a masonry building.
Keywords: retrofit, masonry buildings, seismic isolation, energy dissipation, anti-seismic devices
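The period-lengthening target can be illustrated with a back-of-envelope single-degree-of-freedom calculation. The mass and stiffness values below are assumed for illustration and are not taken from the three retrofitted buildings:

```python
import math

# Why base isolation works, in SDOF terms: the fundamental period is
# T = 2*pi*sqrt(m/k), so softening the base lengthens the period and shifts
# the structure away from the high-acceleration region of the design spectrum.

def natural_period(mass_kg, stiffness_n_per_m):
    return 2 * math.pi * math.sqrt(mass_kg / stiffness_n_per_m)

m = 1.0e6        # kg, effective superstructure mass (assumed)
k_fixed = 1.0e9  # N/m, fixed-base lateral stiffness (assumed)
T_fixed = natural_period(m, k_fixed)

# Lengthening the period by at least 3x, as the isolation systems here are
# designed to do, means the isolated lateral stiffness must drop by a factor
# of 9, since T scales as 1/sqrt(k).
k_iso = k_fixed / 9.0
T_iso = natural_period(m, k_iso)
# T_iso / T_fixed == 3 exactly, independent of the assumed m and k
```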
Procedia PDF Downloads 80
10536 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT
Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar
Abstract:
X-ray attenuation coefficient [µ(E)] of any substance, for energy (E), is a sum of the contributions from the Compton scattering [ μCom(E)] and photoelectric effect [µPh(E)]. In terms of the, electron density (ρe) and the effective atomic number (Zeff) we have µCom(E) is proportional to [(ρe)fKN(E)] while µPh(E) is proportional to [(ρeZeffx)/Ey] with fKN(E) being the Klein-Nishina formula, with x and y being the exponents for photoelectric effect. By taking the sample's HU at two different excitation voltages (V=V1, V2) of the CT machine, we can solve for X=ρe, Y=ρeZeffx from these two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion are also dependent on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow: <µ(V)>=<µw(V)>[1+HU(V)/1000] where the subscript 'w' refers to water and the averaging process <….> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of μ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)>, <µ(V2)> with V1≠V2. These coefficients of inversion would naturally depend upon S(E, V) and D(E). We numerically investigate this dependence for some practical cases, by taking V = 100 , 140 kVp, as are used for cardiological investigations. The S(E,V) are generated by using the Boone-Seibert source spectrum, being superposed on aluminium filters of different thickness lAl with 7mm≤lAl≤12mm and the D(E) is considered to be that of a typical Si[Li] solid state and GdOS scintilator detector. 
The values of X and Y found using the calculated inversion coefficients have errors below 2% for data with solutions of glycerol, sucrose and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X within 1%. For high-Zeff materials like KOH, Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two detector types is negligible; the detector type does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor when calculating the coefficients of inversion.
Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum
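The inversion step described above amounts to solving a 2×2 linear system for X and Y from two spectrum-averaged measurements. A minimal sketch, with purely illustrative coefficients (the actual values must be computed by averaging over S(E,V) and D(E)):

```python
import numpy as np

# Hypothetical inversion coefficients: row i gives the Compton and
# photoelectric weights at tube voltage Vi. Illustrative numbers only;
# real coefficients come from averaging mu(E) over S(E, V) and D(E).
A = np.array([[1.00, 0.30],   # at V1 = 100 kVp
              [1.00, 0.12]])  # at V2 = 140 kVp

def invert_dect(mu_v1, mu_v2, coeffs=A):
    """Recover X = rho_e and Y = rho_e * Zeff**x from two
    spectrum-averaged attenuation observations <mu(V1)>, <mu(V2)>."""
    X, Y = np.linalg.solve(coeffs, np.array([mu_v1, mu_v2]))
    return X, Y

# Forward-project a known (X, Y) pair, then recover it by inversion.
X_true, Y_true = 3.34, 8.10                  # arbitrary units
mu1, mu2 = A @ np.array([X_true, Y_true])
X_rec, Y_rec = invert_dect(mu1, mu2)
```

Because the two equations are linear in X and Y, a well-conditioned coefficient matrix is all that is needed for a stable inversion.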
Procedia PDF Downloads 404
10535 Differences in Assessing Hand-Written and Typed Student Exams: A Corpus-Linguistic Study
Authors: Jutta Ransmayr
Abstract:
The digital age has long since arrived in Austrian schools, and both society and educationalists demand that digital means be integrated accordingly into day-to-day school routines. Therefore, the Austrian school-leaving exam (A-levels) can now be written either by hand or on a computer. However, the choice of writing medium (pen and paper or computer) for written examination papers, which are considered 'high-stakes' exams, raises a number of questions that until recently had not been adequately investigated, such as: What effects do the different conditions of text production in the written German A-levels have on normative linguistic accuracy? How do the spelling skills shown in German A-level papers written with a pen differ from those written on the computer? How is the teachers' assessment related to this? And which practical desiderata for German didactics can be derived from this? These questions were investigated in a trilateral pilot project of the Austrian Center for Digital Humanities (ACDH) of the Austrian Academy of Sciences and the University of Vienna, in cooperation with the Austrian Ministry of Education and the Council for German Orthography. A representative Austrian learner corpus, consisting of around 530 German A-level papers from all over Austria (pen- and computer-written), was set up and subjected to a quantitative (corpus-linguistic and statistical) and qualitative investigation of the spelling and punctuation performance of the high school graduates and of the differences between pen- and computer-written papers and their assessments. Relevant studies are currently available mainly from the Anglophone world. These have shown that writing on the computer increases the motivation to write and has positive effects on the length of the text and, in some cases, also on its quality.
Depending on the writing situation and other technical aids, better results in terms of spelling and punctuation have also been found in computer-written texts as compared to handwritten ones. Studies also point towards a tendency among teachers to rate handwritten texts better than computer-written texts. In this paper, the first comparable results from the German-speaking area are presented. The results show, on the one hand, that there are significant differences between handwritten and computer-written work with regard to performance in orthography and punctuation. On the other hand, the corpus-linguistic investigation and the subsequent statistical analysis made it clear that not only the teachers' assessments of the students' spelling performance but also their overall assessments of the exam papers vary enormously; the production medium (pen and paper or computer) also seems to play a decisive role.
Keywords: exam paper assessment, pen and paper or computer, learner corpora, linguistics
Procedia PDF Downloads 174
10534 Rapid Processing Techniques Applied to Sintered Nickel Battery Technologies for Utility Scale Applications
Authors: J. D. Marinaccio, I. Mabbett, C. Glover, D. Worsley
Abstract:
Through the use of novel rapid processing techniques such as screen printing and near-infrared (NIR) radiative curing, the process time for sintering nickel plaques, applicable to alkaline nickel battery chemistries, has been drastically reduced from in excess of 200 minutes with conventional convection methods to below 2 minutes using NIR curing. Steps have also been taken to remove the need for forming gas as a reducing agent by implementing carbon as an in-situ reducing agent within the ink formulation.
Keywords: batteries, energy, iron, nickel, storage
Procedia PDF Downloads 444
10533 The Effects of Qigong Exercise Intervention on the Cognitive Function in Aging Adults
Authors: D. Y. Fong, C. Y. Kuo, Y. T. Chiang, W. C. Lin
Abstract:
Objectives: Qigong is an ancient Chinese practice in pursuit of a healthier body and a more peaceful mindset. It emphasizes the restoration of vital energy (Qi) in body, mind, and spirit. The practice combines gentle movements and mild breathing, which help practitioners reach a state of tranquility. On account of these features of Qigong, we first use a cross-sectional methodology to compare differences in cognitive function among practitioners of varied Qigong levels with event-related potentials (ERP) and electroencephalography (EEG). Second, we use a longitudinal methodology to explore pretest-posttest effects in Qigong trainees on ERP and EEG. The current study adopts the Attentional Network Test (ANT) to examine the participants' cognitive function; aging-related research has demonstrated a declining trend in cognition in older adults, and exercise might ameliorate this deterioration. Qigong exercise integrates physical posture (muscle strength), breathing technique (aerobic ability) and focused intention (attention), so the researchers hypothesize that it might improve cognitive function in aging adults. Method: Sixty participants were involved in this study: 20 young adults (21.65±2.41 y) with normal physical activity (YA), 20 Qigong experts (60.69±12.42 y) with over 7 years of Qigong practice experience (QE), and 20 normal, healthy adults (52.90±12.37 y) with no Qigong practice experience as the experimental group (EG). The EG participants took Qigong classes 2 times a week, 2 hours per session, for 24 weeks, in order to examine the effect of the Qigong intervention on cognitive function. ANT tasks (alerting, orienting, and executive control networks) were adopted to evaluate the participants' cognitive function via the ERP P300 component and P300 amplitude topography.
Results: Behavioral data: 1. The reaction time (RT) of YA was faster than that of the other two groups, and EG was faster than QE in the cue and flanker conditions of the ANT task. 2. In EG, posttest RT was faster than pretest RT in the cue and flanker conditions. 3. There was no difference among the three groups on the orienting, alerting, and executive control networks. ERP data: 1. P300 amplitude in QE was larger than in EG at the Fz electrode across the orienting, alerting, and executive control networks. 2. P300 amplitude in EG was larger at pretest than at posttest on the orienting network. 3. P300 latency revealed no difference among the three groups in the three networks. Conclusion: Taken together, these findings provide neuro-electrical evidence that older adults who practice Qigong may develop a more comprehensive compensatory mechanism that also benefits behavioral performance.
Keywords: Qigong, cognitive function, aging, event-related potential (ERP)
Procedia PDF Downloads 397
10532 Performance of the New Laboratory-Based Algorithm for HIV Diagnosis in Southwestern China
Authors: Yanhua Zhao, Chenli Rao, Dongdong Li, Chuanmin Tao
Abstract:
The Chinese Centers for Disease Control and Prevention (CCDC) issued a new laboratory-based algorithm for HIV diagnosis in April 2016, which initially screens with a combination HIV-1/HIV-2 antigen/antibody fourth-generation immunoassay (IA), followed, when reactive, by an HIV-1/HIV-2 undifferentiated antibody IA in duplicate. Reactive specimens with concordant results undergo supplemental tests with western blots or HIV-1 nucleic acid tests (NATs), and non-reactive specimens with discordant results receive HIV-1 NATs, p24 antigen tests, or follow-up tests after 2-4 weeks. However, little data evaluating the application of the new algorithm have been reported to date. This study evaluated the performance of the new laboratory-based HIV diagnostic algorithm in an inpatient population of Southwest China over the initial 6 months, compared with the old algorithm. Plasma specimens collected from inpatients from May 1, 2016, to October 31, 2016, were submitted to the laboratory for HIV screening performed by both the new testing algorithm and the old version. The sensitivity and specificity of the algorithms and the difference in the categorized numbers of plasmas were calculated. Under the new algorithm, 170 of the total 52 749 plasma specimens were confirmed as HIV-infected (0.32%). The sensitivity and specificity of the new algorithm were 100% (170/170) and 100% (52 579/52 579), respectively, while 167 HIV-1 positive specimens were identified by the old algorithm, with a sensitivity of 98.24% (167/170) and a specificity of 100% (52 579/52 579). Three acute HIV-1 infections (AHIs) and two early HIV-1 infections (EHIs) were identified by the new algorithm; the former were missed by the old procedure. Compared with the old version, the new algorithm produced fewer WB-indeterminate results (2 vs. 16, p = 0.001), which led to fewer follow-up tests.
Therefore, the new HIV testing algorithm is more sensitive for detecting acute HIV-1 infections while maintaining the ability to verify established HIV-1 infections, and it dramatically decreases the number of WB-indeterminate specimens.
Keywords: algorithm, diagnosis, HIV, laboratory
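As a rough sketch of the triage flow described in the abstract (simplified and illustrative only; the branch conditions are our reading of the abstract's wording, and this is in no way a clinical tool):

```python
def triage(screen_reactive, ab_duplicate=None):
    """Illustrative triage of one specimen under the 2016 algorithm as
    summarized in the abstract. Returns the next laboratory action."""
    if not screen_reactive:
        # Fourth-generation Ag/Ab screening non-reactive.
        return "report negative"
    if ab_duplicate is None:
        # Screening reactive: run the antibody IA in duplicate.
        return "run antibody IA in duplicate"
    if all(ab_duplicate):
        # Concordant reactive duplicate results.
        return "supplemental: western blot or HIV-1 NAT"
    # Discordant (or duplicate non-reactive) after a reactive screen.
    return "HIV-1 NAT / p24 antigen / follow-up in 2-4 weeks"

decision = triage(True, ab_duplicate=(True, False))
```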
Procedia PDF Downloads 404
10531 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing
Authors: Kedar Hardikar, Joe Varghese
Abstract:
Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly to protecting electronic components by the formation of a "Faraday cage." The reliability requirements for the conductive adhesive vary widely depending on the application and expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the adhesive, which depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature with high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to project test data and observed failures onto field performance, systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models, such as the Arrhenius model, are based on rate kinetics and typically rely on an assumption of degradation that is linear in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in the electronic circuit of a capacitive sensor. Degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by the capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, the degradation is nonlinear in time and exhibits a square-root-of-time (√t) dependence.
It is also shown that, for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate nonlinear degradation of the conductive adhesive into the development of an acceleration factor. This method can be extended to applications where nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the use of the conventional linear degradation approach can overestimate or underestimate field performance. This work provides guidelines for the suitability of the linear degradation approximation for such varied applications.
Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model
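One way to see the effect described above: if degradation follows D(t) = k·t^n and failure occurs at a fixed critical level D_crit, then t_fail = (D_crit/k)^(1/n), so the time-to-failure acceleration factor is the rate ratio raised to the power 1/n. For √t degradation (n = 0.5) the acceleration factor is the square of the linear-case value. The sketch below uses a plain Arrhenius rate ratio with an illustrative activation energy; the study itself relies on humidity-aware models such as Hallberg-Peck, so this is a simplified stand-in:

```python
import math

def af_rate(ea_ev, t_test_k, t_field_k, k_boltz=8.617e-5):
    """Arrhenius ratio of degradation-rate constants, test over field."""
    return math.exp(ea_ev / k_boltz * (1.0 / t_field_k - 1.0 / t_test_k))

def af_time_to_failure(rate_ratio, exponent):
    """Time-to-failure acceleration factor for D(t) = k * t**exponent
    with failure at a fixed D_crit: AF = rate_ratio**(1/exponent)."""
    return rate_ratio ** (1.0 / exponent)

# Illustrative: 85 C test vs 45 C field, Ea = 0.7 eV (assumed values).
r = af_rate(ea_ev=0.7, t_test_k=358.15, t_field_k=318.15)
af_linear = af_time_to_failure(r, exponent=1.0)  # conventional assumption
af_sqrt = af_time_to_failure(r, exponent=0.5)    # sqrt-t, diffusion-limited
```

Using the linear assumption when the true exponent is 0.5 therefore understates the acceleration (and hence the projected field lifetime) by a factor equal to the rate ratio itself.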
Procedia PDF Downloads 140
10530 Dynamic Thermomechanical Behavior of Adhesively Bonded Composite Joints
Authors: Sonia Sassi, Mostapha Tarfaoui, Hamza Benyahia
Abstract:
Composite materials are increasingly being used as a substitute for metallic materials in many technological applications, such as aeronautics, aerospace, marine and civil engineering. For composite materials, the thermomechanical response evolves with the strain rate. The energy balance equation for anisotropic, elastic materials includes heat source terms that govern the conversion of some of the kinetic work into heat; the remainder contributes to the stored energy driving the damage process in the composite material. In this paper, we investigate the bulk thermomechanical behavior of adhesively-bonded composite assemblies to quantitatively assess the temperature rise which accompanies adiabatic deformations. In particular, adhesively bonded joints in glass/vinylester composite material are subjected to in-plane dynamic loads over a range of strain rates. The dynamic thermomechanical behavior of this material is investigated using compression Split Hopkinson Pressure Bars (SHPB) coupled with a high-speed infrared camera and a high-speed camera to measure in real time the dynamic behavior, the damage kinetics and the temperature variation in the material. The interest of the high-speed IR camera is that it shows in real time the evolution of heat dissipation in the material when damage occurs. However, this technique does not produce thermal values correlated with the stress-strain curves of the composite material, because its response time is long compared with the duration of the dynamic test. For this reason, the authors revisit the application of small thermocouples placed on the surface of the material to obtain real thermal measurements under dynamic loading. Experiments with dynamically loaded material show that the thermocouples record temperature values with a short typical rise time as a result of the conversion of kinetic work into heat during the compression test.
These results show that small thermocouples can provide an important complement to noncontact techniques such as the high-speed infrared camera. A significant temperature rise was observed in in-plane compression tests, especially at high strain rates. During the tests, it was noticed that a sudden temperature rise occurs when macroscopic damage occurs. This rise in temperature is linked to the rate of damage: the more severe the damage, the higher the localized temperature detected. This shows the strong relationship between the occurrence of damage and the induced heat dissipation. For the in-plane tests, the damage takes place more abruptly as the strain rate is increased. The difference observed in the thermomechanical response in in-plane compression is explained only by the difference in the damage process active during the compression tests. In this study, we highlighted the dependence of the thermomechanical response on the strain rate of bonded specimens. The heat dissipation of this material cannot therefore be ignored and should be taken into account when defining damage models for impact loading.
Keywords: adhesively-bonded composite joints, damage, dynamic compression tests, energy balance, heat dissipation, SHPB, thermomechanical behavior
Procedia PDF Downloads 217
10529 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, and it has become more important to understand customers' needs in this strong market, especially those of customers who are looking to change their service providers. Churn prediction is therefore now a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish this. Churn prediction has become a very important topic in machine learning classification in the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify factors of customers' churn based on their past service usage history. Aiming at this objective, the study makes use of feature selection, normalization, and feature engineering. Then, this study compared the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models has proven to produce better results. The results showed that Gradient Boosting with the feature selection technique outperformed the others in this study by achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score
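A sketch of the pipeline the study describes (normalization, feature selection, then Gradient Boosting scored by F1 and ROC-AUC), using a synthetic stand-in for the Orange dataset; all dataset parameters here are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic, imbalanced stand-in for a churn dataset (illustrative only).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Normalization -> feature selection -> Gradient Boosting, as in the study.
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=10),
                      GradientBoostingClassifier(random_state=0))
model.fit(X_tr, y_tr)

f1 = f1_score(y_te, model.predict(X_te))
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

Keeping the scaler and selector inside one pipeline ensures they are fit only on the training split, avoiding leakage into the test-set F1 and AUC.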
Procedia PDF Downloads 137
10528 Development of a Computer Based, Nutrition and Fitness Programme and Its Effect on Nutritional Status and Fitness of Obese Adults
Authors: Richa Soni, Vibha Bhatnagar, N. K. Jain
Abstract:
This study was conducted to develop a computer-mediated programme for weight management and physical fitness and to examine its efficacy in reducing weight and improving physical fitness in obese adults. A user-friendly, computer-based programme was developed to provide a simple, quick and easy method of assessing energy balance at the individual level. The programme had four main sections, viz. personal profile, know about your weight, fitness, and food exchange list. The computer programme provided facilities for creating an individual profile, tracking meals and physical activities, suggesting nutritional and exercise requirements, planning calorie-specific menus, keeping food diaries, and revising the diet and exercise plans if needed. The programme also provided information on obesity, underweight, and physical fitness. An exhaustive food exchange list was included to assist the user in making the right food choices. The developed programme was evaluated by a panel of 15 experts comprising endocrinologists, nutritionists and diet counselors. Suggestions given by the experts were noted down, the entire programme was modified in light of these suggestions, and it was re-evaluated by the same panel of experts. For assessing the impact of the programme, 22 obese subjects were selected purposively and randomly assigned to an intervention group (n=12) and a no-information control group (n=10). The programme group was asked to strictly follow the programme for one month. Significant reduction in the intake of energy, fat and carbohydrates was observed, while the intake of fruits and green leafy vegetables increased. The programme was also found to be effective in reducing body weight, body fat percent and body fat mass, whereas total body water and physical fitness scores improved significantly.
There was no significant alteration observed in any parameter in the control group.
Keywords: body composition, body weight, computer programme, physical fitness
Procedia PDF Downloads 290
10527 Assessing the Impact of Autonomous Vehicles on Supply Chain Performance – A Case Study of Agri-Food Supply Chain
Authors: Nitish Suvarna, Anjali Awasthi
Abstract:
In an era marked by rapid technological advancements, the integration of Autonomous Vehicles into supply chain networks represents a transformative shift, promising to redefine the paradigms of logistics and transportation. This thesis delves into a comprehensive assessment of the impact of autonomous vehicles on supply chain performance, with a particular focus on network design, operational efficiency, and environmental sustainability. Employing the advanced simulation capabilities of anyLogistix (ALX), the study constructs a digital twin of a conventional supply chain network, encompassing suppliers, production facilities, distribution centers, and customer endpoints. The research methodically integrates Autonomous Vehicles into this intricate network, aiming to unravel the multifaceted effects on transportation logistics, including transit times, cost-efficiency, and sustainability. Through simulations and scenario analysis, the study scrutinizes the operational resilience and adaptability of supply chains in the face of dynamic market conditions and disruptive technologies like Autonomous Vehicles. Furthermore, the thesis undertakes a carbon footprint analysis, quantifying the environmental benefits and challenges associated with the adoption of Autonomous Vehicles in supply chain operations. The insights from this research are anticipated to offer a strategic framework for industry stakeholders, guiding the adoption of Autonomous Vehicles to foster a more efficient, responsive, and sustainable supply chain ecosystem. The findings aim to serve as a cornerstone for future research and practical implementations in the realm of intelligent transportation and supply chain management.
Keywords: autonomous vehicle, agri-food supply chain, ALX simulation, anyLogistix
Procedia PDF Downloads 80
10526 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory
Authors: Hiba El Assibi
Abstract:
This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with Alpha-Beta Pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project explores additional enhancements like symmetry checking and code optimizations to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and potentially help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus Artificial Intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights on performance and computational efficiency. We also discuss the scalability of our approach, considering different board sizes (numbers of pits and stones) and rule variations, and studying how these affect performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhancing our understanding of human cognitive processes in game settings, and offering insights into creating balanced and engaging game experiences.
Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory
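The core pruning logic described above can be sketched generically; the toy game tree below is illustrative only, and a real Kalah solver would add Kalah move generation, transposition tables, and the symmetry checks mentioned in the abstract:

```python
import math

def minimax(node, depth, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta pruning over an abstract game tree.
    `children(node)` yields successor states; `value(node)` scores leaves."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, minimax(child, depth - 1, alpha, beta,
                                     False, children, value))
            alpha = max(alpha, best)
            if beta <= alpha:        # beta cutoff: opponent avoids this line
                break
        return best
    best = math.inf
    for child in kids:
        best = min(best, minimax(child, depth - 1, alpha, beta,
                                 True, children, value))
        beta = min(beta, best)
        if beta <= alpha:            # alpha cutoff
            break
    return best

# Toy two-ply tree: after "a" yields min(3, 5) = 3, branch "b" is pruned
# as soon as leaf b1 = 2 proves it cannot beat alpha = 3.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
score = minimax("root", 2, -math.inf, math.inf, True,
                lambda n: tree.get(n, []), lambda n: leaf.get(n, 0))
```

Pruning never changes the minimax value; it only skips branches that provably cannot influence the root decision, which is what makes a start-to-finish search of a game like Kalah tractable.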
Procedia PDF Downloads 58
10525 Direct-Displacement Based Design for Buildings with Non-Linear Viscous Dampers
Authors: Kelly F. Delgado-De Agrela, Sonia E. Ruiz, Marco A. Santos-Santiago
Abstract:
An approach is proposed for the design of regular buildings equipped with non-linear viscous dissipating devices. The approach is based on a direct-displacement seismic design method which satisfies seismic performance objectives. The global system is formed by regular structural moment frames capable of supporting gravity and lateral loads with elastic response behavior, plus a set of non-linear viscous dissipating devices which reduce the structural seismic response. The dampers are characterized by two design parameters: (1) a positive real exponent α, which represents the non-linearity of the damper, and (2) the damping coefficient C of the device, whose constitutive force-velocity law is given by F = Cv^α, where v is the velocity between the ends of the damper. The procedure is carried out using a substitute structure. Two limit states are verified: serviceability and near collapse. The reduction of the spectral ordinates by the additional damping assumed in the design process and introduced to the structure by the non-linear viscous dampers is performed according to a damping reduction factor. For the design of the non-linear damper system, the real velocity is considered instead of the pseudo-velocity. The proposed design methodology is applied to an 8-story steel moment-frame building equipped with non-linear viscous dampers, located in the intermediate soil zone of Mexico City, with a dominant period Tₛ = 1 s. In order to validate the approach, nonlinear static analyses and nonlinear time history analyses are performed.
Keywords: based design, direct-displacement based design, non-linear viscous dampers, performance design
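The damper law F = Cv^α above is straightforward to evaluate directly; the only subtlety is the sign convention for reversed motion. A minimal sketch (the C value and velocities below are illustrative, not taken from the study):

```python
def damper_force(c, alpha, v):
    """Force of a nonlinear viscous damper, F = C * |v|**alpha * sign(v).
    alpha = 1 recovers a linear damper; alpha < 1 is typical of the
    nonlinear devices considered here. Values are illustrative."""
    sign = 1.0 if v >= 0 else -1.0
    return c * abs(v) ** alpha * sign

# A damper with C = 300 kN*(s/m)^alpha and alpha = 0.5 at v = 0.25 m/s:
f_nl = damper_force(c=300.0, alpha=0.5, v=0.25)    # nonlinear device
f_lin = damper_force(c=300.0, alpha=1.0, v=0.25)   # linear reference
```

At low velocities the sub-linear exponent yields a larger force than the equal-C linear device, which is why α enters the damping reduction factor alongside C in the design procedure.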
Procedia PDF Downloads 196
10524 Quantitative Analysis of (+)-Catechin and (-)-Epicatechin in Pentace burmanica Stem Bark by HPLC
Authors: Thidarat Duangyod, Chanida Palanuvej, Nijsiri Ruangrungsi
Abstract:
Pentace burmanica Kurz., belonging to the Malvaceae family, is commonly used against diarrhea in Thai traditional medicine. A method for the quantification of (+)-catechin and (-)-epicatechin in P. burmanica stem bark from 12 different Thai markets by reverse-phase high-performance liquid chromatography (HPLC) was investigated and validated. The analysis was performed on a Shimadzu DGU-20A3 HPLC equipped with a Shimadzu SPD-M20A photodiode array detector. The separation was accomplished on an Inertsil ODS-3 column (5 µm, 4.6 × 250 mm) using 0.1% formic acid in water (A) and 0.1% formic acid in acetonitrile (B) as the mobile phase at a flow rate of 1 ml/min. Isocratic elution was set at 20% B for 15 min, and the column temperature was maintained at 40 ºC. Detection was at a wavelength of 280 nm. Both (+)-catechin and (-)-epicatechin were present in the ethanolic extract of P. burmanica stem bark. The content of (-)-epicatechin was found to be 59.74 ± 1.69 µg/mg of crude extract. In contrast, quantitation of the (+)-catechin content was omitted because of its small amount. The method was linear over a range of 5-200 µg/ml with good coefficients (r² > 0.99) for (+)-catechin and (-)-epicatechin. Limits of detection were found to be 4.80 µg/ml for (+)-catechin and 5.14 µg/ml for (-)-epicatechin. Limits of quantitation of (+)-catechin and (-)-epicatechin were 14.54 µg/ml and 15.57 µg/ml, respectively. Good repeatability and intermediate precision (%RSD < 3) were found in this study. The average recoveries of (+)-catechin and (-)-epicatechin were in the range of 91.11-97.02% and 88.53-93.78%, respectively, with %RSD less than 2. The peak purity indices of the catechins were more than 0.99. The results suggest that the HPLC method is precise and accurate and can be conveniently used for (+)-catechin and (-)-epicatechin determination in the ethanolic extract of P. burmanica stem bark. Moreover, the stem bark of P. burmanica was found to be a rich source of (-)-epicatechin.
Keywords: pentace burmanica, (+)-catechin, (-)-epicatechin, high performance liquid chromatography
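The reported linearity, LOD, and LOQ can be reproduced from a calibration line. A common convention, assumed here since the abstract does not state it, is the ICH-style LOD = 3.3·σ/slope and LOQ = 10·σ/slope, with σ the residual standard deviation of the regression. The peak areas below are made up for illustration:

```python
import statistics

def calibration_stats(conc, resp):
    """Least-squares calibration line plus ICH-style detection limits:
    LOD = 3.3 * s_resid / slope, LOQ = 10 * s_resid / slope."""
    n = len(conc)
    mx, my = statistics.fmean(conc), statistics.fmean(resp)
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, resp)) / sxx
    intercept = my - slope * mx
    resid = [y - (slope * x + intercept) for x, y in zip(conc, resp)]
    s_resid = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return slope, intercept, 3.3 * s_resid / slope, 10 * s_resid / slope

# Illustrative calibration over the 5-200 ug/ml range from the abstract;
# the peak-area responses are invented, roughly 2 units per ug/ml.
conc = [5, 25, 50, 100, 200]
resp = [10.2, 50.5, 99.0, 201.0, 399.5]
slope, intercept, lod, loq = calibration_stats(conc, resp)
```

By construction LOQ/LOD = 10/3.3, matching the roughly threefold gap between the reported detection and quantitation limits.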
Procedia PDF Downloads 457
10523 Simulation of the Large Hadrons Collisions Using Monte Carlo Tools
Authors: E. Al Daoud
Abstract:
In many cases, theoretical treatments are available for models for which there is no perfect physical realization. In this situation, the only possible test for an approximate theoretical solution is comparison with data generated from a computer simulation. In this paper, Monte Carlo tools are used to study and compare elementary particle models. All the experiments are implemented using 10000 events, and the simulated energy is 13 TeV. The means and the distributions of several variables are calculated for each model using MadAnalysis 5. Anomalies in the results can be seen in the muon masses of the minimal supersymmetric standard model and the two-Higgs-doublet model.
Keywords: Feynman rules, hadrons, Lagrangian, Monte Carlo, simulation
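As an illustration of the basic Monte Carlo workflow described above (generate many events, then compare the mean and spread of an observable against the model's prediction), here is a toy sketch; the "observable" and its Gaussian smearing are invented, and real studies generate events with dedicated tools and analyze them with MadAnalysis 5:

```python
import random
import statistics

random.seed(13)
N_EVENTS = 10_000   # same event count as used in the abstract

def toy_event():
    """Toy stand-in for one generated collision event: a single observable
    with a fixed true value smeared by 'detector' resolution (both made up)."""
    true_value = 125.0          # illustrative central value
    return random.gauss(true_value, 2.0)

sample = [toy_event() for _ in range(N_EVENTS)]
mean = statistics.fmean(sample)
sigma = statistics.stdev(sample)
```

With 10000 events the statistical error on the mean is σ/√N ≈ 0.02, so an anomaly such as a shifted mass peak is easily distinguished from statistical fluctuation.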
Procedia PDF Downloads 321
10522 Laser - Ultrasonic Method for the Measurement of Residual Stresses in Metals
Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya
Abstract:
A theoretical analysis is carried out to obtain the relation between the ultrasonic wave velocity and the value of the residual stresses. The laser-ultrasonic method is developed to evaluate residual stresses and subsurface defects in metals. The method is based on laser thermooptical excitation of longitudinal ultrasonic waves and their detection by a broadband piezoelectric detector. A laser pulse with a time duration of 8 ns (full width at half maximum) and an energy of 300 µJ is absorbed in a thin layer of a special generator that is inclined relative to the object under study. The non-uniform heating of the generator causes the formation of a broadband, powerful pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse with the profile of the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through a prism that serves as an acoustic duct. At the 'laser-ultrasonic transducer-object' interface, most of the longitudinal wave energy is converted into shear, subsurface longitudinal and Rayleigh waves. These spread within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal that corresponds to the detected acoustic signal is acquired by an analog-to-digital converter and then mathematically processed and visualized with a personal computer. The distance between the generator and the piezodetector, as well as the propagation times of the acoustic waves in the acoustic ducts, are the characteristic parameters of the laser-ultrasonic transducer and are determined using calibration samples. The relative precision of the measurement of the velocity of longitudinal ultrasonic waves is 0.05%, which corresponds to approximately ±3 m/s for steels of conventional quality.
This precision allows one to determine the mechanical stress in steel samples with a minimal detection threshold of approximately 22.7 MPa. Results are presented for the measured dependence of the velocity of longitudinal ultrasonic waves in the samples on the value of the applied compressive stress in the range of 20-100 MPa.
Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses
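The numbers above imply a linear stress-acoustic sensitivity of roughly 3 m/s per 22.7 MPa. A minimal sketch of the corresponding conversion under a linear acoustoelastic assumption; the coefficient and velocities are illustrative, inferred only from the figures quoted above:

```python
def stress_from_velocity(v_measured, v_unstressed, k_acoustoelastic):
    """Linear acoustoelastic estimate: sigma = (v - v0) / K, with K the
    stress-acoustic coefficient in (m/s)/MPa (material-dependent)."""
    return (v_measured - v_unstressed) / k_acoustoelastic

# Implied by the abstract: a +-3 m/s velocity resolution maps to a
# ~22.7 MPa detection threshold, so K ~ 3 / 22.7 (m/s)/MPa (illustrative).
K = 3.0 / 22.7
sigma = stress_from_velocity(5925.0, 5920.0, K)   # MPa; speeds are made up
threshold = 3.0 / K                               # minimal detectable stress
```

In practice K is calibrated per material from the measured velocity-stress dependence (here, the 20-100 MPa compression data).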
Procedia PDF Downloads 328
10521 Biochar Affects Compressive Strength of Portland Cement Composites: A Meta-Analysis
Authors: Zhihao Zhao, Ali El-Nagger, Johnson Kau, Chris Olson, Douglas Tomlinson, Scott X. Chang
Abstract:
One strategy to reduce CO₂ emissions from cement production is to reduce the amount of Portland cement produced by replacing it with supplementary cementitious materials (SCMs). Biochar, an eco-friendly and stable porous pyrolytic material, is a potential SCM. However, the effects of biochar addition on the performance of Portland cement composites are not fully understood. This meta-analysis investigated the impact of biochar addition on the 7- and 28-day compressive strength of Portland cement composites based on 606 paired observations. Biochar feedstock type, pyrolysis conditions, pre-treatments and modifications, biochar dosage, and curing type all influenced the compressive strength of Portland cement composites. Biochars obtained from plant-based feedstocks (except rice and hardwood) improved the 28-day compressive strength of Portland cement composites by 3-13%. Biochars produced at pyrolysis temperatures higher than 450 °C, with a heating rate of around 10 °C/min, increased the 28-day compressive strength more effectively. Furthermore, the addition of biochars with small particle sizes increased the compressive strength of Portland cement composites by 2-7% compared to composites without biochar. Biochar dosages below 2.5% of the binder weight enhanced both the 7- and 28-day compressive strengths, and common curing methods maintained the effect of biochar addition. However, adding fine and coarse aggregates such as sand and gravel to the mix dominates the compressive strength of the resulting concrete and mortar, diminishing the biochar effect and rendering it nonsignificant. We conclude that appropriate biochar addition can maintain or enhance the mechanical performance of Portland cement composites, and future research should explore the mechanisms of biochar effects on the performance of cement composites.
Keywords: biochar, Portland cement, construction, compressive strength, meta-analysis
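Meta-analyses of paired treatment/control observations like those above commonly summarize each pair as a natural-log response ratio, back-transformed to a percent change in strength. The abstract does not state which effect size was used, so the sketch below assumes the standard lnRR metric with made-up example values.

```python
# Sketch of a standard meta-analytic effect size for paired strength data:
# lnRR = ln(mean_treatment / mean_control), back-transformed to % change.
# The 42.4 MPa / 40.0 MPa pair below is purely illustrative.
import math

def ln_response_ratio(mean_biochar, mean_control):
    """Natural-log response ratio for one paired observation."""
    return math.log(mean_biochar / mean_control)

def percent_change(lnrr):
    """Back-transform lnRR to a percent change relative to the control."""
    return (math.exp(lnrr) - 1.0) * 100.0

lnrr = ln_response_ratio(42.4, 40.0)
print(round(percent_change(lnrr), 1))  # 6.0 -> a 6% strength increase
```

Averaging such ratios across the 606 paired observations (typically weighted by their variances) yields overall effects like the 3-13% improvements reported above.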
Procedia PDF Downloads 73
10520 Effect of Fuel Type on Design Parameters and Atomization Process for Pressure Swirl Atomizer and Dual Orifice Atomizer for High Bypass Turbofan Engine
Authors: Mohamed K. Khalil, Mohamed S. Ragab
Abstract:
Atomizers are used in many engineering applications, including diesel engines, petrol engines and spray combustion in furnaces, as well as gas turbine engines. These atomizers are used to increase the specific surface area of the fuel, which leads to high rates of fuel mixing and evaporation. In all combustion systems, reducing the mean drop size is a challenge; doing so has many advantages, since it leads to rapid and easier ignition, a higher volumetric heat release rate, a wider burning range and lower exhaust concentrations of pollutant emissions. Pressure atomizers come in several design configurations, such as the swirl atomizer (simplex), dual orifice, spill return, plain orifice, duplex and fan spray. Simplex pressure atomizers are the most common type of all. Among all types of atomizers, pressure swirl types form a special category owing to their quality of atomization, reliability of operation, simplicity of construction and low energy expenditure. Their disadvantages, however, are that they require very high injection pressure and have a low discharge coefficient, owing to the fact that the air core covers the majority of the atomizer orifice. To overcome these problems, the dual orifice atomizer was designed. This paper proposes a detailed mathematical design procedure for both the pressure swirl (simplex) atomizer and the dual orifice atomizer, examines the effects of varying the fuel type, and makes a clear comparison between the two types. Using five types of fuel (JP-5, JA1, JP-4, Diesel and Bio-Diesel) as a case study, it reveals the effect of changing the fuel type and its properties on atomizer design and spray characteristics, which in turn affect the combustion process parameters: Sauter Mean Diameter (SMD), spray cone angle and sheet thickness, with the discharge coefficient varying from 0.27 to 0.35 during takeoff for high bypass turbofan engines.
The spray performance of the pressure swirl fuel injector was compared to that of the dual orifice fuel injector at the same differential pressure and discharge coefficient using Excel. The results are analyzed and compiled into final reliability results for fuel injectors in high bypass turbofan engines. They show that the Sauter Mean Diameter (SMD) of the dual orifice atomizer is larger than that of the pressure swirl atomizer, the film thickness (h) of the dual orifice atomizer is smaller than that of the pressure swirl atomizer, and the spray cone angle (α) of the pressure swirl atomizer is larger than that of the dual orifice atomizer.
Keywords: gas turbine engines, atomization process, Sauter mean diameter, JP-5
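To illustrate how fuel properties enter an SMD estimate of the kind compared above, the sketch below uses Lefebvre's well-known pressure-swirl correlation, SMD = 2.25 σ^0.25 μ_L^0.25 ṁ_L^0.25 ΔP^-0.5 ρ_A^-0.25 (SI units). This is a generic textbook correlation, not necessarily the model used in the paper, and the fluid property values, flow rate and operating conditions are rough illustrative assumptions.

```python
# Sketch: effect of fuel properties on SMD for a pressure-swirl atomizer,
# using Lefebvre's correlation. All property and operating values below are
# approximate, illustrative assumptions, not data from the paper.

def smd_lefebvre(sigma, mu_l, m_dot, delta_p, rho_air):
    """Sauter Mean Diameter (m). Inputs in SI: N/m, Pa*s, kg/s, Pa, kg/m^3."""
    return (2.25 * sigma**0.25 * mu_l**0.25 * m_dot**0.25
            * delta_p**-0.5 * rho_air**-0.25)

fuels = {  # (surface tension N/m, dynamic viscosity Pa*s) -- rough figures
    "JP-4":   (0.023, 0.00083),
    "JP-5":   (0.026, 0.00160),
    "Diesel": (0.028, 0.00240),
}
for name, (sigma, mu) in fuels.items():
    smd = smd_lefebvre(sigma, mu, m_dot=0.02, delta_p=1.0e6, rho_air=10.0)
    print(f"{name}: SMD ~ {smd * 1e6:.1f} um")
```

The correlation makes the qualitative trend in the paper easy to see: fuels with higher surface tension and viscosity (e.g. Diesel) produce larger droplets at the same injection pressure.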
Procedia PDF Downloads 171