Search results for: mathematics performance
790 Development of Perovskite Quantum Dots Light Emitting Diode by Dual-Source Evaporation
Authors: Antoine Dumont, Weiji Hong, Zheng-Hong Lu
Abstract:
Light emitting diodes (LEDs) are steadily becoming the new standard for luminescent display devices because of their energy efficiency, relatively low cost, and the purity of the light they emit. Our research focuses on the optical properties of the lead halide perovskite CsPbBr₃ and its family, which is showing steadily improving performance in LEDs and solar cells. The objective of this work is to investigate CsPbBr₃ as an emitting layer made by physical vapor deposition instead of the usual solution-processed perovskites, for use in LEDs. The deposition in vacuum eliminates any risk of contaminants as well as the necessity for the use of chemical ligands in the synthesis of quantum dots. Initial results show the versatility of the dual-source evaporation method, which allowed us to create different phases in bulk form by altering the mole ratio or deposition rate of CsBr and PbBr₂. The distinct phases Cs₄PbBr₆, CsPbBr₃ and CsPb₂Br₅ – confirmed through XPS (x-ray photoelectron spectroscopy) and X-ray diffraction analysis – have different optical properties and morphologies that can be used for specific applications in optoelectronics. We are particularly focused on the blue shift expected from quantum dots (QDs) and the stability of the perovskite in this form. We have already obtained proof of the formation of QDs through our dual-source evaporation method with electron microscope imaging and photoluminescence testing, which we understand is a first in the community. We also incorporated the QDs in an LED structure to test the electroluminescence and the effect on performance and have already observed a significant wavelength shift. The goal is to reach 480 nm after shifting from the original 528 nm bulk emission. The hole transport layer (HTL) material onto which the CsPbBr₃ is evaporated is a critical part of this study as the surface energy interaction dictates the behaviour of the QD growth. A thorough study to determine the optimal HTL is in progress. A strong blue shift for a typically green emitting material like CsPbBr₃ would eliminate the necessity of using blue emitting Cl-based perovskite compounds and could prove to be more stable in a QD structure. The final aim is to make a perovskite QD LED with strong blue luminescence, fabricated through a dual-source evaporation technique that could be scalable to industry level, making this device a viable and cost-effective alternative to current commercial LEDs.
Keywords: material physics, perovskite, light emitting diode, quantum dots, high vacuum deposition, thin film processing
Procedia PDF Downloads 161
789 Effect of Pre-bonding Storage Period on Laser-treated Al Surfaces
Authors: Rio Hirakawa, Christian Gundlach, Sven Hartwig
Abstract:
In recent years, the use of aluminium has further expanded and is expected to replace steel in the future as vehicles become lighter and more recyclable in order to reduce greenhouse gas (GHG) emissions and improve fuel economy. In line with this, structures and components are becoming increasingly multi-material, with different materials, including aluminium, being used in combination to improve mechanical utility and performance. A common method of assembling dissimilar materials is mechanical fastening, but it has several drawbacks, such as increased manufacturing processes and the influence of substrate-specific mechanical properties. Adhesive bonding and fusion bonding are methods that overcome the above disadvantages. In these two joining methods, surface pre-treatment of the substrate is always necessary to ensure the strength and durability of the joint. Previous studies have shown that laser surface treatment improves the strength and durability of the joint. Yan et al. showed that laser surface treatment of aluminium alloys changes α-Al2O3 in the oxide layer to γ-Al2O3. As γ-Al2O3 has a large specific surface area and is very porous and chemically active, laser-treated aluminium surfaces are expected to undergo physico-chemical changes over time and adsorb moisture and organic substances from the air or storage atmosphere. The impurities accumulated on the laser-treated surface may be released at the adhesive and bonding interface by the heat input to the bonding system during the joining phase, affecting the strength and durability of the joint. However, only a few studies have discussed the effect of such storage periods on laser-treated surfaces. This paper, therefore, investigates the ageing of laser-treated aluminium alloy surfaces through thermal analysis, electrochemical analysis and microstructural observations. AlMg3 sheets of 0.5 mm and 1.5 mm thickness were cut using a water-jet cutting machine, cleaned and degreased with isopropanol, and surface pre-treated with a pulsed fibre laser at 1060 nm wavelength, 70 W maximum power and 55 kHz repetition frequency. The aluminium surface was then analysed using SEM, thermogravimetric analysis (TGA), Fourier transform infrared spectroscopy (FTIR) and cyclic voltammetry (CV) after storage in air for various periods ranging from one day to several months. TGA and FTIR analysed impurities adsorbed on the aluminium surface, while CV revealed changes in the true electrochemically active surface area. SEM also revealed visual changes on the treated surface. In summary, the changes in the laser-treated aluminium surface with storage time were investigated, and the final results were used to determine the appropriate storage period.
Keywords: laser surface treatment, pre-treatment, adhesion, bonding, corrosion, durability, dissimilar material interface, automotive, aluminium alloys
Procedia PDF Downloads 80
788 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still in a very early stage. Black and white benchmark image datasets like MNIST and Fashion MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than colored CIFAR-10. This research will evaluate the performance of the QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed to compare colored images and to determine how much better they are than classical approaches. Only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into greyscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach for a comprehensive dataset of color images. After pre-processing 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, we note that the QCNN approach is ~12% more effective than the traditional classical CNN approaches, and it is possible that applying data augmentation may increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of their processing speed and accuracy when used to perform classification on colored classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
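To make the hybrid workflow above more concrete, the sketch below shows a minimal quanvolution-style feature extractor in PennyLane, assuming a 4-qubit circuit that angle-encodes 2 × 2 greyscale patches; the circuit layout, patch size, and random weights are illustrative assumptions, not the exact model reported in the study.
```python
import numpy as np
import pennylane as qml

n_qubits = 4  # one qubit per pixel in a 2x2 patch (illustrative assumption)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quanv_patch(pixels, weights):
    # Angle-encode the four normalized pixel values into qubit rotations
    for i in range(n_qubits):
        qml.RY(np.pi * pixels[i], wires=i)
    # A small entangling layer acts as the "quantum convolution" filter
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    for i in range(n_qubits):
        qml.RY(weights[i], wires=i)
    # The measured expectation values become feature-map channels
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

# Example: extract features from one 2x2 patch of a greyscale image
patch = np.array([0.1, 0.5, 0.3, 0.9])              # normalized pixel intensities
weights = np.random.uniform(0, 2 * np.pi, n_qubits)  # hypothetical filter weights
features = quanv_patch(patch, weights)
print(features)  # four values fed to the classical part of the classifier
```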
Procedia PDF Downloads 129
787 The Labyrinth - Circular Choral Chant of Dithyramb in the 7th Century BC, Mirroring the Conjunction of the Planets and the Milky Way Circle
Authors: Kleopatra Chatzigiosi
Abstract:
The paper delves into the spatial and mythological examination of the choral chant of Dithyramb in the 7th century BC, its connections to Dionysus, and its role in the origin of drama, exploring harmonious and symbolic aspects of early Greek culture. The primary aim is to analyze the development of Dithyramb in relation to harmonic systems and early musical scales, linking them to circular time and celestial movements. Additionally, the study seeks to unveil the mythological ties of Dithyramb with ancient rituals worshipping Mother Earth Cybele. The methodology involves researching etymology and mythology related to Dithyramb based on Pindar's works and proposing a harmonious design for the performance space of Dithyramb through harmonic spirals inspired by ancient practices. A comparative study with similar choral traditions from other ancient cultures is also included, providing a broader context for the findings of the work. The research uncovers the symbolic significance of Dithyramb as a dramatized representation of harmonic phenomena, leading to human deification within a context of Sacred Architecture, highlighting the intricate connections between music, rituals, and divine worship in ancient Greek culture. The study enriches understanding of the harmonic and symbolic underpinnings of ancient Greek choral traditions, shedding light on the complex interplay between music, mythology, and ritual practices in the development of early theatrical performances. Data was collected through an in-depth analysis of ancient texts, specifically Pindar's Dithyrambs, to trace the etymology and mythological origins of Dithyramb and its associated symbolism. The analysis involved scrutinizing ancient sources to draw connections between Dithyramb, harmonic systems, celestial movements, and mythological narratives, culminating in a comprehensive exploration of the cultural and symbolic significance of this choral tradition. The study addresses how the choral chant of Dithyramb evolved harmoniously within the ancient Greek cultural framework, its connections to celestial phenomena and ritual practices, and the symbolic implications of its mythological associations within a sacred architectural context. The research illuminates the profound cultural, symbolic, and harmonic dimensions of the choral chant of Dithyramb, offering valuable insights into the intersections between music, mythology, and ritual in ancient Greece, enriching scholarly understanding of early theatrical traditions.
Keywords: circular choral chant of dithyramb, “exarchon” (leader), great “eniautos” (year), harmony labyrinth
Procedia PDF Downloads 21
786 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system, and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score, and others are based on lexical similarity or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. This framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures have been tested with this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results when compared against those of some of the well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM). Better accuracy results are obtained when LSTM layers are used in our schema. In terms of a balance between the results, better accuracy results are obtained when dense layers are used. The reason for this is that the model correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis is carried out. In this context, problems have been identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms (words related by etymology). It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian as compared to Greek, taking into account that the linguistic features employed are language independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
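A minimal sketch of the pairwise classification schema is given below in Keras, assuming fixed-length embedded sequences for the SMT output, the NMT output, and the reference; the sequence length, embedding dimension, and layer sizes are illustrative assumptions rather than the configuration reported here.
```python
from tensorflow.keras import layers, Model

seq_len, emb_dim = 50, 100   # assumed sequence length and word-embedding size

smt_in = layers.Input(shape=(seq_len, emb_dim), name="smt_output")
nmt_in = layers.Input(shape=(seq_len, emb_dim), name="nmt_output")
ref_in = layers.Input(shape=(seq_len, emb_dim), name="reference")

shared_lstm = layers.LSTM(64)            # one encoder applied to all three inputs
smt_vec, nmt_vec, ref_vec = (shared_lstm(x) for x in (smt_in, nmt_in, ref_in))

# Concatenated encodings let the dense layers model the similarity between each
# MT output and the reference, and between the two MT outputs.
merged = layers.concatenate([smt_vec, nmt_vec, ref_vec])
hidden = layers.Dense(32, activation="relu")(merged)
label = layers.Dense(1, activation="sigmoid")(hidden)   # 1 = NMT judged better

model = Model(inputs=[smt_in, nmt_in, ref_in], outputs=label)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```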
Procedia PDF Downloads 132
785 In Support of Sustainable Water Resources Development in the Lower Mekong River Basin: Development of Guidelines for Transboundary Environmental Impact Assessment
Authors: Kongmeng Ly
Abstract:
The management of transboundary river basins across developing countries, such as the Lower Mekong River Basin (LMB), is frequently challenging given the development and conservation divergences of the basin countries. Driven by the need to sustain economic performance and reduce poverty, the LMB countries (Cambodia, Lao PDR, Thailand, Viet Nam) are embarking on significant land use changes in the form of hydropower dams to fulfill their energy requirements. This pathway could lead to irreversible changes to the ecosystem of the Mekong River if not properly managed. Given the uncertain trade-offs of hydropower development and operation, the Lower Mekong River Basin countries, through the technical support of the Mekong River Commission (MRC) Secretariat, embarked on the decade-long development of Technical Guidelines for Transboundary Environmental Impact Assessment. Through a series of workshops, seminars, national and regional consultations, and pilot studies, and further development following the recommendations generated through legal and institutional reviews undertaken over a two-decade period, the LMB countries jointly adopted the MRC Technical Guidelines for Transboundary Environmental Impact Assessment (TbEIA Guidelines). These guidelines were developed with particular regard to the experience gained from MRC-supported consultations and technical reviews of the Xayaburi Dam Project, Don Sahong Hydropower Project, Pak Beng Hydropower Project, and lessons learned from the Srepok River and Se San River case studies commissioned by the MRC under the generous support of development partners around the globe. As adopted, the TbEIA Guidelines have been designed as a supporting mechanism to the national EIA legislation, processes and systems in each Member Country. In recognition of the already agreed mechanisms, the TbEIA Guidelines build on and supplement the agreements stipulated in the 1995 Agreement on the Cooperation for the Sustainable Development of the Mekong River Basin and its Procedural Rules, in addressing potential transboundary environmental impacts of development projects and ensuring mutual benefits from the Mekong River and its resources. Since their adoption in 2022, the TbEIA Guidelines have already been voluntarily implemented by Lao PDR on its under-development Sekong A Downstream Hydropower Project, located on the Sekong River – a major tributary of the Mekong River. While this implementation is ongoing, with results expected in early 2024, it has thus far strengthened cooperation among concerned Member Countries, with multiple successful open dialogues organized at national and regional levels. It is hoped that lessons learnt from this application will lead to a wider application of the TbEIA Guidelines for future water resources development projects in the LMB.
Keywords: transboundary, EIA, Lower Mekong River Basin, Mekong River
Procedia PDF Downloads 37
784 The Association of Work Stress with Job Satisfaction and Occupational Burnout in Nurse Anesthetists
Authors: I. Ling Tsai, Shu Fen Wu, Chen-Fuh Lam, Chia Yu Chen, Shu Jiuan Chen, Yen Lin Liu
Abstract:
Purpose: Following the introduction of the National Health Insurance (NHI) system in Taiwan in 1995, the demand for anesthesia services continues to increase in the operating rooms and other medical units. It has been well recognized that increased work stress not only affects the clinical performance of the medical staff but that long-term workload may also result in occupational burnout. Our study aimed to determine the influence of working environment, work stress and job satisfaction on occupational burnout in nurse anesthetists. The ultimate goal of this research project is to develop a strategy for establishing a friendly, less stressful workplace for nurse anesthetists to enhance their job satisfaction, thereby reducing occupational burnout and increasing the career life of nurse anesthetists. Methods: This was a cross-sectional, descriptive study performed in a metropolitan teaching hospital in southern Taiwan between May 2017 and July 2017. A structured self-administered questionnaire, modified from the Practice Environment Scale of the Nursing Work Index (PES-NWI), Occupational Stress Indicator 2 (OSI-2) and Maslach Burnout Inventory (MBI) manual, was collected from the nurse anesthetists. The relationships between two numeric datasets were analyzed by the Pearson correlation test (SPSS 20.0). Results: A total of 66 completed questionnaires were collected from 75 nurses (response rate 88%). The average scores for the working environment, job satisfaction, and work stress were 69.6%, 61.5%, and 63.9%, respectively. The three perspectives used to assess occupational burnout, namely emotional exhaustion, depersonalization and sense of personal accomplishment, were 26.3, 13.0 and 24.5, respectively, suggesting the presence of moderate to high degrees of burnout in our nurse anesthetists. The presence of occupational burnout was closely correlated with an unsatisfactory working environment (r = -0.385, P = 0.001) and reduced job satisfaction (r = -0.430, P < 0.001). Junior nurse anesthetists (<1-year clinical experience) reported having higher satisfaction with the working environment than the seniors (5 to 10-year clinical experience) (P = 0.02). Although the average scores for work stress, job satisfaction, and occupational burnout were lower in junior nurses, the differences were not statistically significant. In the linear regression model, the working environment was the independent factor that predicted occupational burnout in nurse anesthetists, explaining up to 19.8% of the variance. Conclusions: High occupational burnout is more likely to develop in senior nurse anesthetists who experience a dissatisfying working environment, work stress and lower job satisfaction. In addition to the regulation of clinical duties, the increased workload in the supervision of junior nurse anesthetists may result in emotional stress and burnout in senior nurse anesthetists. Therefore, appropriate adjustment of the clinical and teaching loads of senior nurse anesthetists could be helpful to improve occupational burnout and enhance the retention rate.
Keywords: nurse anesthetists, working environment, work stress, job satisfaction, occupational burnout
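For readers who wish to reproduce the correlation step in outline, the sketch below uses scipy with hypothetical score arrays standing in for the questionnaire data; the generated values are placeholders, not the study's measurements.
```python
import numpy as np
from scipy import stats

# Hypothetical per-nurse scores standing in for the questionnaire data (n = 66)
rng = np.random.default_rng(0)
work_environment = rng.normal(69.6, 10, 66)      # % scores, assumed spread
job_satisfaction = rng.normal(61.5, 10, 66)
emotional_exhaustion = rng.normal(26.3, 8, 66)   # MBI emotional-exhaustion scale

# Pearson correlation between burnout and each workplace measure
for name, scores in [("working environment", work_environment),
                     ("job satisfaction", job_satisfaction)]:
    r, p = stats.pearsonr(scores, emotional_exhaustion)
    print(f"burnout vs {name}: r = {r:.3f}, p = {p:.3f}")
```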
Procedia PDF Downloads 278
783 Threading Professionalism Through Occupational Therapy Curriculum: A Framework and Resources
Authors: Ashley Hobson, Ashley Efaw
Abstract:
Professionalism is an essential skill for clinicians, particularly for Occupational Therapy Providers (OTPs). The World Federation of Occupational Therapy (WFOT) Guiding Principles for Ethical Occupational Therapy and the American Occupational Therapy Association (AOTA) Code of Ethics establish expectations for professionalism among OTPs, emphasizing its importance in the field. However, the teaching and assessment of professionalism vary across OTP programs. The flexibility provided by the country standards allows programs to determine their own approaches to meeting these standards, resulting in inconsistency. Educators in both academic and fieldwork settings face challenges in objectively assessing and providing feedback on student professionalism. Although they observe instances of unprofessional behavior, there is no standardized assessment measure to evaluate professionalism in OTP students. While most students are committed to learning and applying professionalism skills, they enter OTP programs with varying levels of proficiency in this area. Consequently, they lack a uniform understanding of professionalism and lack an objective means to self-assess their current skills and identify areas for growth. It is crucial to explicitly teach professionalism, have students self-assess their professionalism skills, and have OTP educators assess student professionalism. This approach is necessary for fostering students' professionalism journeys. Traditionally, there has been no objective way for students to self-assess their professionalism or for educators to provide objective assessments and feedback. To establish a uniform approach to professionalism, the authors incorporated professionalism content into their curriculum. Utilizing an operational definition of professionalism, the authors integrated professionalism into didactic, fieldwork, and capstone courses. The complexity of the content and the professionalism skills expected of students increase each year to ensure students graduate with the skills to practice in accordance with the WFOT Guiding Principles for Ethical Occupational Therapy Practice and the AOTA Code of Ethics. Two professionalism assessments were developed based on the expectations outlined in both documents. The Professionalism Self-Assessment allows students to evaluate their professionalism, reflect on their performance, and set goals. The Professionalism Assessment for Educators is a modified version of the same tool designed for educators. The purpose of this workshop is to provide educators with a framework and tools for assessing student professionalism. The authors discuss how to integrate professionalism content into OTP curricula and utilize professionalism assessments to provide constructive feedback and equitable learning opportunities for OTP students in academic, fieldwork, and capstone settings. By adopting these strategies, educators can enhance the development of professionalism among OTP students, ensuring they are well-prepared to meet the demands of the profession.
Keywords: professionalism, assessments, student learning, student preparedness, ethical practice
Procedia PDF Downloads 41
782 Spatial Direct Numerical Simulation of Instability Waves in Hypersonic Boundary Layers
Authors: Jayahar Sivasubramanian
Abstract:
Understanding the laminar-turbulent transition process in hypersonic boundary layers is crucial for designing viable high speed flight vehicles. The study of transition becomes particularly important in the high speed regime due to the effect of transition on aerodynamic performance and heat transfer. However, even after many years of research, the transition process in hypersonic boundary layers is still not understood. This lack of understanding of the physics of the transition process is a major impediment to the development of reliable transition prediction methods. Towards this end, spatial Direct Numerical Simulations are conducted to investigate the instability waves generated by a localized disturbance in a hypersonic flat plate boundary layer. In order to model a natural transition scenario, the boundary layer was forced by a short duration (localized) pulse through a hole on the surface of the flat plate. The pulse disturbance developed into a three-dimensional instability wave packet which consisted of a wide range of disturbance frequencies and wave numbers. First, the linear development of the wave packet was studied by forcing the flow with low amplitude (0.001% of the free-stream velocity). The dominant waves within the resulting wave packet were identified as two-dimensional second mode disturbance waves. Hence the wall-pressure disturbance spectrum exhibited a maximum at the spanwise mode number k = 0. The spectrum broadened in the downstream direction and the lower frequency first mode oblique waves were also identified in the spectrum. However, the peak amplitude remained at k = 0 and shifted to lower frequencies in the downstream direction. In order to investigate the nonlinear transition regime, the flow was forced with a higher amplitude disturbance (5% of the free-stream velocity). The developing wave packet grows linearly at first before reaching the nonlinear regime. The wall pressure disturbance spectrum confirmed that the wave packet developed linearly at first. The response of the flow to the high amplitude pulse disturbance indicated the presence of a fundamental resonance mechanism. Lower amplitude secondary peaks were also identified in the disturbance wave spectrum at approximately half the frequency of the high amplitude frequency band, which would be an indication of a sub-harmonic resonance mechanism. The disturbance spectrum indicates, however, that fundamental resonance is much stronger than sub-harmonic resonance.
Keywords: boundary layer, DNS, hypersonic flow, instability waves, wave packet
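As an illustration of how a wall-pressure disturbance spectrum of this kind can be extracted, the sketch below applies Fourier transforms in time and in the spanwise direction to a hypothetical wall-pressure record; the grid sizes, sampling interval, and random signal are assumptions for demonstration only.
```python
import numpy as np

# Hypothetical wall-pressure record p(t, z) at one streamwise station
# (nt time samples, nz spanwise points; values assumed for illustration)
nt, nz, dt = 2048, 128, 1.0e-7
p_wall = np.random.randn(nt, nz)          # placeholder for DNS data

# Transform in time (frequency f) and in the spanwise direction (mode number k)
spec = np.fft.rfft(p_wall, axis=0)        # frequency content
spec = np.fft.fft(spec, axis=1)           # spanwise wavenumber content
amp = np.abs(spec) / nt

freqs = np.fft.rfftfreq(nt, d=dt)         # physical frequencies in Hz
k_modes = np.fft.fftfreq(nz) * nz         # integer spanwise mode numbers

# Dominant two-dimensional (second-mode) waves would appear as a peak at k = 0
k0 = np.where(k_modes == 0)[0][0]
print("peak frequency at k = 0:", freqs[np.argmax(amp[:, k0])])
```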
Procedia PDF Downloads 183
781 Academic Goal Setting Practices of University Students in Lagos State, Nigeria: Implications for Counselling
Authors: Asikhia Olubusayo Aduke
Abstract:
Students' inability to set data-based (specific, measurable, attainable, reliable, and time-bound) personal improvement goals threatens their academic success. Hence, the study aimed to investigate year-one students' academic goal-setting practices at Lagos State University of Education, Nigeria. Descriptive survey research was used in carrying out this study. The study population consisted of 3,101 year-one students of the University. A sample size of five hundred and one (501) participants was selected through a proportional and simple random sampling technique. The Formative Goal Setting Questionnaire (FGSQ) developed by Research Collaboration (2015) was adapted and used as an instrument for the study. Two main research questions were answered, while two null hypotheses were formulated and tested for the study. The study revealed higher data-based goals for all students than personal improvement goals. Nevertheless, data-based and personal improvement goal-setting for female students was higher than for male students. A one-sample test statistic and ANOVA used to analyse data for the two hypotheses also revealed that the mean difference between male and female year-one students' data-based and personal improvement goal-setting formation was statistically significant (p < 0.05). This means year-one students' data-based and personal improvement goals showed significant gender differences. Based on the findings of this study, it was recommended, among other things, that therapeutic techniques that can help to change students' faulty thinking and challenge their lack of desire for personal improvement should be sought to treat students who have problems with setting high personal improvement goals. Counsellors also need to advocate continued research into how to increase the goal-setting ability of male students and should focus more on counselling male students on goal setting. The main contribution of the study is that higher institutions must prioritize early intervention in first-year students' academic goal setting. Researching gender differences in this practice reveals a crucial insight: male students often lag behind in setting meaningful goals, impacting their motivation and performance. Focusing on this demographic with data-driven personal improvement goals can be transformative. By promoting goal setting that is specific, measurable, and focused on self-growth (rather than competition), male students can unlock their full potential. Researchers and counsellors play a vital role in detecting and supporting students with lower goal-setting tendencies. By prioritizing this intervention, we can empower all students to set ambitious, personalized goals that ignite their passion for learning and pave the way for academic success.
Keywords: academic goal setting, counselling, practice, university, year one students
Procedia PDF Downloads 61
780 Modeling of Anode Catalyst against CO in Fuel Cell Using Material Informatics
Authors: M. Khorshed Alam, H. Takaba
Abstract:
The catalytic properties of a metal usually change upon intermixing with another metal in polymer electrolyte fuel cells. The Pt-Ru alloy is one of the most widely discussed alloys for enhancing CO oxidation. In this work, we have investigated the CO coverage on the Pt2Ru3 nanoparticle with different atomic conformations of Pt and Ru using a combination of material informatics and computational chemistry. Density functional theory (DFT) calculations were used to describe the adsorption strength of CO and H for different conformations of the Pt/Ru ratio on the Pt2Ru3 slab surface. Then, through Monte Carlo (MC) simulations, we examined the segregation behaviour of Pt as a function of the surface atom ratio, subsurface atom ratio, and particle size of the Pt2Ru3 nanoparticle. We have constructed a regression equation so as to reproduce the results of DFT only from the structural descriptors. Descriptors were selected for the regression equation; xa-b indicates the number of bonds between targeted atom a and neighboring atom b in the same layer (a,b = Pt or Ru). Terms of xa-H2 and xa-CO represent the number of atoms a binding H2 and CO molecules, respectively. xa-S is the number of atoms a on the surface. xa-b- is the number of bonds between atom a and neighboring atom b located outside the layer. The surface segregation in the alloying nanoparticles is influenced by their component elements, composition, crystal lattice, shape, size, nature of the adsorbates and their pressure, temperature, etc. Simulations were performed on nanoparticles of different sizes (2.0 nm, 3.0 nm), consisting of Pt and Ru atoms in different conformations, at a temperature of 333 K. In addition to the Pt2Ru3 alloy, we also considered pure Pt and Ru nanoparticles to compare the surface coverage by adsorbates (H2, CO). Hence, we assumed the pure and Pt-Ru alloy nanoparticles have an fcc crystal structure as well as a cubo-octahedron shape, which is bounded by (111) and (100) facets. Simulations were performed up to 50 million MC steps. From the results of MC, in the presence of gases (H2, CO), the surfaces are occupied by the gas molecules. In the equilibrium structure, the coverage of H and CO depends on the nature of the surface atoms. In the initial structure, the Pt/Ru ratios on the surfaces for different cluster sizes were in the range of 0.50-0.95. MC simulation was employed when the partial pressures of H2 (PH2) and CO (PCO) were 70 kPa and 100-500 ppm, respectively. The Pt/Ru ratios decrease as the CO concentration increases, with only a slight exception for the small nanoparticle. The adsorption strength of CO on the Ru site is higher than on the Pt site, which would be one of the reasons for the decreasing Pt/Ru ratio on the surface. Therefore, our study identifies that the nanoparticle size, composition, conformation of alloying atoms, and the concentration and chemical potential of the adsorbates affect the stability of nanoparticle alloys and, ultimately, the overall catalytic performance during operation.
Keywords: anode catalysts, fuel cells, material informatics, Monte Carlo
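A minimal sketch of how such a descriptor-based regression could be fitted is shown below, with invented bond-count descriptors and DFT energies standing in for the real data; the descriptor names and values are placeholders, not the study's dataset.
```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder data: each row is one DFT-calculated configuration, each column
# one structural descriptor (e.g. x_Pt-Pt, x_Pt-Ru, x_Pt-S, x_Pt-CO, ...)
descriptor_names = ["x_Pt-Pt", "x_Pt-Ru", "x_Ru-Ru", "x_Pt-S", "x_Ru-S", "x_Pt-CO", "x_Ru-CO"]
X = np.random.randint(0, 12, size=(40, len(descriptor_names)))   # bond/atom counts
E_ads = np.random.uniform(-2.0, -0.5, size=40)                   # adsorption energies (eV)

# Fit the regression equation that reproduces the DFT energies from descriptors only
model = LinearRegression().fit(X, E_ads)
for name, coeff in zip(descriptor_names, model.coef_):
    print(f"{name}: {coeff:+.3f} eV per unit")
print("intercept:", model.intercept_)

# The fitted equation can then supply configuration energies inside the Monte
# Carlo segregation simulation instead of repeated DFT calls.
E_predicted = model.predict(X[:1])
```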
Procedia PDF Downloads 192
779 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence
Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej
Abstract:
In this study, virtual meters will be designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Due to the inability to manage and monitor these systems, many HVAC systems have a high level of inefficiency and energy wastage. Virtual meters are implemented and applied in an actual HVAC system, and the result confirms the practicality of mathematical sensors as an alternative means of energy measurement. While most residential buildings and offices are not equipped with advanced sensors, adding, operating, and monitoring sensors and measurement devices in the existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on available sensors and without any physical energy meters. It demonstrates the performance of virtual meters in HVAC systems as reliable measurement devices. To demonstrate this concept, mathematical models are created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models will be created and integrated with the system's historical data and physical spot measurements. The actual measurements will be investigated to prove the models' accuracy. Based on preliminary analysis, the resulting mathematical models are successful in plotting energy consumption patterns, and it is concluded confidently that the results of the virtual meter will be close to the results that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in the building's HVAC systems to improve energy management and efficiency. Through a data mining approach, the virtual meters' data are recorded as historical data, and HVAC system energy consumption prediction is also implemented in order to capture substantial energy savings and manage the demand and supply chain effectively. Energy prediction can lead to energy-saving strategies and considerations that open a window for predictive control in order to reach lower energy consumption. To address these challenges, energy prediction could optimize the HVAC system and automate energy consumption to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency that would allow a quick and efficient response to energy consumption and cost spikes in the energy market.
Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction
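As a concrete illustration of a virtual meter, the sketch below estimates the sensible heating energy delivered by an air handling unit from airflow and temperature sensors that typically already exist; the constants and sample readings are assumptions rather than the actual AHU-07 point list.
```python
# Virtual meter sketch: sensible heat delivered by an AHU, computed from
# sensors that usually already exist (airflow, supply and return temperature).
RHO_AIR = 1.2      # kg/m^3, assumed air density
CP_AIR = 1.005     # kJ/(kg*K), specific heat of air

def ahu_heating_power_kw(airflow_m3_s, t_supply_c, t_return_c):
    """Sensible heating power in kW from an energy-balance virtual meter."""
    mass_flow = RHO_AIR * airflow_m3_s              # kg/s
    return mass_flow * CP_AIR * (t_supply_c - t_return_c)

# Example reading (hypothetical values, not actual AHU-07 data)
power_kw = ahu_heating_power_kw(airflow_m3_s=4.5, t_supply_c=32.0, t_return_c=21.0)
energy_kwh = power_kw * 0.25      # integrate over a 15-minute logging interval
print(f"virtual meter: {power_kw:.1f} kW, {energy_kwh:.2f} kWh this interval")
```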
Procedia PDF Downloads 104
778 The Feasibility of Ratification of the United Nation Convention on Contracts for International Sale of Goods by Islamic Countries, Saudi Arabia as a Case
Authors: Ibrahim M. Alwehaibi
Abstract:
Recently, the windows of globalization have opened widely, increasing trade between Western countries and Muslim nations. Sale of goods contracts are among the most common business transactions in the world. This commercial exchange has faced many obstacles. One of the most significant obstacles is the conflict between laws. Thus, the United Nations created the Convention on Contracts for the International Sale of Goods (CISG). Some Islamic countries have ratified the CISG, while other Islamic countries have concerns about the feasibility of ratifying the CISG, and many businessmen have concerns about the application of the convention. The concerns relate to the conflict between the CISG and Sharia and to the long debate about the success, ambiguity, and stability of the CISG. Therefore, this research will examine the feasibility for Muslim countries and Muslim businessmen of adopting the CISG through the following steps: First, this research will introduce Sharia law (Islamic contract law) and the CISG and provide background on both laws. Second, this research will compare the provisions of the CISG and Sharia, identify the conflicts, and provide possible solutions for the conflicts. Third, this study will examine the advantages and disadvantages of adopting the CISG and assess the success of the CISG. Fourth, this study will explore the current situation in Islamic countries by taking Saudi Arabia as a case, examining how the application of Sharia law works, the possibility of enforcing the CISG, and the current practice of foreign sales in Saudi Arabia. The research finds that there are some conflicts between the CISG and Sharia law. The most notable conflicts are interest and uncertainty in considerations. Also, this research finds that ratification of the CISG does not appear beneficial for Muslim countries because the convention has not reached its goal, which is uniformity of laws. Moreover, the CISG has been excluded and ignored by businessmen and some courts. Additionally, this research finds that it could be possible to enforce the CISG in Saudi Arabia, provided that there is no conflict between the enforced provision and Sharia law. This study follows comparative and analytical methodologies to reach its findings. The researcher analyzes the provisions of the CISG, compares them with Sharia rules, and identifies the conflicts and compatibilities. In fact, the CISG has 101 articles, so a comprehensive comparison of all articles in the CISG with Sharia is difficult. Thus, in order to deeply analyze all aspects of this issue, this study will exclude some areas of contract which have been discussed by other researchers, such as delivery of goods, conformity, and the mirror image rule. The comparative section of this study will focus on the articles of greatest concern that conflict, or may conflict, with Sharia: interest, uncertainty, the statute of limitations, specific performance, and the passing of risk.
Keywords: Sharia, CISG, Contracts for International Sale of Goods, contracts, sale of goods, Saudi Arabia
Procedia PDF Downloads 151
777 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
To develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 participants aged 20 and older, who were non-pregnant, from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: Logistic Regression (baseline), Random Forest Classifier, Gradient Boosting Machine (GBM), Support Vector Machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The Gradient Boosting Machine (GBM) model outperformed other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors: Age: The most significant predictor, with diabetes prevalence increasing with age, peaking around the 60s for males and 70s for females. BMI: Higher BMI was strongly associated with a higher risk of diabetes. Ethnicity: Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%). TCHOL: Diabetics had lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL). Smoking: Smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine
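The modelling pipeline described above can be outlined as follows, assuming the NHANES variables have already been merged into a feature matrix; the column names, random data, and simplified survey-weight handling are placeholders, not the study's processed dataset.
```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder feature matrix standing in for the merged NHANES 2009-2018 data
cols = ["age", "sex", "ethnicity", "bmi", "leg_length", "tchol", "smoker"]
X = pd.DataFrame(np.random.rand(1000, len(cols)), columns=cols)
y = np.random.randint(0, 2, 1000)              # 1 = diabetes by fasting glucose
weights = np.random.uniform(0.5, 2.0, 1000)    # NHANES sampling weights (simplified)

X_tr, X_te, y_tr, y_te, w_tr, w_te = train_test_split(
    X, y, weights, test_size=0.3, random_state=0)

gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(X_tr, y_tr, sample_weight=w_tr)        # weighted fit approximates the survey design
print("AUC-ROC:", roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1]))

# SHAP values interpret how age, BMI, ethnicity, etc. drive the predictions
explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X_te)
```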
Procedia PDF Downloads 8
776 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue
Abstract:
Recent technology advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum that is available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems by exploiting array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which only digital precoders are essential to accomplish precoding, MIMO technology seems to be different at mmWave because of digital precoding limitations. Moreover, precoding requires a greater number of radio frequency (RF) chains, supporting more signal mixers and analog-to-digital converters. As RF chain cost and power consumption increase, we need to resort to another alternative. Although the hybrid precoding architecture has been regarded as the best solution, based on a combination of a baseband precoder and an RF precoder, we still lack the optimal design of hybrid precoders. According to the mapping strategies from RF chains to the different antenna elements, there are two main categories of hybrid precoding architecture. As a hybrid precoding sub-array architecture, the partially-connected structure reduces hardware complexity by using fewer phase shifters, whereas it sacrifices some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a problem of matrix factorization. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces the computational complexity. Furthermore, valuable design insights are provided when we use the proposed algorithm to make simulation comparisons between the hybrid precoding partially-connected structure and the fully-connected structure.
Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure
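A minimal sketch of an alternating-minimization loop for the factorization F ≈ F_RF F_BB is given below, assuming a partially-connected array in which each RF chain drives its own sub-array; the phase-only projection used in the analog step is a simplified stand-in for the iterative hard thresholding update proposed in the paper, and all matrix sizes are illustrative assumptions.
```python
import numpy as np

Nt, Nrf, Ns = 32, 4, 2          # antennas, RF chains, data streams (assumed sizes)
sub = Nt // Nrf                 # sub-array size in the partially-connected structure

# Target fully-digital precoder to be factorized (random placeholder)
F_opt = (np.random.randn(Nt, Ns) + 1j * np.random.randn(Nt, Ns)) / np.sqrt(2)

# Mask of allowed analog entries: each RF chain is wired only to its sub-array
mask = np.zeros((Nt, Nrf), dtype=bool)
for r in range(Nrf):
    mask[r * sub:(r + 1) * sub, r] = True

F_rf = np.exp(1j * 2 * np.pi * np.random.rand(Nt, Nrf)) * mask

for _ in range(50):                                            # alternating minimization
    F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)        # digital (baseband) step
    target = F_opt @ F_bb.conj().T                             # analog step: keep only the
    F_rf = np.where(mask, np.exp(1j * np.angle(target)), 0)    # phase of allowed entries

err = np.linalg.norm(F_opt - F_rf @ F_bb, "fro") / np.linalg.norm(F_opt, "fro")
print("relative factorization error:", err)
```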
Procedia PDF Downloads 321
775 The Diagnostic Utility and Sensitivity of the Xpert® MTB/RIF Assay in Diagnosing Mycobacterium tuberculosis in Bone Marrow Aspirate Specimens
Authors: Nadhiya N. Subramony, Jenifer Vaughan, Lesley E. Scott
Abstract:
In South Africa, the World Health Organisation estimated 454,000 new cases of Mycobacterium tuberculosis (M.tb) infection (MTB) in 2015. Disseminated tuberculosis arises from the haematogenous spread and seeding of the bacilli in extrapulmonary sites. The gold standard for the detection of MTB in bone marrow is TB culture, which has an average turnaround time of 6 weeks. Histological examinations of trephine biopsies to diagnose MTB also have a time delay owing mainly to the 5-7 day processing period prior to microscopic examination. Adding to the diagnostic delay is the non-specific nature of granulomatous inflammation, which is the hallmark of MTB involvement of the bone marrow. A Ziehl-Neelsen stain (which highlights acid-fast bacilli) is therefore mandatory to confirm the diagnosis but can take up to 3 days for processing and evaluation. Owing to this delay in diagnosis, many patients are lost to follow up or remain untreated whilst results are awaited, thus encouraging the spread of undiagnosed TB. The Xpert® MTB/RIF (Cepheid, Sunnyvale, CA) is the molecular test used in the South African national TB program as the initial diagnostic test for pulmonary TB. This study investigates the optimisation and performance of the Xpert® MTB/RIF on bone marrow aspirate specimens (BMA), a first since the introduction of the assay in the diagnosis of extrapulmonary TB. BMA received for immunophenotypic analysis as part of the investigation into disseminated MTB or in the evaluation of cytopenias in immunocompromised patients were used. Processing BMA on the Xpert® MTB/RIF was optimised to ensure bone marrow in EDTA and heparin did not inhibit the PCR reaction. Inactivated M.tb was spiked into the clinical bone marrow specimen and distilled water (as a control). A volume of 500 μl and an incubation time of 15 minutes with sample reagent were investigated as the processing protocol. A total of 135 BMA specimens had sufficient residual volume for Xpert® MTB/RIF testing; however, 22 specimens (16.3%) were not included in the final statistical analysis as an adequate trephine biopsy and/or TB culture was not available. Xpert® MTB/RIF testing was not affected by BMA material in the presence of heparin or EDTA, but the overall detection of MTB in BMA was low compared to histology and culture. Sensitivity of the Xpert® MTB/RIF compared to both histology and culture was 8.7% (95% confidence interval (CI): 1.07-28.04%) and sensitivity compared to histology only was 11.1% (95% CI: 1.38-34.7%). Specificity of the Xpert® MTB/RIF was 98.9% (95% CI: 93.9-99.7%). Although the Xpert® MTB/RIF generates a faster result than histology and TB culture and is less expensive than culture and drug susceptibility testing, the low sensitivity of the Xpert® MTB/RIF precludes its use for the diagnosis of MTB in bone marrow aspirate specimens and warrants alternative/additional testing to optimise the assay.
Keywords: bone marrow aspirate, extrapulmonary TB, low sensitivity, Xpert® MTB/RIF
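For reference, sensitivity and specificity figures of this kind can be computed with exact (Clopper-Pearson) confidence intervals as sketched below; the counts passed to the function are hypothetical illustrations, not the study's raw data.
```python
from statsmodels.stats.proportion import proportion_confint

def diagnostic_performance(tp, fn, tn, fp, alpha=0.05):
    """Sensitivity and specificity with exact (Clopper-Pearson) confidence intervals."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    sens_ci = proportion_confint(tp, tp + fn, alpha=alpha, method="beta")
    spec_ci = proportion_confint(tn, tn + fp, alpha=alpha, method="beta")
    return sens, sens_ci, spec, spec_ci

# Hypothetical counts for illustration only (not the study's 2x2 table)
sens, sens_ci, spec, spec_ci = diagnostic_performance(tp=2, fn=21, tn=89, fp=1)
print(f"sensitivity {sens:.1%}, 95% CI {sens_ci}")
print(f"specificity {spec:.1%}, 95% CI {spec_ci}")
```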
Procedia PDF Downloads 171
774 Enzymatic Hydrolysis of Sugar Cane Bagasse Using Recombinant Hemicellulases
Authors: Lorena C. Cintra, Izadora M. De Oliveira, Amanda G. Fernandes, Francieli Colussi, Rosália S. A. Jesuíno, Fabrícia P. Faria, Cirano J. Ulhoa
Abstract:
Xylan is the main component of hemicellulose, and its complete degradation requires the cooperative action of a system consisting of several enzymes, including endo-xylanases (XYN), β-xylosidases (XYL) and α-L-arabinofuranosidases (ABF). The recombinant hemicellulolytic enzymes, an endoxylanase (HXYN2), a β-xylosidase (HXYLA), and an α-L-arabinofuranosidase (ABF3), were used in hydrolysis tests. These three enzymes are produced by filamentous fungi and were previously expressed heterologously in Pichia pastoris. The aim of this work was to evaluate the effect of recombinant hemicellulolytic enzymes on the enzymatic hydrolysis of sugarcane bagasse (SCB). The hydrolysis of SCB pre-treated by steam explosion was performed with different concentrations and ratios of HXYN2, HXYLA and ABF3, according to a 2³ central composite rotational design (CCRD) including six axial points and six central points, totaling 20 assays. The influence of the factors was assessed by analyzing the main effects and interaction between the factors, calculated using Statistica 8.0 software (StatSoft Inc. Tulsa, OK, USA). The Pareto chart was constructed with this software and showed the values of the Student's t test for each recombinant enzyme. The quantification of reducing sugars by DNS (mg/mL) was considered as the response variable. The Pareto chart showed that the recombinant enzyme ABF3 exerted the most significant effect during SCB hydrolysis, both at higher concentrations and at the lowest concentration of this enzyme. Analysis of variance (ANOVA) according to the Fisher method was performed. In the ANOVA with the release of reducing sugars (mg/mL) as the response variable, the concentration of ABF3 was significant during SCB hydrolysis. The result obtained by ANOVA is in accordance with those presented in the analysis method based on the Student's t statistic (Pareto chart). The degradation of the central chain of xylan by HXYN2 and HXYLA was more strongly influenced by ABF3 action. A model was obtained that describes the performance of the interaction of all three enzymes for the release of reducing sugars and can be used to better explain the results of the statistical analysis. The formulation capable of releasing the highest levels of reducing sugars had the following concentrations: HXYN2 at 600 U/g of substrate, HXYLA at 11.5 U/g and ABF3 at 0.32 U/g. In conclusion, the recombinant enzyme with the most significant effect during SCB hydrolysis was ABF3. It is noteworthy that the xylan present in SCB is arabinoglucuronoxylan; for this reason, debranching enzymes are important to allow access of the enzymes that act on the central chain.
Keywords: experimental design, hydrolysis, recombinant enzymes, sugar cane bagasse
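To make the experimental-design step more concrete, the sketch below fits a second-order response-surface model to a 2³ central composite rotational design with six axial and six centre points (20 assays); the coded enzyme doses and reducing-sugar responses are invented placeholders, not the study's measurements.
```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Coded enzyme doses (HXYN2, HXYLA, ABF3): factorial, axial, and centre points
factorial = np.array([[i, j, k] for i in (-1, 1) for j in (-1, 1) for k in (-1, 1)])
axial = np.array([[a if d == axis else 0 for d in range(3)]
                  for axis in range(3) for a in (-1.68, 1.68)])
centre = np.zeros((6, 3))
X = np.vstack([factorial, axial, centre])             # 20 assays in total

y = np.random.uniform(1.0, 4.0, len(X))               # reducing sugars (mg/mL), placeholder

# Quadratic response-surface model: y = b0 + sum(bi xi) + sum(bij xi xj) + sum(bii xi^2)
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)
terms = poly.get_feature_names_out(["HXYN2", "HXYLA", "ABF3"])
print(dict(zip(terms, model.coef_.round(3))))
```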
Procedia PDF Downloads 229
773 Self-Sensing Concrete Nanocomposites for Smart Structures
Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi
Abstract:
In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures, through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquakes. While traditional sensors can be applied only in a limited number of points, providing partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within the concrete elements, transforming the structures themselves into sets of widespread sensors. This paper is aimed at presenting the results of a research about a new self-sensing nanocomposite and about the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite has been obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored. Among conductive carbon nanofillers, carbon nanotubes seem to be particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to the nanofiller dispersion or to the influence of the amount of nano-inclusions in the cement matrix need to be carefully investigated: the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mix, the electrical properties of the hardened composites and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring
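The self-sensing principle described above reduces to tracking the fractional change of electrical resistance with strain; a minimal sketch with an assumed gauge factor and hypothetical resistance readings is given below.
```python
# Piezoresistive self-sensing sketch: strain from the fractional resistance change.
# The gauge factor and readings below are assumed values for illustration only.
GAUGE_FACTOR = 150.0        # assumed sensitivity of the CNT-cement composite
R_UNLOADED = 12.0e3         # ohms, baseline electrical resistance of the sensor

def strain_from_resistance(r_measured, r0=R_UNLOADED, gf=GAUGE_FACTOR):
    """Axial strain estimated from delta_R / R0 = GF * strain."""
    return (r_measured - r0) / r0 / gf

for r in (11.97e3, 11.94e3, 11.91e3):     # resistance drops under compression
    eps = strain_from_resistance(r)
    print(f"R = {r:.0f} ohm -> strain = {eps * 1e6:.0f} microstrain")
```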
Procedia PDF Downloads 227
772 Field Synergy Analysis of Combustion Characteristics in the Afterburner of Solid Oxide Fuel Cell System
Authors: Shing-Cheng Chang, Cheng-Hao Yang, Wen-Sheng Chang, Chih-Chia Lin, Chun-Han Li
Abstract:
The solid oxide fuel cell (SOFC) is a promising green technology which can achieve a high electrical efficiency. Due to the high operating temperature of the SOFC stack, the off-gases at high temperature from the anode and cathode outlets are introduced into an afterburner to convert the chemical energy into thermal energy by combustion. The heat is recovered to preheat the fresh air and fuel gases before they pass through the stack during operation of the SOFC power generation system. For an afterburner of the SOFC system, temperature control with good thermal uniformity is important. A burner with a well-designed geometry can usually achieve satisfactory performance. To design an afterburner for an SOFC system, computational fluid dynamics (CFD) simulation can be adopted. In this paper, the hydrogen combustion characteristics in an afterburner with simple geometry are studied by using CFD. The burner consists of a cylindrical chamber with a fuel gas inlet, an air inlet, and an exhaust outlet. The flow field and temperature distributions inside the afterburner under different fuel and air flow rates are analyzed. To improve the temperature uniformity of the afterburner during the SOFC system operation, the flow paths of the anode/cathode off-gases are varied by changing the positions of the fuel and air inlet channels to improve the heat and flow field synergy in the burner furnace. Because the air flow rate is much larger than that of the fuel gas, the flow structure and heat transfer in the afterburner are dominated by the air flow path. The present work studied the effects of fluid flow structures on the combustion characteristics of an SOFC afterburner using three simulation models with a cylindrical combustion chamber and a tapered outlet. All walls in the afterburner are assumed to be no-slip and adiabatic. In each case, two sets of parameters are simulated to study the transport phenomena of hydrogen combustion. The equivalence ratios are in the range of 0.08 to 0.1. Finally, the pattern factor for the simulation cases is calculated to investigate the effect of gas inlet locations on the temperature uniformity of the SOFC afterburner. The results show that the temperature uniformity of the exhaust gas can be improved by simply adjusting the position of the gas inlet. The field synergy analysis indicates that the fluid flow paths should be designed in a way that contributes significantly to the heat transfer, i.e., the field synergy angle should be as small as possible. In the study cases, the averaged synergy angle of the burner is about 85°, 84°, and 81°, respectively.
Keywords: afterburner, combustion, field synergy, solid oxide fuel cell
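The field synergy angle quoted above is the local angle between the velocity vector and the temperature gradient; the sketch below shows how it can be evaluated from CFD cell data, using a made-up cell as input.
```python
import numpy as np

def synergy_angle_deg(velocity, grad_T):
    """Local field synergy angle: cos(theta) = (U . gradT) / (|U| |gradT|)."""
    u = np.asarray(velocity, dtype=float)
    g = np.asarray(grad_T, dtype=float)
    cos_theta = np.dot(u, g) / (np.linalg.norm(u) * np.linalg.norm(g))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical CFD cell: velocity vector (m/s) and temperature gradient (K/m)
print(synergy_angle_deg([2.0, 0.5, 0.0], [10.0, 400.0, 0.0]))
# Averaging this angle over all cells (weighted, e.g., by cell volume) gives a
# burner-level value comparable to the 81-85 degrees reported above.
```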
Procedia PDF Downloads 137771 Arguments against Innateness of Theory of Mind
Authors: Arkadiusz Gut, Robert Mirski
Abstract:
The nativist-constructivist debate constitutes a considerable part of current research on mindreading. Peter Carruthers and his colleagues are known for their nativist position in the debate and take issue with constructivist views proposed by other researchers, with Henry Wellman, Alison Gopnik, and Ian Apperly at the forefront. More specifically, Carruthers together with Evan Westra propose a nativistic explanation of Theory of Mind Scale study results that Wellman et al. see as supporting constructivism. While allowing for development of the innate mindreading system, Westra and Carruthers base their argumentation essentially on a competence-performance gap, claiming that cross-cultural differences in Theory of Mind Scale progression as well as discrepancies between infants’ and toddlers’ results on verbal and non-verbal false-belief tasks are fully explainable in terms of acquisition of other, pragmatic, cognitive developments, which are said to allow for an expression of the innately present Theory of Mind understanding. The goal of the present paper is to bring together arguments against the view offered by Westra and Carruthers. It will be shown that even though Carruthers et al.’s interpretation has not been directly controlled for in Wellman et al.’s experiments, there are serious reasons to dismiss such nativistic views which Carruthers et al. advance. The present paper discusses the following issues that undermine Carruthers et al.’s nativistic conception: (1) The concept of innateness is argued to be developmentally inaccurate; it has been dropped in many biological sciences altogether and many developmental psychologists advocate for doing the same in cognitive psychology. Reality of development is a complex interaction of changing elements that is belied by the simplistic notion of ‘the innate.’ (2) The purported innate mindreading conceptual system posited by Carruthers ascribes adult-like understanding to infants, ignoring the difference between first- and second-order understanding, between what can be called ‘presentation’ and ‘representation.’ (3) Advances in neurobiology speak strongly against any inborn conceptual knowledge; neocortex, where conceptual knowledge finds its correlates, is said to be largely equipotential at birth. (4) Carruthers et al.’s interpretations are excessively charitable; they extend results of studies done with 15-month-olds to conclusions about innateness, whereas in reality at that age there has been plenty of time for construction of the skill. (5) Looking-time experiment paradigm used in non-verbal false belief tasks that provide the main support for Carruthers’ argumentation has been criticized on methodological grounds. In the light of the presented arguments, nativism in theory of mind research is concluded to be an untenable position.Keywords: development, false belief, mindreading, nativism, theory of mind
Procedia PDF Downloads 210770 Structure Clustering for Milestoning Applications of Complex Conformational Transitions
Authors: Amani Tahat, Serdal Kirmizialtin
Abstract:
Trajectory fragment methods such as Markov State Models (MSM), Milestoning (MS), and Transition Path Sampling are the prime choice for extending the timescale of all-atom Molecular Dynamics simulations. In these approaches, a set of structures that covers the accessible phase space has to be chosen a priori using cluster analysis. Structural clustering serves to partition the conformational state into natural subgroups based on their similarity, an essential statistical methodology that is used for analyzing numerous sets of empirical data produced by Molecular Dynamics (MD) simulations. The local transition kernels among these clusters are later used to connect the metastable states using a Markovian kinetic model in MSM and a non-Markovian model in MS. The choice of clustering approach in constructing such kernels is crucial, since the high dimensionality of the biomolecular structures might easily confuse the identification of clusters when using the traditional hierarchical clustering methodology. Of particular interest, in the case of MS, where the milestones are very close to each other, accurate determination of the milestone identity of the trajectory becomes a challenging issue. Throughout this work we present two cluster analysis methods applied to the cis–trans isomerism of the dinucleotide AA. The choice of nucleic acids over the commonly used proteins for the cluster analysis is twofold: i) the energy landscape is rugged, hence transitions are more complex, enabling a more realistic model to study conformational transitions; ii) the conformational space of nucleic acids is high dimensional. A diverse set of internal coordinates is necessary to describe the metastable states in nucleic acids, posing a challenge in studying the conformational transitions. Hence, we need improved clustering methods that accurately identify the AA structure in its metastable states in a robust way for a wide range of confused data conditions. The single-linkage approach of the hierarchical clustering available in the GROMACS MD package is the first clustering methodology applied to our data. The Self-Organizing Map (SOM) neural network, also known as a Kohonen network, is the second data clustering methodology. The performance of the neural network and of the hierarchical clustering method is compared by computing the mean first passage times for the cis-trans conformational rates. Our hope is that this study provides insight into the complexities involved in determining the appropriate clustering algorithm for kinetic analysis. Our results can improve the effectiveness of decisions based on clustering confused empirical data in studying conformational transitions in biomolecules. Keywords: milestoning, self organizing map, single linkage, structure clustering
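As an illustration of the two clustering routes compared here, the sketch below applies single-linkage hierarchical clustering (the GROMACS-style route, here via SciPy) and a minimal hand-written 1-D Kohonen map to placeholder feature vectors standing in for internal coordinates of dinucleotide AA snapshots. The data, map size, and training schedule are assumptions for illustration, not the settings used in the study.

```python
# Two clustering routes on hypothetical conformational feature vectors.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))          # placeholder internal-coordinate features

# Route 1: single-linkage hierarchical clustering with a distance cutoff
Z = linkage(X, method="single")
labels_hier = fcluster(Z, t=3.0, criterion="distance")

# Route 2: a minimal 1-D self-organizing map (Kohonen network)
def train_som(data, n_nodes=6, epochs=30, lr0=0.5, sigma0=2.0):
    nodes = data[rng.choice(len(data), n_nodes, replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))   # best-matching unit
            grid_dist = np.abs(np.arange(n_nodes) - bmu)         # distance on the map
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))     # neighborhood function
            nodes += lr * h[:, None] * (x - nodes)
    return nodes

nodes = train_som(X)
labels_som = np.argmin(np.linalg.norm(X[:, None, :] - nodes[None], axis=2), axis=1)
print(len(set(labels_hier)), len(set(labels_som)))
```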
Procedia PDF Downloads 224769 Designing Metal Organic Frameworks for Sustainable CO₂ Utilization
Authors: Matthew E. Potter, Daniel J. Stewart, Lindsay M. Armstrong, Pier J. A. Sazio, Robert R. Raja
Abstract:
Rising CO₂ levels in the atmosphere mean that CO₂ is a highly desirable feedstock. This requires specific catalysts to be designed to activate this inert molecule, combining a catalytic site tailored for CO₂ transformations with a support that can readily adsorb CO₂. Metal organic frameworks (MOFs) are regularly used as CO₂ sorbents. The organic nature of the linker molecules connecting the metal nodes offers many post-synthesis modifications to introduce catalytic active sites into the frameworks. Moreover, the metal nodes may be coordinatively unsaturated, allowing them to bind organic moieties. Imidazoles have shown promise in catalyzing the formation of cyclic carbonates from epoxides with CO₂. Typically, this synthesis route employs toxic reagents such as phosgene, liberating HCl. Therefore, an alternative route using CO₂ is highly appealing. In this work we design active sites for CO₂ activation by tethering substituted-imidazole organocatalytic species to the available Cr³⁺ metal nodes of a Cr-MIL-101 MOF, for the first time, to create a tailored species for carbon capture utilization applications. Our tailored design strategy, combining a CO₂ sorbent, Cr-MIL-101, with an anchored imidazole, results in a highly active and selective multifunctional catalyst, achieving turnover frequencies of over 750 h⁻¹. These findings demonstrate the synergy between the MOF framework and imidazoles for CO₂ utilization applications. Further, the effect of substrate variation has been explored, yielding mechanistic insights into this process. Through characterization, we show that the structural and compositional integrity of the Cr-MIL-101 has been preserved upon functionalization with the imidazoles. Further, we show the binding of the imidazoles to the Cr³⁺ metal nodes. This can be seen through our EPR study, where the distortion of the Cr³⁺ on binding to the imidazole shows that the CO₂ binding site is close to the active imidazole. This has a synergistic effect, improving catalytic performance. We believe the combination of MOF support and organocatalyst opens many possibilities to generate new multifunctional catalysts for CO₂ utilization. In conclusion, we have validated our design procedure, combining a known CO₂ sorbent with an active imidazole species to create a unique tailored multifunctional catalyst for CO₂ utilization. This species achieves high activity and selectivity for the formation of cyclic carbonates and offers a sustainable alternative to traditional synthesis methods. This work represents a unique design strategy for CO₂ utilization while offering exciting possibilities for further work in characterization, computational modelling, and post-synthesis modification. Keywords: carbonate, catalysis, MOF, utilisation
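The turnover frequency quoted above is, in general, the moles of cyclic carbonate formed per mole of anchored imidazole sites per hour. The short sketch below only illustrates that arithmetic; the quantities are invented placeholders, not the study's data.

```python
# Turnover frequency (TOF) = mol product / (mol active sites * time).
# All numbers below are placeholders chosen to illustrate the calculation.

def turnover_frequency(mol_product, mol_active_sites, time_h):
    """TOF in h^-1 for a batch reaction."""
    return mol_product / (mol_active_sites * time_h)

# e.g. 1.5 mmol cyclic carbonate in 2 h over 1.0 umol of anchored imidazole sites
print(turnover_frequency(1.5e-3, 1.0e-6, 2.0))  # 750.0 h^-1
```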
Procedia PDF Downloads 180768 Analysis of Correlation Between Manufacturing Parameters and Mechanical Strength Followed by Uncertainty Propagation of Geometric Defects in Lattice Structures
Authors: Chetra Mang, Ahmadali Tahmasebimoradi, Xavier Lorang
Abstract:
Lattice structures are widely used in various applications, especially in aeronautic, aerospace, and medical applications, because of their high-performance properties. Thanks to advances in additive manufacturing technology, lattice structures can be manufactured by different methods such as laser beam melting. However, the presence of geometric defects in the lattice structures is inevitable due to the manufacturing process. The geometric defects may have a high impact on the mechanical strength of the structures. This work analyzes the correlation between the manufacturing parameters and the mechanical strengths of the lattice structures. To do that, two types of lattice structures, body-centered cubic with z-struts (BCCZ) structures made of Inconel 718 and body-centered cubic (BCC) structures made of Scalmalloy, are manufactured by a laser beam melting machine using a Taguchi design of experiments. Each structure is placed on the substrate with a specific position and orientation relative to the roller direction of the deposited metal powder. The position and orientation are considered as the manufacturing parameters. The geometric defects of each beam in the lattice are characterized and used to build the geometric model in order to perform simulations. Then, the mechanical strengths are defined by the homogenized response in terms of Young's modulus and yield strength. The distribution of mechanical strengths is observed as a function of the manufacturing parameters. The mechanical response of the BCCZ structure is stretch-dominated, i.e., the mechanical strengths are directly dependent on the strengths of the vertical beams. As the geometric defects of the vertical beams change only slightly with their position/orientation on the manufacturing substrate, the mechanical strengths are less dispersed. The manufacturing parameters therefore have little influence on the mechanical strengths of the BCCZ structure. The mechanical response of the BCC structure is bending-dominated. The geometric defects of the inclined beams are highly dispersed within a structure and also vary with their position/orientation on the manufacturing substrate. For different positions/orientations on the substrate, the mechanical responses are highly dispersed as well. This shows that the mechanical strengths are directly impacted by the manufacturing parameters. In addition, this work studies the propagation of the uncertainty of the geometric defects to the mechanical strength of the BCC lattice structure made of Scalmalloy. To do that, we observe the distribution of mechanical strengths of the lattice according to the distribution of the geometric defects. A probability density law is determined based on a statistical hypothesis corresponding to the geometric defects of the inclined beams. Samples of inclined beams are then randomly drawn from the density law to build the lattice structure samples. The lattice samples are then used in simulation to characterize the mechanical strengths. The results reveal that the distribution of mechanical strengths of the structures with the same manufacturing parameters is less dispersed than that of the structures with different manufacturing parameters. Nevertheless, the dispersion of mechanical strengths among structures with the same manufacturing parameters is not negligible. Keywords: geometric defects, lattice structure, mechanical strength, uncertainty propagation
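The uncertainty propagation step described above amounts to a Monte Carlo loop: draw beam-level defects from the fitted probability density law, assemble a lattice sample, simulate it, and collect the resulting strengths. The sketch below mimics that loop with an assumed lognormal defect law and a toy surrogate in place of the finite element homogenization; both are illustrative assumptions, not the study's models.

```python
# Schematic Monte Carlo propagation of beam-level geometric defects to a
# structure-level stiffness. The lognormal defect law and the surrogate
# stiffness relation are placeholders, not the study's fitted models.
import numpy as np

rng = np.random.default_rng(42)

def sample_lattice_modulus(n_beams=192, d_nominal=1.0, sigma=0.08, e_solid=70e3):
    """Return a surrogate homogenized Young's modulus (MPa) for one lattice sample."""
    diameters = d_nominal * rng.lognormal(mean=0.0, sigma=sigma, size=n_beams)
    area_ratio = np.mean((diameters / d_nominal) ** 2)   # mean relative cross-section
    return 0.05 * e_solid * area_ratio                   # toy scaling, stands in for FE

moduli = np.array([sample_lattice_modulus() for _ in range(2000)])
print(f"E_hom: mean = {moduli.mean():.1f} MPa, std = {moduli.std():.1f} MPa")
```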
Procedia PDF Downloads 123767 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator
Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib
Abstract:
Introduction: The use of incubators equipped with time-lapse technology in Assisted Reproductive Technology (ART) allows continuous surveillance. With morphokinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the proposed time-lapse algorithms do not take missing data into account, so some embryos cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the reproductive biology laboratory of the hospital ‘Femme Mère Enfant’ (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n = 557) obtained from couples (n = 108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphokinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to compute a value that allows prediction of blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value to obtain two groups for which the difference in blastocyst formation rate was maximal. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% reached the blastocyst stage. The synthetize value corresponds to the value calculated at a time value of 99, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for the regression coefficient ‘a’, 0.363 (p < 0.001) for the regression coefficient ‘b’, 0.633 (p < 0.001) for the regression coefficient ‘c’, and 0.659 (p < 0.001) for the synthetize value. The results are presented as follows: blastocyst formation rate below the cut-off value versus blastocyst formation rate above the cut-off value. For the regression coefficient ‘a’ the optimum cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001), 0.26 for the regression coefficient ‘b’ (83.9% versus 63.1%, p < 0.001), -4.4 for the regression coefficient ‘c’ (62.2% versus 83.1%, p < 0.001), and 8.89 for the synthetize value (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression allows prediction of the outcome of an embryo even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. The regression coefficient ‘a’ represents the acceleration of cell division, and ‘b’ represents the speed of cell division. We hypothesize that the regression coefficient ‘c’ could represent the intrinsic potential of an embryo. This intrinsic potential could depend on the oocyte from which the embryo originates. These hypotheses should be confirmed by studies analyzing the relationship between the regression coefficients and ART parameters. Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model
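A minimal sketch of this model is given below: fit N = a·T² + b·T + c to whatever (time, cell-count) annotations are available for an embryo (missing observations simply drop out of the least-squares fit), evaluate the fitted curve at T = 99 h to obtain the synthetize value, and score it against the blastocyst outcome with a ROC AUC. The example embryos, annotation times, and outcomes are invented placeholders, not data from the study.

```python
# Quadratic fit per embryo, synthetize value at T = 99 h, and ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def quadratic_coefficients(times_h, cell_counts):
    """Return (a, b, c) of the least-squares fit N = a*T^2 + b*T + c."""
    a, b, c = np.polyfit(times_h, cell_counts, deg=2)
    return a, b, c

def synthetize_value(times_h, cell_counts, t_eval=99.0):
    a, b, c = quadratic_coefficients(times_h, cell_counts)
    return a * t_eval ** 2 + b * t_eval + c

# Hypothetical cohort: each embryo has its own sparse annotation times
embryos = [
    ((5, 26, 38, 51, 66), (1, 2, 4, 8, 10), 1),   # reached blastocyst
    ((5, 30, 45, 70),     (1, 2, 3, 5),     0),   # arrested, one frame missing
    ((6, 25, 40, 52, 68), (1, 2, 4, 7, 12), 1),
]
scores = [synthetize_value(np.array(t), np.array(n)) for t, n, _ in embryos]
outcomes = [y for _, _, y in embryos]
print(roc_auc_score(outcomes, scores))
```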
Procedia PDF Downloads 306766 The Development of Traffic Devices Using Natural Rubber in Thailand
Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern
Abstract:
Natural rubber used for traffic devices in Thailand has been developed and researched for several years. When compared with Dry Rubber Content (DRC), the quality of Ribbed Smoked Sheet (RSS) is better. However, the cost of admixtures, especially CaCO₃ and sulphur, is higher than the cost of RSS itself. In this research, Flexible Guideposts and Rubber Fender Barriers (RFB) are taken into consideration. In the case of flexible guideposts, both RSS and DRC60% are used as materials, but for RFB, only RSS is used due to the controlled performance tests. The objective of flexible guideposts and RFB is to decrease the number of accidents, fatality rates, and serious injuries. Both devices function to protect road users and vehicles as well as to absorb impact forces from vehicles so as to reduce serious road accidents. This leads to mitigation methods that reduce the injuries of motorists from severe to moderate. The solution is to find the best practice for traffic devices using natural rubber under engineering concepts. In addition, material performance measures, such as tensile strength and durability, are evaluated to obtain the modulus of elasticity and related properties. In the laboratory, crash simulations, finite element analysis of materials, LRFD, and concrete technology methods are taken into account. After calculation, trial compositions of materials are mixed and tested in the laboratory. The tensile, compressive, and weathering (durability) tests follow ASTM standards. Furthermore, the Cycle-Repetition Test of Flexible Guideposts will be taken into consideration. The final decision is to fabricate all materials and build a real test section in the field. In the RFB test, there will be 13 crash tests: 7 pickup truck tests and 6 motorcycle tests. Vehicular crash testing takes place for the first time in Thailand, applying trial-and-error methods; for example, the road crash test under the NCHRP TL-3 standard (100 kph) is changed to MASH 2016. This is owing to the fact that MASH 2016 is better than NCHRP in terms of the speed, types, and weight of vehicles and the angle of crash. In the MASH procedure, Test Level 6 (TL-6), which comprises a 2,270 kg pickup truck, 100 kph, and a 25-degree crash angle, is selected. The final test for a real crash will be done, and the whole system will be evaluated again in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will no longer be among the top ten countries for road accidents in the world. Keywords: LRFD, Load and Resistance Factor Design, ASTM, American Society for Testing and Materials, NCHRP, National Cooperative Highway Research Program, MASH, Manual for Assessing Safety Hardware
Procedia PDF Downloads 128765 Criticality of Adiabatic Length for a Single Branch Pulsating Heat Pipe
Authors: Utsav Bhardwaj, Shyama Prasad Das
Abstract:
To meet the extensive thermal management requirements of circuit card assemblies (CCAs), satellites, PCBs, microprocessors, and other electronic circuitry, pulsating heat pipes (PHPs) have emerged in the recent past as one of the best technical solutions. However, industrial application of PHPs remains largely unexplored due to their poor reliability. There are several system as well as operational parameters which not only affect the performance of an operating PHP but also decide whether the PHP can operate sustainably at all. Functioning may be halted completely for particular combinations of the values of the system and operational parameters. Among the system parameters, the adiabatic length is one of the important ones. In the present work, the simplest single-branch PHP system with an adiabatic section has been considered. It is assumed to have only one vapour bubble and one liquid plug. First, the system has been mathematically modeled using a film evaporation/condensation model, followed by the steps of recognition of the equilibrium zone, non-dimensionalization, and linearization. Then, proceeding with a periodic solution of the linearized and reduced differential equations, a stability analysis has been performed. Slow and fast variables have been identified, and an averaging approach has been used for the slow ones. Ultimately, the temporal evolution of the PHP is predicted by numerically solving the averaged equations, to determine whether the oscillations are likely to sustain or decay in time. The stability threshold has also been determined in terms of non-dimensional numbers formed by different groupings of the system and operational parameters. A combined analytical and numerical approach has been used, and it has been found that for each combination of all other parameters, there exists a maximum length of the adiabatic section beyond which the PHP cannot function at all. This length has been called the “Critical Adiabatic Length (L_ac)”. For adiabatic lengths greater than “L_ac”, the oscillations are always found to decay sooner or later. The dependence of “L_ac” on some other parameters has also been checked and correlated at certain evaporator and condenser section temperatures. “L_ac” has been found to increase linearly with the evaporator section length (L_e), whereas the condenser section length (L_c) has been found to have almost no effect on it up to a certain limit. But at considerably large condenser section lengths, “L_ac” is expected to decrease with increasing “L_c” due to increased wall friction. A rise in the static pressure (p_r) exerted by the working fluid reservoir makes “L_ac” rise exponentially, whereas it increases cubically with the inner diameter (d) of the PHP. The physics behind all such variations is also discussed. Thus, a methodology for quantification of the critical adiabatic length for any possible set of all other PHP parameters has been established. Keywords: critical adiabatic length, evaporation/condensation, pulsating heat pipe (PHP), thermal management
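Once a stability check is available (in the study, integrating the averaged equations and observing whether the oscillations sustain or decay), the critical adiabatic length can be bracketed by a simple bisection over the adiabatic length. The sketch below assumes such a check is supplied as a callable; `toy_stability_check` is a made-up placeholder, not the study's model.

```python
# Bisection for the critical adiabatic length, given a user-supplied stability check.

def critical_adiabatic_length(stability_check, l_low=0.0, l_high=1.0, tol=1e-4):
    """Bracket the largest adiabatic length (m) at which oscillations still sustain.

    `stability_check(l_a)` must return True if oscillations sustain at length l_a.
    Assumes oscillations sustain at l_low and decay at l_high.
    """
    while l_high - l_low > tol:
        mid = 0.5 * (l_low + l_high)
        if stability_check(mid):
            l_low = mid      # still sustaining: critical length lies above mid
        else:
            l_high = mid     # decaying: critical length lies below mid
    return 0.5 * (l_low + l_high)

# Placeholder check: pretend oscillations sustain below 0.137 m
toy_stability_check = lambda l_a: l_a < 0.137
print(critical_adiabatic_length(toy_stability_check))  # ~0.137
```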
Procedia PDF Downloads 226764 A Comparative Case Study of Institutional Work in Public Sector Organizations: Creating Knowledge Management Practice
Authors: Dyah Adi Sriwahyuni
Abstract:
Institutional work has become a prominent and contemporary institutional theory perspective in organization studies. A wealth of studies in organizations have explored actor activities in creating, maintaining, and disrupting institutions at the field level. However, the exploration of the work of actors in creating new management practices at the organizational level has been somewhat limited. The current institutional work literature mostly describes the work of actors at the field level and ignores organizational actors who work to realize management practices. Organizational actors here are defined as actors in organizations who work to institutionalize a particular management practice within the organizations. The extant literature has also generalized the types of management practices, which meant overlooking the unique characteristics of each management fashion as well as a management practice. To fill these gaps, this study aims to provide empirical evidence so as to contribute theoretically to institutional work through a comparative case study on organizational actors’ creation of knowledge management (KM) practice in two public sector organizations in Indonesia. KM is a contemporary management practice employed to manage individual and organizational knowledge in order to improve organizational performance. This practice presents a suitable practical setting with which to provide a rich understanding of the organizational actors’ institutional work and their connection with technology. Drawing on and extending the work of Perkmann and Spicer (2008), this study explores the forms of institutional work performed by organizational actors, including their motivation, skills, challenges, and opportunities. The primary data collection is semi-structured interviews with knowledgeable actors and document analysis for validity and triangulation. Following Eisenhardt's cross-case patterns, the researcher analyzed the collected data focusing on within-group similarities and intergroup differences. The researcher coded interview data using NVivo and used documents to corroborate the findings. The study’s findings add to the wealth of institutional theory literature in organization studies, particularly institutional work related to management practices. This study builds a theory about the work of organizational actors in creating knowledge management practices. Using the perspective of institutional work, research can show the roles of the various actors involved, their practices, and their relationship to technology (materiality), not only focusing on actors with a power which has been the theorizing of institutional entrepreneurship. The development of knowledge management practices in the Indonesian public sector is also a significant additional contribution, given that the current KM literature is dominated by conceptualizing the KM framework and the impact of KM on organizations. The public sector, which is the research setting, also provides important lessons on how actors in a highly institutionalized context are creating an institution, in this case, a knowledge management practice.Keywords: institutional work, knowledge management, case study, public sector organizations
Procedia PDF Downloads 118763 Feeding Value Improvement of Rice Straw Fermented by Spent Mushroom Substrate on Growth and Lactating Performance of Dairy Goat
Authors: G. J. Fan, T. T. Lee, M. H. Chen, T. F. Shiao, B. Yu, C. F. Lee
Abstract:
Rice straw, with its poor feed quality, and spent mushroom substrate are two of the most abundant agricultural residues in Taiwan. Edible mushrooms from white rot fungi possess lignocellulase activity. Solid-state fermentation pretreatment using spent mushroom substrate was therefore expected to improve the feeding value of rice straw for ruminants. Six varieties or subspecies of spent edible mushroom substrate (Pleurotus ostreatus (blue or white color), P. sajor-caju, P. citrinopileatus, P. eryngii, and Ganoderma lucidum) were evaluated in a solid-state fermentation process with rice straw for 8 wks. The quality improvement of the fermented rice straw was determined by its in vitro digestibility, lignocellulose degradability, and cell wall breakdown examined by scanning electron microscopy. The results showed that Pleurotus ostreatus (white color) and P. sajor-caju degraded lignocellulose better than the others and were chosen for the subsequent in vivo study. Rice straw fermented with spent Pleurotus ostreatus or Pleurotus sajor-caju mushroom substrate for 8 wks was prepared for the growing and lactating feeding trials of dairy goats, respectively. Pangolagrass hay at 15% of diet dry matter was the control diet. Fermented or original rice straw was added to substitute for pangolagrass hay in both feeding trials. A total of 30 head of castrated Alpine rams were assigned to three groups for 11 weeks, with 5 pens (2 head/pen) per group. A total of 21 head of Saanen and Alpine goats were assigned to three treatments and individually fed in two repeated 28-d lactation trials. In the castrated ram study, rice straw fermented by spent Pleurotus ostreatus mushroom substrate gave higher daily dry matter intake (DMI, 1.53 vs. 1.20 kg) and body weight gain (138 vs. 101 g) than the original rice straw. DMI (2.25 vs. 1.81 kg) and milk yield (3.31 vs. 3.02 kg) of lactating goats fed the control pangolagrass diet and the rice straw fermented by spent Pleurotus sajor-caju mushroom substrate were also higher than those of goats fed the original rice straw diet (P < 0.05). Milk composition (milk fat, protein, total solids, and lactose) was similar among treatments. In conclusion, solid-state fermentation by spent Pleurotus ostreatus or Pleurotus sajor-caju mushroom substrate can effectively improve the feeding value of rice straw. Fermented rice straw is a good alternative fiber feed resource for growing and lactating dairy goats, and a level of 15% of diet dry matter is recommended. Keywords: feeding value, fermented rice straw, growing and lactating dairy goat, spent Pleurotus ostreatus and Pleurotus sajor-caju mushroom substrate
Procedia PDF Downloads 174762 Environmental Performance of Different Lab Scale Chromium Removal Processes
Authors: Chiao-Cheng Huang, Pei-Te Chiueh, Ya-Hsuan Liou
Abstract:
Chromium-contaminated wastewater from electroplating industrial activity has been a long-standing environmental issue, as it can degrade surface water quality and is harmful to soil ecosystems. The traditional method of treating chromium-contaminated wastewater has been chemical coagulation. However, this method consumes large amounts of chemicals such as sulfuric acid, sodium hydroxide, and sodium bicarbonate in order to remove chromium. In recent years, a series of new methods for treating chromium-containing wastewater have been developed. This study aimed to compare the environmental impact of four different lab-scale chromium removal processes: 1) a chemical coagulation process (the most common and traditional method), in which sodium metabisulfite was used as the reductant; 2) an electrochemical process using two steel sheets as electrodes; 3) reduction by iron-copper bimetallic powder; and 4) photocatalysis by TiO₂. Each process was run in the lab and was able to achieve 100% removal of chromium in solution. A Life Cycle Assessment (LCA) study was then conducted based on the experimental data obtained from the four case studies to identify the environmentally preferable alternative for treating chromium wastewater. The model used for calculating the environmental impact was TRACI, and the system scope includes the production phase and use phase of the chemicals and electricity consumed by the chromium removal processes, as well as the final disposal of the chromium-containing sludge. The functional unit chosen in this study was the removal of 1 mg of chromium. The solution volume of each case study was adjusted to 1 L in advance, and the chemicals and energy consumed were scaled proportionally. The emissions and resources consumed were identified and characterized into 15 categories of midpoint impacts. The impact assessment results show that the human ecotoxicity category accounts for 55% of the environmental impact in Case 1, which can be attributed to the sulfuric acid used for pH adjustment. In Case 2, production of the steel sheet electrodes is an energy-intensive process and thus contributes 20% of the environmental impact. In Case 3, sodium bicarbonate is used as an anti-corrosion additive, which results mainly in 1.02E-05 comparative toxicity units (CTU) in the human toxicity category and 0.54E-05 (CTU) in acidification of air. In Case 4, electricity consumption by the UV lamp power supply gives 5.25E-05 (CTU) in the human toxicity category and 1.15E-05 (kg N-eq) in eutrophication. In conclusion, Case 3 and Case 4 have higher environmental impacts than Case 1 and Case 2, which can be attributed mostly to higher energy and chemical consumption, leading to high impacts in the global warming and ecotoxicity categories. Keywords: chromium, lab scale, life cycle assessment, wastewater
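The characterization step behind these category scores follows the usual LCA arithmetic: each midpoint impact is the sum over inventory flows of amount times characterization factor, reported per functional unit (here, removal of 1 mg of chromium). The sketch below only illustrates that bookkeeping; the inventory amounts and factors are placeholders, not values from the study or from the TRACI database.

```python
# Midpoint characterization: impact[category] = sum_i amount_i * CF_{i, category},
# scaled to the functional unit. All amounts and factors are placeholders.

def characterize(inventory, factors, cr_removed_mg):
    """Return impact scores per category, per mg of chromium removed."""
    scores = {}
    for category, cf_per_flow in factors.items():
        total = sum(amount * cf_per_flow.get(flow, 0.0)
                    for flow, amount in inventory.items())
        scores[category] = total / cr_removed_mg
    return scores

inventory = {"sulfuric_acid_kg": 0.002, "electricity_kWh": 0.05}          # placeholders
factors = {
    "human_toxicity_CTU": {"sulfuric_acid_kg": 1.0e-3, "electricity_kWh": 4.0e-4},
    "eutrophication_kgNeq": {"electricity_kWh": 2.0e-4},
}
print(characterize(inventory, factors, cr_removed_mg=1.0))
```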
Procedia PDF Downloads 265761 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of the short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance scores in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system. Keywords: data refinement, machine learning, mutual information, short-term latency prediction
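The prediction setup described above can be sketched as follows: an XGBoost regressor maps lagged median latencies and the total accumulation onto the median latency of the next 5-minute window, and is compared against the "median latency 15 minutes ago" baseline via mean square error, with feature importances read off the fitted model. The feature names and synthetic data below are placeholders, not the Taiwan Freeway dataset itself.

```python
# Hedged sketch: median-latency regression with XGBoost vs. a lagged-median baseline.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "median_lat_5min_ago": rng.gamma(9, 60, n),     # seconds, placeholder data
    "median_lat_10min_ago": rng.gamma(9, 60, n),
    "median_lat_15min_ago": rng.gamma(9, 60, n),
    "total_accumulation": rng.poisson(800, n),
})
df["target_median_lat"] = (0.7 * df["median_lat_5min_ago"]
                           + 0.1 * df["total_accumulation"]
                           + rng.normal(0, 30, n))

train, test = df.iloc[:1500], df.iloc[1500:]
features = ["median_lat_5min_ago", "median_lat_10min_ago",
            "median_lat_15min_ago", "total_accumulation"]

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(train[features], train["target_median_lat"])

mse_model = mean_squared_error(test["target_median_lat"], model.predict(test[features]))
mse_baseline = mean_squared_error(test["target_median_lat"], test["median_lat_15min_ago"])
print(mse_model, mse_baseline)
print(dict(zip(features, model.feature_importances_)))  # input ranking, cf. point (3)
```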
Procedia PDF Downloads 169