Search results for: BIM application
668 Viability Analysis of a Centralized Hydrogen Generation Plant for Use in Oil Refining Industry
Authors: C. Fúnez Guerra, B. Nieto Calderón, M. Jaén Caparrós, L. Reyes-Bozo, A. Godoy-Faúndez, E. Vyhmeister
Abstract:
The global energy system is undergoing a change of scenario. Unstable energy markets and an increasing focus on climate change and sustainable development are forcing businesses to pursue new solutions in order to ensure future economic growth. This has led to interest in using hydrogen as an energy carrier in transportation and industrial applications. As an energy carrier, hydrogen is accessible and has a high gravimetric energy density. Abundant in hydrocarbons, hydrogen can play an important role in the shift towards low-emission fossil value chains. By combining hydrogen production by natural gas reforming with carbon capture and storage, the overall CO2 emissions are significantly reduced. In addition, the flexibility of hydrogen as an energy storage medium makes it applicable as a stabilizer in the renewable energy mix. The recent development in hydrogen fuel cells is also raising expectations for a hydrogen-powered transportation sector. Hydrogen value chains exist to a large extent in industry today. Global hydrogen consumption was approximately 50 million tonnes (7.2 EJ) in 2013, with refineries, ammonia and methanol production, and metal processing as the main consumers. Natural gas reforming produced 48% of this hydrogen, but without carbon capture and storage (CCS). The total emissions from this production reached 500 million tonnes of CO2; hence, alternative production methods with lower emissions will be necessary in future value chains. Hydrogen from electrolysis has been used for a wide range of industrial chemical reactions for many years. Possibly the earliest use was for the production of ammonia-based fertilisers by Norsk Hydro, with a test reactor set up in Notodden, Norway, in 1927. This application also claims one of the world’s largest electrolyser installations, at Sable Chemicals in Zimbabwe. Its array of 28 electrolysers draws 80 MW, producing around 21,000 Nm3/h of hydrogen. These electrolysers can compete if cheap sources of electricity are available and natural gas for steam reforming is relatively expensive. Because electrolysis of water produces oxygen as a by-product, a system of Autothermal Reforming (ATR) utilizing this oxygen has been analyzed. Replacing the air separation unit with electrolysers supplies the required amount of oxygen to the ATR as well as additional hydrogen. The aim of this paper is to evaluate the technical and economic potential of large-scale production of hydrogen for the oil refining industry. Sensitivity analyses of parameters such as investment costs, plant operating hours, electricity price and sale price of hydrogen and oxygen are performed.
Keywords: autothermal reforming, electrolyser, hydrogen, natural gas, steam methane reforming
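The kind of sensitivity analysis the abstract describes can be sketched as a one-at-a-time parameter scan over a levelized cost of hydrogen. All figures below (CAPEX, annuity factor, conversion factor, prices) are hypothetical placeholders for illustration, not values from the paper:

```python
# Minimal levelized-cost sensitivity sketch for an electrolyser-based H2 plant.
# Every number here is a hypothetical placeholder, not a result from the paper.

def levelized_cost_h2(capex_eur, annuity_factor, operating_hours,
                      power_mw, electricity_eur_mwh, kg_h2_per_mwh=18.0):
    """EUR per kg of H2 for one parameter combination."""
    energy_mwh = power_mw * operating_hours        # annual electricity use
    h2_kg = energy_mwh * kg_h2_per_mwh             # annual hydrogen output
    annual_capex = capex_eur * annuity_factor      # annualized investment cost
    opex = energy_mwh * electricity_eur_mwh        # annual electricity bill
    return (annual_capex + opex) / h2_kg

# Base case (placeholder values), then vary one parameter at a time.
base = dict(capex_eur=40e6, annuity_factor=0.08, power_mw=80,
            operating_hours=8000, electricity_eur_mwh=40.0)

for price in (20, 40, 60):
    cost = levelized_cost_h2(**{**base, "electricity_eur_mwh": price})
    print(f"electricity {price} EUR/MWh -> {cost:.2f} EUR/kg H2")

for hours in (4000, 6000, 8000):
    cost = levelized_cost_h2(**{**base, "operating_hours": hours})
    print(f"{hours} h/year -> {cost:.2f} EUR/kg H2")
```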
Procedia PDF Downloads 211
667 Nonlinear Optics of Dirac Fermion Systems
Authors: Vipin Kumar, Girish S. Setlur
Abstract:
Graphene has been recognized as a promising 2D material with many new properties. However, pristine graphene is gapless, which hinders its direct application towards graphene-based semiconducting devices. Graphene is a zero-gap, linearly dispersing semiconductor. Massless charge carriers (quasi-particles) in graphene obey the relativistic Dirac equation. These Dirac fermions show very unusual electronic, optical and transport properties. Graphene is analogous to two-level atomic systems and conventional semiconductors. We may expect that graphene-based systems will also exhibit phenomena that are well known in two-level atomic systems and in conventional semiconductors. Rabi oscillation is a nonlinear optical phenomenon well known in the context of two-level atomic systems and also in conventional semiconductors. It is the periodic exchange of energy between the system of interest and the electromagnetic field. The present work describes the phenomenon of Rabi oscillations in graphene-based systems. Rabi oscillations have already been described theoretically and experimentally in the extensive literature available on this topic. To describe Rabi oscillations, these studies use an approximation known as the rotating wave approximation (RWA), well known in studies of two-level systems. The RWA is valid only near conventional resonance (small detuning), when the frequency of the external field is nearly equal to the particle-hole excitation frequency. The Rabi frequency goes through a minimum close to conventional resonance as a function of detuning. Far from conventional resonance, the RWA becomes rather less useful, and we need some other technique to describe the phenomenon of Rabi oscillation. In conventional systems, there is no second minimum - the only minimum is at conventional resonance. But in graphene we find anomalous Rabi oscillations far from conventional resonance, where the Rabi frequency goes through a minimum that is much smaller than the conventional Rabi frequency. This is known as the anomalous Rabi frequency and is unique to graphene systems. We have shown that this is attributable to the pseudo-spin degree of freedom in graphene systems. A new technique, an alternative to the RWA called the asymptotic RWA (ARWA), has been invoked by our group to discuss the phenomenon of Rabi oscillation. The experimentally accessible current density shows different types of threshold behaviour in the frequency domain close to the anomalous Rabi frequency, depending on the system chosen. For single layer graphene, the exponent at threshold is equal to 1/2, while in the case of bilayer graphene, it is computed to be equal to 1. Bilayer graphene shows harmonic (anomalous) resonances absent in single layer graphene. The effect of asymmetry and trigonal warping (a weak direct inter-layer hopping in bilayer graphene) on these oscillations is also studied in graphene systems. Asymmetry has a remarkable effect only on anomalous Rabi oscillations, whereas the Rabi frequency near conventional resonance is not significantly affected by the asymmetry parameter. In the presence of asymmetry, these graphene systems show Rabi-like oscillations (offset oscillations) even for vanishingly small applied field strengths (less than the gap parameter). The frequency of offset oscillations may be identified with the asymmetry parameter.
Keywords: graphene, bilayer graphene, Rabi oscillations, Dirac fermion systems
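For reference, the conventional two-level RWA result that the abstract contrasts against can be written compactly; this is the textbook expression, not the ARWA result derived in the paper:

```latex
% Textbook two-level RWA result: generalized Rabi frequency versus detuning.
\begin{equation}
  \Omega_R \;=\; \sqrt{\omega_1^{2} + \Delta^{2}},
  \qquad \Delta = \omega - \omega_{0}
\end{equation}
% Here \omega_1 \propto E_0 is the bare Rabi frequency set by the field
% amplitude, \omega the drive frequency and \omega_0 the particle-hole
% excitation frequency. \Omega_R is minimal (= \omega_1) exactly at
% conventional resonance \Delta = 0 -- the single minimum the abstract
% contrasts with the anomalous second minimum found in graphene far from
% resonance.
```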
Procedia PDF Downloads 298
666 Jurisdiction Conflicts in Contracts of International Maritime Transport: The Application of the Forum Selection Clause in Brazilian Courts
Authors: Renan Caseiro De Almeida, Mateus Mello Garrute
Abstract:
The world is becoming ever more globalised. This trend promotes an increase in the number of transnational commercial transactions. The main mode of carriage of goods is by sea, and many countries’ economies depend on maritime freight, whether because they carry out this activity on a large scale or because they rely heavily on maritime logistics. Brazil is among them. The country has sixteen ports of good capacity, which receive most of its international trade by sea. It is estimated that 85 per cent of the total inflow of goods into Brazil arrives by sea, leaving a mere 15 per cent for the other modes. This has made it necessary to develop maritime law on both an international and a national basis, creating standards intended to harmonize the transnational carriage of goods by sea. Maritime contracts are very specific and have interesting peculiarities, but little research has been done on what causes the main divergences in international contracts: the jurisdiction conflict. As in any other international contract, it is common for the parties to set a forum selection clause to choose the forum that will judge the litigation that could arise from a maritime transport contract and, consequently, which law should be applied to the case. However, the choice of forum in Brazil has always been somewhat controversial – not only in the maritime law sphere – since national tribunals sometimes overlook the parties’ choice and claim jurisdiction for themselves. In this sense, it is interesting to mention that the Mexico Convention of 1994 on the law applicable to international contracts did not gain strength in Brazil and never even reached Congress to be considered for ratification. Furthermore, it is also noteworthy that Brazil has a new Civil Procedure Code, which came into force in 2016, bringing new legal provisions specifically about forum selection. This represented a milestone for the national legal system in this matter. Therefore, this paper intends to give insight into Brazilian jurisprudence, analysing how this issue has been treated in litigation over maritime contracts in the national tribunals, as well as the solutions found by the Brazilian legal system for jurisdiction conflicts in those cases. To achieve the expected results, the hypothetical-deductive method will be used in combination with research on doctrine and legislation. Jurisprudential research and case law study will also have a special role, since the main point of this paper is to verify and study the position of the courts in Brazil on a specific matter. As a civil law country, Brazil has judges and tribunals that are closely bound to the rules set out in codes. However, the jurisprudential understanding has been changing over the years, and with the advent of the new rules on applicable law and the forum selection clause, it is noticeable that new winds are blowing.
Keywords: applicable law, forum selection clause, international business, international maritime contracts, litigation in courts
Procedia PDF Downloads 274
665 An Inquiry into the Usage of Complex Systems Models to Examine the Effects of the Agent Interaction in a Political Economic Environment
Authors: Ujjwall Sai Sunder Uppuluri
Abstract:
Group theory is a powerful tool that researchers can use to provide a structural foundation for their Agent Based Models. This paper argues that such Agent Based Models are the future of the social science disciplines. More specifically, researchers can use them to apply evolutionary theory to the study of complex social systems. This paper illustrates one example of how, in theory, an Agent Based Model can be formulated by applying group theory, system dynamics, and evolutionary biology to analyze the strategies states pursue to mitigate risk and maximize the use of resources in order to achieve the objective of economic growth. This example can be applied to other social phenomena, and this is what makes group theory so useful for the analysis of complex systems: the theory provides the mathematical proof for validating the complex system models that researchers build, as the paper will discuss. The aim of this research is also to provide researchers with a framework that can be used to model political entities such as states in a 3-dimensional space, with the x-axis representing the resources (tangible and intangible) available to them, y the risks, and z the objective. There also exist other states with different constraints pursuing different strategies to climb the mountain. This mountain’s environment is made up of the risks the state faces and its resource endowments. The mountain is also layered in the sense that it has multiple peaks that must be overcome to reach the tallest peak. A state that sticks to a single strategy, or pursues a strategy that is not conducive to climbing the specific peak it has reached, cannot continue to advance. To overcome the obstacle in its path, the state must innovate. Based on the definition of a group, we can categorize each state as its own group. Each state is a closed system, one made up of micro-level agents who have their own vectors and pursue strategies (actions) to achieve sub-objectives. The state also has an identity, the inverse being anarchy and/or inaction. Finally, the agents making up a state interact with each other through competition and collaboration to mitigate risks and achieve sub-objectives that fall within the primary objective. Thus, researchers can categorize the state as an organism that reflects the sum of the output of the interactions pursued by agents at the micro level. When states compete, they employ a strategy, and the state with the better strategy (reflected in the strategies pursued by its parts) is able to out-compete its counterpart to acquire some resource, mitigate some risk or fulfil some objective. This paper will attempt to illustrate how group theory combined with evolutionary theory and system dynamics can allow researchers to model the long-run development, evolution, and growth of political entities through a bottom-up approach.
Keywords: complex systems, evolutionary theory, group theory, international political economy
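As an illustration of this framing, a state can be sketched as a closed group of micro-level agents whose aggregated payoffs move it along the resource (x), risk (y) and objective (z) axes. The dynamics below are invented placeholders, not the authors’ equations:

```python
# Toy agent-based sketch of the paper's framing: a state is a closed "group"
# of micro-agents whose interactions aggregate into movement along three
# axes (resources x, risk y, objective z). All rules are placeholders.
import random

class Agent:
    def __init__(self):
        self.effort = random.uniform(0.0, 1.0)   # contribution to sub-objectives

    def act(self, strategy):
        # competition/collaboration crudely collapsed into one payoff number
        return self.effort * strategy["efficiency"] - strategy["exposure"]

class State:
    def __init__(self, n_agents, strategy):
        self.agents = [Agent() for _ in range(n_agents)]
        self.strategy = strategy
        self.x, self.y, self.z = 1.0, 0.5, 0.0   # resources, risk, objective

    def step(self):
        payoff = sum(a.act(self.strategy) for a in self.agents)
        self.z += max(payoff, 0.0) * self.x      # climb the current peak
        self.y = min(1.0, self.y + self.strategy["exposure"])
        if self.y > 0.9:                         # stuck on a peak: innovate
            self.strategy["efficiency"] *= 1.1
            self.y *= 0.5

a = State(100, {"efficiency": 0.02, "exposure": 0.001})
b = State(100, {"efficiency": 0.01, "exposure": 0.0005})
for _ in range(50):
    a.step(); b.step()
print(f"state A objective: {a.z:.2f}, state B objective: {b.z:.2f}")
```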
Procedia PDF Downloads 139
664 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography
Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld
Abstract:
With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, which is based on the measurement of the integral of a single slice dose profile using a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done for any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions is known as the MOSkin, which was developed by the Centre for Medical Radiation Physics at the University of Wollongong; it measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter for X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionization Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the device’s dose response for the standard radiation qualities RQT 8, RQT 9 and RQT 10. Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made against the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter of 16 cm) and exposed in the axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results have shown that the MOSkin response was linear with doses in the CT range and reproducible (98.52%). The sensitivity for a single MOSkin in mV/cGy was as follows: 9.208, 7.691 and 6.723 for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied by up to a factor of 1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy, respectively, which were statistically equivalent within the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.
Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry
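The accumulated-dose idea, i.e., translating a small detector through the beam plane and integrating the recorded dose profile, can be sketched numerically as follows. The profile values and step size are synthetic; only the 9.208 mV/cGy sensitivity (RQT 8) is taken from the abstract, and dividing the integral by the nominal collimation width is the standard CTDI-style convention assumed here:

```python
# Sketch: integrate a dose profile D(z) recorded by a translated MOSkin.
# The Gaussian "readings" below are synthetic stand-ins for measured data.
import numpy as np

SENS_MV_PER_CGY = 9.208                 # single MOSkin, RQT 8 (from abstract)

dz = 1.0                                # detector translation step (mm)
z = np.arange(-50.0, 50.0, dz)          # positions along the scan axis (mm)
signal_mv = 40.0 * np.exp(-z**2 / (2 * 6.0**2))   # fake MOSkin readings (mV)

dose_cgy = signal_mv / SENS_MV_PER_CGY  # convert each reading to dose
profile_integral = float(np.sum(dose_cgy) * dz)   # rectangle rule, cGy*mm

beam_width_mm = 9.0                     # nominal collimation used in the scan
accumulated_dose = profile_integral / beam_width_mm   # CTDI-like average, cGy
print(f"accumulated dose ~ {accumulated_dose:.2f} cGy")
```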
Procedia PDF Downloads 311
663 Reflective Portfolio to Bridge the Gap in Clinical Training
Authors: Keenoo Bibi Sumera, Alsheikh Mona, Mubarak Jan Beebee Zeba Mahetaab
Abstract:
Background: Due to the busy schedule of practicing clinicians at the hospitals, students may not always be attended to, which is to their detriment. The clinicians at the hospitals are also not always acquainted with teaching and/or supervising students on their placements. Additionally, there is a high student-patient ratio. As prospective clinical doctors in training, students need to reach competence in clinical decision-making skills to be able to serve the healthcare system of the country and to be safe doctors. Aims and Objectives: A reflective portfolio was used to provide a means for students to learn by reflecting on their experiences and obtaining continuous feedback. This practice is an attempt to compensate for the scarcity of resources, that is, clinical placement supervisors and patients. It is also anticipated that it will provide learners with a continuous monitoring and learning gap analysis tool for their clinical skills. Methodology: A hardcopy reflective portfolio was designed and validated. The portfolio incorporated a mini clinical evaluation exercise (mini-CEX), direct observation of procedural skills, and reflection sections. Workshops were organized separately for the stakeholders, that is, the management, faculty and students. The rationale for reflection was emphasized, and students were given samples of reflective writing. The portfolio was then implemented amongst the undergraduate medical students of years four, five and six during clinical clerkship. After 16 weeks of implementation of the portfolio, a survey questionnaire was introduced to explore how undergraduate students perceive the educational value of the reflective portfolio and its impact on their deep information processing. Results: The majority of the respondents were in MD Year 5. Out of 52 respondents, 57.7% were in the internal medicine clinical placement rotation, and 42.3% were in the otorhinolaryngology clinical placement rotation. The respondents believe that the reflective portfolio helped them identify their weaknesses, supported their professional development by helping them identify areas where their knowledge is good, increased the learning value when used as a formative assessment, helped them relate learning across different courses, and improved their professional skills. However, the portfolio did not necessarily improve the respondents’ self-esteem or help develop their critical thinking. The portfolio takes time to complete, and supervisor input was not always helpful; students had to chase supervisors for feedback. 53.8% of the respondents followed the Gibbs reflective model to write the reflection, whilst the others did not follow any guidelines. 48.1% said that the feedback was helpful; 17.3% preferred written feedback, whilst 11.5% preferred oral feedback. Most of them suggested more frequent feedback. 59.6% of respondents found the current portfolio user-friendly, and 28.8% thought it was too bulky; 27.5% asked for a mobile application. Conclusion: Through reflection on their work and regular feedback from supervisors, the reflective portfolio has an overall positive impact on the learning process of undergraduate medical students during their clinical clerkship.
Keywords: portfolio, reflection, feedback, clinical placement, undergraduate medical education
Procedia PDF Downloads 86
662 Curriculum Transformation: Multidisciplinary Perspectives on ‘Decolonisation’ and ‘Africanisation’ of the Curriculum in South Africa’s Higher Education
Authors: Andre Bechuke
Abstract:
The years 2015-2017 witnessed a huge campaign, and in some instances violent protests, in South Africa by students and some groups of academics advocating the decolonisation of the curriculum of universities. These protests created high expectations for universities to teach a curriculum relevant to the country and the continent, as well as one that enables South Africa to participate in the globalised world. To realise this purpose, most universities are currently undertaking steps to transform and decolonise their curriculum. However, the transformation process is challenged and delayed by the lack of a collective understanding of the concepts ‘decolonisation’ and ‘Africanisation’ that should guide its application. Even more challenging is the lack of a contextual understanding of these concepts across different university disciplines. Against this background, and underpinned by a qualitative research paradigm, the perspectives on these concepts as applied by different university disciplines were examined in order to understand and establish their implementation in the curriculum transformation agenda. Data were collected by reviewing the teaching and learning plans of 8 faculties of an institution of higher learning in South Africa and analysed through content and textual analysis. The findings revealed varied understanding and use of these concepts in the transformation of the curriculum across faculties. Decolonisation, according to the faculties of Law and Humanities, is perceived as the eradication of the Eurocentric positioning in curriculum content and the constitutive rules and norms that control thinking. This is not done by ignoring other knowledge traditions, but it does call for an affirmation and validation of African views of the world and systems of thought, mixing them with current knowledge. For the Faculty of Natural and Agricultural Sciences, decolonisation is seen as making the content of the curriculum relevant to students, fulfilling the needs of industry and equipping students for job opportunities. This means the use of teaching strategies and methods that are inclusive of students from diverse cultures, and structuring the learning experience in ways that are not alien to the students’ cultures. For the Health Sciences, decolonisation of the curriculum refers to the need for a shift in Western thinking towards being more sensitive to all cultural beliefs and thoughts. Collectively, decolonisation of education thus entails that a nation must become independent with regard to the acquisition of knowledge, skills, values, beliefs, and habits. Based on the findings, for universities to successfully transform their curriculum and integrate the concepts of decolonisation and Africanisation, there is a need to contextually determine the meaning of the concepts generally and narrow them down to what they should mean to specific disciplines. Universities should refrain from taking an umbrella approach to these concepts. Decolonisation should be seen as a means and not an end. A decolonised curriculum should equally be developed based on the finest knowledge, skills, values, beliefs and habits from around the world, not limited to one country or continent.
Keywords: Africanisation, curriculum, transformation, decolonisation, multidisciplinary perspectives, South Africa’s higher education
Procedia PDF Downloads 161
661 Extraction of Rice Bran Protein Using Enzymes and Polysaccharide Precipitation
Authors: Sudarat Jiamyangyuen, Tipawan Thongsook, Riantong Singanusong, Chanida Saengtubtim
Abstract:
Rice is a staple food as well as an export commodity of Thailand. Rice bran, a 10.5% constituent of the rice grain, is a by-product of the rice milling process. Rice bran is normally used as a raw material for rice bran oil production or sold as low-priced feed. Therefore, this study aimed to increase the value of defatted rice bran, obtained after the extraction of rice bran oil. Conventionally, the protein in defatted rice bran is extracted using alkaline extraction and acid precipitation, which reduces the nutritious components of the bran. Rice bran protein concentrate is suitable for those who are allergic to proteins from other sources, e.g., milk and wheat. In addition to its hypoallergenic property, rice bran protein also contains a good quantity of lysine. Thus, it may act as a suitable ingredient for infant food formulations while adding variety to the restricted diets of children with food allergies. The objectives of this study were to compare the properties of rice bran protein concentrate (RBPC) extracted from defatted rice bran using enzymes together with a precipitation step using polysaccharides (alginate and carrageenan) with those of a control sample extracted using the conventional method. The results showed that extraction of protein from rice bran using enzymes gave higher protein recovery than alkaline extraction. Extraction using alcalase 2% (v/w) at 50 °C and pH 9.5 gave the highest protein (2.44%) and yield (32.09%) in the extracted solution compared to the other enzymes. Rice bran protein concentrate powder prepared by a precipitation step using alginate (protein in solution : alginate, 1:0.006) exhibited the highest protein (27.55%) and yield (6.62%). Precipitation using alginate was better than acid precipitation. RBPC extracted with alkaline (ALK) or the enzyme alcalase (ALC), then precipitated with alginate (AL) (samples RBP-ALK-AL and RBP-ALC-AL), yielded precipitation rates of 75% and 91.30%, respectively. Therefore, protein precipitation using alginate was selected. The amino acid profiles of the control sample and the sample precipitated with alginate, compared with casein and soy protein isolate, showed that the control sample had the highest content among all samples. The functional property study of RBP showed that the highest nitrogen solubility occurred at pH 8-10. There was no statistically significant difference in emulsion capacity and emulsion stability between the control and the sample precipitated with alginate. However, the control sample showed higher foaming capacity and lower foam stability than the sample precipitated with alginate. The findings demonstrate success in minimizing the chemicals used in the extraction and precipitation steps of rice bran protein concentrate preparation. This research involves the production of a value-added product in which the protein content is doubled (28%) compared to the original amount in rice bran (14%), which could be beneficial as an addition to food products, e.g., a healthy drink with high protein and fiber. In addition, basic knowledge of the functional properties of rice bran protein concentrate was obtained, which can be used to select appropriate applications of this value-added product from rice bran.
Keywords: alginate, carrageenan, rice bran, rice bran protein
Procedia PDF Downloads 295
660 Effect of Maturation on the Characteristics and Physicochemical Properties of Banana and Its Starch
Authors: Chien-Chun Huang, P. W. Yuan
Abstract:
Banana is an important fruit that constitutes a valuable source of energy, vitamins and minerals and is an important food component throughout the world. Fruit ripening and maturity standards vary from country to country depending on the expected market shelf life. During ripening, there are changes in the appearance, texture and chemical composition of the banana. The changes in banana components during ethylene-induced ripening are categorized in terms of nutritive value and commercial utilization. The objectives of this study were to investigate the changes in chemical composition and physicochemical properties of banana during ethylene-induced ripening. Green bananas were harvested and ripened by ethylene gas at low temperature (15 ℃) through seven stages. At each stage, the banana was sliced and freeze-dried for banana flour preparation. The changes in total starch, resistant starch, chemical composition, physicochemical properties, and the activities of amylase, polyphenol oxidase (PPO) and phenylalanine ammonia lyase (PAL) were analyzed at each stage during ripening. The banana starch was isolated and analyzed for gelatinization properties, pasting properties and microscopic appearance at each stage of ripening. The results indicated that the highest total starch and resistant starch contents of green banana were 76.2% and 34.6%, respectively, at the harvest stage. Both total starch and resistant starch contents declined significantly to 25.3% and 8.8%, respectively, at the seventh stage. The soluble sugar content of banana increased from 1.21% at the harvest stage to 37.72% at the seventh stage during ethylene-induced ripening. The swelling power of banana flour decreased with the progress of ripening, but solubility increased. These results are strongly related to the decrease in starch content of banana flour during ethylene-induced ripening. Both water-insoluble and alcohol-insoluble solids of banana flour decreased as ripening progressed. The activities of both PPO and PAL increased, but the total free phenolic content decreased, as ripening progressed. As ripening proceeded, the gelatinization enthalpy of banana starch significantly decreased from 15.31 J/g at the harvest stage to 10.55 J/g at the seventh stage. In the pasting properties of banana starch, peak viscosity and setback increased as ripening progressed. The highest final viscosity, 5701 RVU, of the banana starch slurry was found at the seventh stage. Scanning electron micrographs showed that banana starch granules appeared in round and elongated forms, ranging from 10-50 μm at the harvest stage. As the banana approached ripeness, some parallel striations were observed on the surface of the starch granules, which could be caused by enzymatic reactions during ripening. These results suggest that the highest resistant starch content, found in the green banana, could be considered for potential application in healthy foods. The changes in chemical composition and physicochemical properties of banana could be caused by enzymatic hydrolysis during the ethylene-induced ripening treatment.
Keywords: maturation of banana, appearance, texture, soluble sugars, resistant starch, enzyme activities, physicochemical properties of banana starch
Procedia PDF Downloads 318
659 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images
Authors: Ravija Gunawardana, Banuka Athuraliya
Abstract:
Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, and the model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model’s accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine
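A hedged sketch of the symptom-based branch: the abstract names Random Forest, K-Nearest Neighbor and Support Vector Machine together with ensemble learning, so a soft-voting ensemble over the three is shown below. The data, feature count and labels are synthetic stand-ins for the patient dataset used in the study:

```python
# Soft-voting ensemble over RF, k-NN and SVM on synthetic "symptom" vectors.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for symptom vectors labelled with one of three diagnoses.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(5))),
        ("svm", make_pipeline(StandardScaler(),
                              SVC(kernel="rbf", probability=True))),
    ],
    voting="soft",                     # average the predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```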
Procedia PDF Downloads 154
658 The Four Elements of Zoroastrianism and Sustainable Ecosystems with an Ecological Approach
Authors: Esmat Momeni, Shabnam Basari, Mohammad Beheshtinia
Abstract:
The purpose of this study is to provide a symbolic explanation of the four elements in Zoroastrianism and sustainable ecosystems with an ecological approach. The research method is fundamental, using deductive content analysis. Data collection was done through library and documentary methods, reading books and related articles. The population and sample of the present study are the city of Yazd and the country of Iran; after uncovering the symbolic concepts derived from the theoretical foundations of Zoroastrianism in the four elements of water, air, soil and fire, and their conformity with Iranian architecture under an ecological approach in Yazd, the sustainable ecosystem is explained through the system of nature. The validity and reliability of the results rest on the trustworthiness of the research literature. Research findings show that Yazd was one of the bases of Zoroastrianism in Iran. Many believe that the first person to discuss the elements of nature, and the respect Zoroastrians hold for them, was the Prophet of this religion. Zoroastrians keep the environment clean and pure by paying attention to and respecting these four elements. The water element is a symbol of existence in Zoroastrianism, so the people of Yazd used aqueducts and designed pools in front of their buildings. The soil element is a symbol of the raw material of human creation in the Zoroastrian religion; soil is the most readily available material in the desert areas of Yazd, used as bricks and adobe, creating one of the most magnificent roof coverings, the dome. The wind element represents the invisible force of the soul in Creation in Zoroastrianism; the most important application of wind is in the windcatcher, which is a highly efficient cooling system. The element of fire, always a symbol of purity in Zoroastrianism, is kept in a special place in Yazd’s Atashkadeh (fire temple), where the most important religious prayers are held before the fire. Consequently, indigenous knowledge and attention to indigenous architecture are part of the national capital of each nation, encompassing its beliefs, values, methods, and knowledge. According to studies on the four elements of Zoroastrianism, the link between them is that fire, hot and dry, comes first and sets movement into the stillness of the earth; as the heat and vigor of the fire decrease, cold (wind) emerges, and from cold come humidity and wetness. By examining books and resources on Yazd’s architectural design with an ecological approach to the values of the four elements that have inspired Zoroastrianism, it can be concluded that in order to have environmentally friendly architecture, it is essential to use sustainable architectural principles and to link religious culture and ecology through architecture.
Keywords: ecology, architecture, quadruple elements of air, soil, water, fire, Zoroastrian religion, sustainable ecosystem, Iran, Yazd city
Procedia PDF Downloads 116
657 Dry Reforming of Methane Using Metal Supported and Core Shell Based Catalyst
Authors: Vinu Viswanath, Lawrence Dsouza, Ugo Ravon
Abstract:
Syngas, typically an intermediary gas product, has a wide range of applications in producing various chemical products, such as mixed alcohols, hydrogen, ammonia, Fischer-Tropsch products, methanol, ethanol, aldehydes, etc. There are several technologies available for syngas production. As an alternative to the conventional processes, an attractive route utilizing carbon dioxide and methane in an equimolar ratio to generate syngas with a H2/CO ratio close to one has been developed, termed Dry Reforming of Methane. It also offers the advantage of utilizing the greenhouse gases CO2 and CH4. The dry reforming process is highly endothermic; indeed, ΔG becomes negative only if the temperature is higher than 900 K and, practically, the reaction occurs at 1000-1100 K. At these temperatures, sintering of the metal particles occurs, which deactivates the catalyst. Moreover, the methane is only partially oxidized, and some coke deposition occurs, causing catalyst deactivation. The current research work focused on mitigating the main challenges of the dry reforming process, such as coke deposition and metal sintering at high temperature. To achieve these objectives, we employed three different strategies of catalyst development: 1) use of bulk catalysts such as olivine and pyrochlore type materials; 2) use of metal-doped support materials, like spinel and clay type materials; 3) use of a core-shell model catalyst. In the latter approach, a thin layer (shell) of redox metal oxide is deposited over a MgAl2O4/Al2O3 based support material (core), and an active metal is deposited on the surface of the shell. The shell structure formed is a doped metal oxide that can undergo reduction and oxidation reactions (redox), and the core is an alkaline earth aluminate having a high affinity towards carbon dioxide. In the case of the metal-doped support catalyst, the enhanced redox properties of the doped CeO2 oxide and the CO2 affinity of the alkaline earth aluminates collectively help to overcome coke formation. For all three strategies, a systematic screening of the metals was carried out to optimize the efficiency of the catalyst. To evaluate their performance, activity and stability tests were carried out at temperatures ranging from 650 to 850 °C and operating pressures ranging from 1 to 20 bar. The results indicate that the core-shell model catalyst showed high activity and better stability under atmospheric as well as high-pressure conditions. In this presentation, we will show the results related to these strategies.
Keywords: carbon dioxide, dry reforming, supports, core shell catalyst
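For reference, the stoichiometry behind this thermodynamic picture is the standard dry-reforming reaction plus the methane-cracking side reaction responsible for coking; the enthalpies given are commonly cited literature values, not the authors’ measurements:

```latex
% Dry reforming of methane and the coking side reaction (literature values).
\begin{align}
  \mathrm{CH_4 + CO_2} &\longrightarrow \mathrm{2\,CO + 2\,H_2},
    & \Delta H^{\circ}_{298} &\approx +247\ \mathrm{kJ\,mol^{-1}} \\
  \mathrm{CH_4} &\longrightarrow \mathrm{C_{(s)} + 2\,H_2},
    & \Delta H^{\circ}_{298} &\approx +75\ \mathrm{kJ\,mol^{-1}}
\end{align}
% The strong endothermicity of the first reaction is why \Delta G only turns
% negative above roughly 900 K, and methane cracking (second line) is one
% source of the coke that deactivates the catalyst.
```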
Procedia PDF Downloads 177
656 Cellular Targeting to Dual Gaseous Microenvironments by Polydimethylsiloxane Microchip
Authors: Samineh Barmaki, Ville Jokinen, Esko Kankuri
Abstract:
We report a microfluidic chip that can be used to modify the gaseous microenvironment of a cell culture in ambient atmospheric conditions. The aim of the study is to show the cellular response to nitric oxide (NO) under hypoxic (oxygen < 5%) conditions. Simultaneous targeting of hypoxia and nitric oxide will provide an opportunity for NO-based therapeutics. Studies on cellular responses to lowered oxygen concentration or to gaseous mediators are usually carried out in a specific macro environment, such as hypoxia chambers, or with specific NO donor molecules that may have additional toxic effects. In our study, the chip consists of a microfluidic layer and a cell culture well, separated by a thin gas-permeable polydimethylsiloxane (PDMS) membrane. The main design goal is to separate the oxygen scavenger and NO donor solutions, which are often toxic, from the cell media. Two different types of gas exchangers, titled ‘pool’ and ‘meander’, were tested. We find that the pool design allows us to reach a higher level of oxygen depletion than the meander (24.32 ± 19.82% vs. -3.21 ± 8.81%). Our microchip design can make cell culture simpler and makes it easy to adapt existing cell culture protocols. Our first application is utilizing the chip to create hypoxic conditions in targeted areas of a cell culture. In this study, the oxygen scavenger sodium sulfite generates hypoxia, and its effect on human embryonic kidney cells (HEK-293) is examined. The PDMS membrane was coated with fibronectin before initiating cell cultures, and the cells were grown for 48 h on the chips before initiating the gas control experiments. The hypoxia experiments were performed by pumping O₂-depleted H₂O into the microfluidic channel at a flow rate of 0.5 ml/h. Image-iT® reagent, an oxygen level indicator, was mixed with the HEK-293 cells. A fluorescent signal appears in cells stained with the Image-iT® hypoxia reagent after 6 h of pumping oxygen-depleted H₂O through the microfluidic channel in the pool area. The exposure to different levels of O₂ can be controlled by varying the thickness of the PDMS membrane. Recently, we improved the design of the microfluidic chip so that it can control the microenvironment of two different gases at the same time. The hypoxic response was also improved with the new design. The cells were grown on the thin PDMS membrane for 30 hours, and the oxygen scavenger was pumped into the microfluidic channel at a flow rate of 0.1 ml/h. We also show that pumping sodium nitroprusside (SNP), a nitric oxide donor activated under light, can generate nitric oxide on top of the PDMS membrane. We aim to show the microenvironmental response of HEK-293 cells to both nitric oxide (by pumping SNP) and hypoxia (by pumping oxygen scavenger solution) in separate channels in one microfluidic chip.
Keywords: hypoxia, nitric oxide, microenvironment, microfluidic chip, sodium nitroprusside, SNP
Procedia PDF Downloads 134
655 Process Safety Management Digitalization via SHEQTool based on Occupational Safety and Health Administration and Center for Chemical Process Safety, a Case Study in Petrochemical Companies
Authors: Saeed Nazari, Masoom Nazari, Ali Hejazi, Siamak Sanoobari Ghazi Jahani, Mohammad Dehghani, Javad Vakili
Abstract:
More than ever, digitization is an imperative for businesses to keep their competitive advantages, foster innovation and reduce paperwork. To design and successfully implement digital transformation initiatives within a process safety management system, employees need to be equipped with the right tools, frameworks, and best practices. We developed a unique full-stack application called SHEQTool, which is entirely dynamic, based on our extensive expertise, experience, and client feedback, to support business processes, particularly operational safety management. We used our best knowledge and the scientific methodologies published in CCPS and OSHA guidelines to streamline operations and integrated them into task management within petrochemical companies. We digitalize their main process safety management system elements and sub-elements, such as hazard identification and risk management, training and communication, inspection and audit, critical changes management, contractor management, permit to work, pre-start-up safety review, incident reporting and investigation, emergency response plan, personal protective equipment, occupational health, and action management, in a fully customizable manner with no programming needed from users. We reviewed the feedback from the main actors within a petrochemical plant, which highlights improvements in business performance and productivity as well as the tracking of their functions’ key performance indicators (KPIs), because the tool: 1) saves time, resources, and the costs of paperwork across the business (digitalization); 2) reduces errors and improves performance within the management system by covering most of the daily software needs of the organization, reducing the complexity and associated costs of numerous tools and their required training (one-tool approach); 3) focuses on management systems, integrates functions, and puts them into traceable task management (RASCI and flowcharting); 4) helps the entire enterprise stay resilient to any change in processes, technologies, or assets with minimum cost (organizational resilience); 5) significantly reduces incidents and errors via world-class safety management programs and elements (simplification); 6) gives companies a systematic, traceable, risk-based, process-based, and science-based integrated management system (proper methodologies); 7) helps business processes comply with ISO 9001, ISO 14001, ISO 45001, ISO 31000, best practices, and legal regulations through a PDCA approach (compliance).
Keywords: process, safety, digitalization, management, risk, incident, SHEQTool, OSHA, CCPS
Procedia PDF Downloads 66
654 X-Ray Detector Technology Optimization In CT Imaging
Authors: Aziz Ikhlef
Abstract:
Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the routing reduction. This translates into better image quality in low-signal applications, improving low-dose imaging across a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, significant concerns about the radiation dose received by the patient have been raised in both the medical and regulatory communities. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on X-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the scintillator-based detector’s temporal response has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image chain improvements that have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS) to optimize for crosstalk, noise and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 271
653 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms
Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier
Abstract:
Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute force attacks. Graphical passwords are also highly susceptible to the shoulder-surfing effect. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and reenter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different duration timers, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic the shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms were applied to determine whether the person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare performance in user authentication: namely, Decision Trees, Linear Discriminant Analysis, Naive Bayes Classifier, Support Vector Machines (SVMs) with Gaussian Radial Basis Kernel function, and K-Nearest Neighbor. Gesture-based password features vary from one entry to the next, making it difficult to distinguish between a creator and an intruder for authentication. For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using the five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using the five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with the Gaussian Radial Basis Kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability
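The verification step described here can be sketched as follows: four features per password entry (score, length, speed, size), normalised and fed to an SVM with a Gaussian RBF kernel, which labels the entry genuine or imposter. The arrays below are random stand-ins for the collected session data:

```python
# Four-feature SVM (RBF kernel) authentication sketch on stand-in data.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((600, 4))              # [score, length, speed, size] per entry
y = rng.integers(0, 2, 600)           # 1 = genuine creator, 0 = imposter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = MinMaxScaler().fit(X_tr)     # normalise features before classifying

clf = SVC(kernel="rbf", gamma="scale", C=1.0)   # Gaussian RBF kernel SVM
clf.fit(scaler.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(scaler.transform(X_te))))
```

With real session features in place of the random arrays, the same pipeline corresponds to training Classifiers A, B, and C on the creation data plus each replication timer's data.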
Procedia PDF Downloads 107
652 Rainfall and Flood Forecast Models for Better Flood Relief Plan of the Mae Sot Municipality
Authors: S. Chuenchooklin, S. Taweepong, U. Pangnakorn
Abstract:
This research was conducted in the Mae Sot Watershed, which is located in the Moei River Basin within the Upper Salween River Basin in Tak Province, Thailand. The Mae Sot Municipality is the largest urbanized area in Tak Province and is situated in the midstream of the Mae Sot Watershed. It usually faces flash flooding after heavy rain; poor flood management has been reported there since the economy boomed in recent years. Its catchment can be classified as an ungauged basin, lacking rainfall data, and no stream gauging station has been reported. It was struck by its most severe flood event in 2013, the worst case studied for all the communities in this municipality. Moreover, other problems also face this watershed, such as a shortage of water supply for domestic consumption and agricultural use, deterioration of water quality, and landslides. The research aimed to build capability and strengthen the participation of local community leaders and related agencies in better urban water management, starting with data collection and a demonstration of the appropriate application of short-period rainfall forecasting models for a better flood relief plan and management through the hydrologic modeling system and river analysis system programs. The authors applied global rainfall data via the Integrated Data Viewer (IDV) program from Unidata, with the aim of forecasting rainfall 7-10 days in advance during the rainy season instead of relying on real-time records. The IDV product, which can present rainfall in advance with a time step of 3-6 hours, was introduced to the communities. The result can be used as input to either the hydrologic modeling system (HEC-HMS) or the soil and water assessment tool (SWAT) for synthesizing flood hydrographs and for flood forecasting as well. The authors applied the river analysis system (HEC-RAS) to present flood flow behavior in the reach of the Mae Sot stream through the downtown of Mae Sot City, as flood extents and water surface levels at every cross-sectional profile of the stream. Both HEC-HMS and HEC-RAS were tested against the 2013 event with observed rainfall and inflow-outflow data from the Mae Sot Dam. The HEC-HMS result fitted the observed data at the dam and was applied as the upstream boundary discharge to HEC-RAS in order to simulate flood extents; these were tested in the field, and the results were found satisfactory. The IDV rainfall forecast data were compared to observed data and found to be fair. However, IDV is an appropriate tool to use in an ungauged catchment together with flood hydrograph and river analysis models for efficient future flood relief planning and management.
Keywords: global rainfall, flood forecast, hydrologic modeling system, river analysis system
Procedia PDF Downloads 349
651 Students’ Speech Anxiety in Blended Learning
Authors: Mary Jane B. Suarez
Abstract:
Public speaking anxiety (PSA), also known as speech anxiety, is persistently common in traditional communication classes, especially for students who learn English as a second language. Speech anxiety intensified when communication skills assessments moved to an online or remote mode of learning due to the perils of the COVID-19 virus. Both teachers and students have experienced vast ambiguity about how to find a still-effective way to teach and learn speaking skills amidst the pandemic. Communication skills assessments like public speaking, oral presentations, and student reporting have taken on new meaning using Google Meet, Zoom, and other online platforms. Though using such technologies has paved the way for more creative ways for students to acquire and develop communication skills, the effectiveness of using such assessment tools stands in question. This mixed-method study aimed to determine the factors that affected the public speaking skills of students in a communication class; to probe the assessment gaps in assessing the speaking skills of students attending online classes vis-à-vis the implementation of remote and blended modalities of learning; and to recommend ways to address the public speaking anxieties of students performing a speaking task online and to bridge the assessment gaps based on the outcomes of the study, in order to achieve a smooth segue from online to on-ground instruction, maneuvering towards a much better post-pandemic academic milieu. Using a convergent parallel design, both quantitative and qualitative data were reconciled by probing the public speaking anxiety of students and the potential assessment gaps encountered in an online English communication class under remote and blended learning. There were four phases in applying the convergent parallel design. The first phase was data collection, where both quantitative and qualitative data were collected using document reviews and focus group discussions. The second phase was data analysis, where the quantitative data were treated using statistical testing, particularly frequency, percentage, and mean, by means of the Microsoft Excel application and IBM Statistical Package for Social Sciences (SPSS) version 19, and the qualitative data were examined using thematic analysis. The third phase was the merging of the data analysis results to compare the desired learning competencies with the actual learning competencies of students. Finally, the fourth phase was the interpretation of the merged data, which led to the finding that a significantly high percentage of students experienced public speaking anxiety whenever they delivered speaking tasks online. Assessment gaps were also identified by comparing the desired learning competencies of the formative and alternative assessments implemented with the actual speaking performances of students, showing evidence that students’ public speaking anxiety was not properly identified and addressed.
Keywords: blended learning, communication skills assessment, public speaking anxiety, speech anxiety
Procedia PDF Downloads 102650 Investigating the Algorithm to Maintain a Constant Speed in the Wankel Engine
Authors: Adam Majczak, Michał Bialy, Zbigniew Czyż, Zdzislaw Kaminski
Abstract:
Increasingly stringent emission standards for passenger cars force the search for alternative drives. The share of electric vehicles in new car sales increases every year. However, their performance and, above all, their range cannot yet be successfully compared to those of cars with a traditional internal combustion engine. Battery recharging takes hours, which is hard to accept given the few minutes needed to refill a fuel tank. Therefore, ways to reduce the adverse features of purely electric cars are being sought. One method is the combination of an electric motor as the main source of power with a small internal combustion engine acting as an electricity generator. This type of drive enables an electric vehicle to achieve a radically increased range and low emissions of toxic substances. For several years, leading automotive manufacturers like Mazda and Audi, together with the best companies in the automotive industry, e.g., AVL, have developed electric drive systems capable of recharging themselves while driving, known as range extenders. In such systems, the electricity generator is powered by a Wankel engine that had seemed to pass into history. This lightweight, small engine with a rotating piston and a very low vibration level turned out to be an excellent power source in such applications. Its operation as an energy source for a generator almost entirely eliminates its disadvantages, such as high fuel consumption, high emission of toxic substances, or the short lifetime typical of its traditional application. Operating the engine at a constant rotational speed significantly increases its lifetime, and its small external dimensions allow compact modules that can drive even small urban cars like the Audi A1 or the Mazda 2. The algorithm to maintain a constant speed was investigated on an engine dynamometer with an eddy current brake and the necessary measuring apparatus. The research object was the Aixro XR50 rotary engine with the electronic power supply developed at the Lublin University of Technology. The load torque of the engine was varied during the research by means of the eddy current brake, capable of applying any number of load cycles. The parameters recorded included speed and torque as well as the position of the throttle in the inlet system. Increasing and decreasing the load did not significantly change engine speed, which means that the control algorithm parameters are correctly selected. This work has been financed by the Polish Ministry of Science and Higher Education.Keywords: electric vehicle, power generator, range extender, Wankel engine
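The abstract does not disclose the internal structure of the speed-holding algorithm. Assuming a conventional PID loop acting on throttle position, a simplified simulation of holding 3000 rpm against eddy-brake load steps might look like the sketch below; the inertia, torque map, and controller gains are all invented values.

```python
import numpy as np

# Hypothetical plant: rotor inertia and a linear throttle-to-torque map;
# the real XR50 controller parameters are not disclosed in the abstract.
J, DT = 0.05, 0.001            # inertia (kg*m^2), time step (s)
KP, KI, KD = 0.002, 0.02, 0.0  # assumed PID gains
TARGET = 3000 * 2 * np.pi / 60 # 3000 rpm in rad/s

def engine_torque(throttle):
    # simple linear throttle-to-torque map (max 40 N*m)
    return 40.0 * np.clip(throttle, 0.0, 1.0)

omega, integral, prev_err = TARGET, 0.0, 0.0
for step in range(20000):
    t = step * DT
    load = 15.0 if t < 10 else 25.0            # eddy-brake load step (N*m)
    err = TARGET - omega
    integral += err * DT
    deriv = (err - prev_err) / DT
    throttle = KP * err + KI * integral + KD * deriv
    omega += (engine_torque(throttle) - load) / J * DT
    prev_err = err

print("final speed (rpm):", omega * 60 / (2 * np.pi))
```

With these assumed gains the loop is stable and underdamped; the integral term absorbs the load step so the speed returns to the 3000 rpm setpoint, mirroring the reported behaviour that load changes did not significantly alter engine speed.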
Procedia PDF Downloads 157649 Crop Breeding for Low Input Farming Systems and Appropriate Breeding Strategies
Authors: Baye Berihun Getahun, Mulugeta Atnaf Tiruneh, Richard G. F. Visser
Abstract:
Resource-poor farmers practice low-input farming systems, and yet most breeding programs give little attention to this huge farming system, which serves as a source of food and income for many people in developing countries. The high-input conventional breeding system appears to have failed to adequately meet the needs and requirements of 'difficult' environments operating under this system. Moreover, the resources available for crop production are becoming ever scarcer, the environment is maltreated by the excessive use of agrochemicals, crop productivity has reached a plateau stage, particularly in developed nations, the world population is increasing, and food shortages persist for poor societies. In various parts of the world, genetic gain at the farmers' level remains low, which could be associated with the low adoption of crop varieties developed under high-input systems. Farmers usually use their local varieties and apply minimum inputs as a risk-avoiding and cost-minimizing strategy. This evidence indicates that the conventional high-input plant breeding system has failed to feed the world population, and the world is moving further away from the United Nations' goals of ending hunger, food insecurity, and malnutrition. In this review, we discuss the rationale for breeding programs focused on low-input farming systems and the technical aspects of crop breeding that accommodate future food needs, as well as their significance for developing countries under a scenario of decreasing resources for crop production. To this end, the application of exotic introgression techniques like polyploidization, pan-genomics, comparative genomics, and de novo domestication as pre-breeding techniques is discussed in the review as a means to exploit the untapped genetic diversity of crop wild relatives (CWRs). Desired recombinants developed at the pre-breeding stage are exploited through appropriate breeding approaches such as evolutionary plant breeding (EPB), breeding for rhizosphere-related traits, and participatory plant breeding. Populations advanced through evolutionary breeding, like composite cross populations (CCPs), and the rhizosphere-associated traits breeding approach, which provides opportunities for improving abiotic and biotic soil stress tolerance, nutrient acquisition capacity, and crop-microbe interaction in improved varieties, are reviewed. Overall, we conclude that the low-input farming system is a huge farming system that requires distinctive breeding approaches, and that exotic pre-breeding introgression techniques and appropriate breeding approaches deploying the skills and knowledge of both breeders and farmers are vital to develop heterogeneous landrace populations, which are effective for farmers practicing low-input farming across the world.Keywords: low input farming, evolutionary plant breeding, composite cross population, participatory plant breeding
Procedia PDF Downloads 50648 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording
Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen
Abstract:
It is a challenge to directly monitor cavitation in a pump application during operation because of the lack of visual access needed to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access, which provides the opportunity to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One of the impellers is optimized for a lower NPSH₃% by its blade design, whereas the other is manufactured using a standard casting method. Cavitation is detected by pump performance measurements, vibration measurements and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data is recorded in the axial direction of the impeller using accelerometers sampling at 131 kHz. The vibration frequency-domain data (up to 20 kHz), the time-domain data and the root mean square values are analyzed. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not yet affected. It is also observed that below a certain NPSHA value, cavitation starts in the inlet bend of the pump; above this value, cavitation occurs exclusively on the impeller blades. The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but its head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head drop curve of the optimized impeller were observed, in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady when using the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations, and the simulation results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view of the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings and simulation results. The data indicate that a criterion for cavitation detection could be derived from the vibration time-domain measurements, which requires further investigation. Usually, spectral data is used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications.Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration
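As a sketch of the signal-processing chain described above (windowed RMS values plus a spectrum truncated at 20 kHz, with abrupt peaks flagged as collapse events), the following Python fragment operates on a synthetic signal. The mean-plus-three-sigma thresholding rule is an assumption, not the authors' detection criterion.

```python
import numpy as np

FS = 131_000  # accelerometer sample rate from the study (131 kHz)

def windowed_rms(signal, win=1024):
    """Root-mean-square value per non-overlapping window."""
    n = len(signal) // win
    chunks = signal[: n * win].reshape(n, win)
    return np.sqrt(np.mean(chunks ** 2, axis=1))

def spectrum(signal, fmax=20_000):
    """One-sided amplitude spectrum, truncated at 20 kHz as in the study."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    amps = np.abs(np.fft.rfft(signal)) / len(signal)
    keep = freqs <= fmax
    return freqs[keep], amps[keep]

# Synthetic stand-in signal: broadband noise plus sparse sharp transients,
# mimicking abrupt peaks from cavitation-cloud collapses.
rng = np.random.default_rng(0)
sig = 0.1 * rng.standard_normal(FS)        # one second of data
sig[rng.integers(0, FS, 20)] += 5.0        # injected collapse-like spikes

rms = windowed_rms(sig)
threshold = rms.mean() + 3 * rms.std()     # assumed detection rule
print("windows flagged as cavitation events:", np.flatnonzero(rms > threshold))
```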
Procedia PDF Downloads 179647 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory's efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels, or to activate a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system is used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This allows pathologists to clearly see where there are issues and bottlenecks in the process. Graphs are also used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error. Clicking on potentially late samples displays more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This allows any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. 'Bots' would be used to control the flow of specimens through each step of the process. Like existing software agent technologies, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at each step of the process, for example validating test results.Keywords: laboratory-process, optimization, pathology, computer simulation, workflow
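The tool itself is planned as a JavaScript single-page application; purely to illustrate the traffic-light logic, here is a Python sketch with invented thresholds (the paper defines the three colour levels but not the numeric cut-offs).

```python
# Hypothetical thresholds; the paper specifies only the three-level
# green/orange/red scheme, not the cut-off values behind it.
def stage_colour(throughput_ratio):
    """Map a stage's current flow (actual / expected specimens per hour)
    to the simulator's traffic-light code."""
    if throughput_ratio >= 0.8:
        return "green"   # normal flow
    if throughput_ratio >= 0.5:
        return "orange"  # slow flow
    return "red"         # critical flow

stages = {"reception": 0.95, "testing": 0.62, "validation": 0.31}
for stage, ratio in stages.items():
    print(f"{stage}: {stage_colour(ratio)}")
```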
Procedia PDF Downloads 286646 Overcoming Reading Barriers in an Inclusive Mathematics Classroom with Linguistic and Visual Support
Authors: A. Noll, J. Roth, M. Scholz
Abstract:
The importance of written language in a democratic society is uncontroversial. Students with physical, learning, cognitive or developmental disabilities often have difficulties in understanding information that is presented in written language only. These students face obstacles in diverse domains. In order to reduce such barriers in educational as well as out-of-school areas, access to written information must be facilitated. Readability can be enhanced by linguistic simplifications such as the application of easy-to-read language. Easy-to-read language is meant to help people with disabilities participate socially and politically in society. Its guidelines state, for example, that only short, simple words should be used and that complex sentences should be avoided. So far, these guidelines have not been empirically validated. Another way to reduce reading barriers is the use of visual support, for example, symbols. A symbol conveys, in contrast to a photo, a single idea or concept. Little empirical data exists about the use of symbols to foster the readability of texts. Nevertheless, a positive influence can be assumed, e.g., because of the multimedia principle, which indicates that people learn better from words and pictures than from words alone. A qualitative interview and eye-tracking study conducted by the authors suggests that, besides the illustration of single words, the visualization of complete sentences may be helpful. Thus, the effect of photos illustrating the content of complete sentences is also investigated in this study. This leads to the main research question: Does the use of easy-to-read language and/or enriching text with symbols or photos facilitate pupils' comprehension of learning tasks? The sample consisted of students with learning difficulties (N = 144) and students without SEN (N = 159). The students worked individually on tasks dealing with the introduction of fractions. While experimental group 1 received a linguistically simplified version of the tasks, experimental group 2 worked with a variant that was linguistically simplified and in which, furthermore, the keywords of the tasks were visualized by symbols. Experimental group 3 worked on exercises simplified by easy-to-read language in which the content of whole sentences was illustrated by photos. Experimental group 4 received a non-simplified version. The participants' reading ability and IQ were assessed beforehand to build four comparable groups. There is a significant effect of the different settings on the students' results, F(3,140) = 2.932, p = 0.036. A post-hoc analysis with multiple comparisons shows that this significance results from the difference between experimental groups 3 and 4. The students in the easy-to-read language plus photos group worked on the exercises significantly more successfully than the students who worked with no simplifications. Further results, referring among others to the influence of the students' reading ability, will be presented at ICERI 2018.Keywords: inclusive education, mathematics education, easy-to-read language, photos, symbols, special educational needs
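For readers who want to see the reported test reproduced, the analysis is a one-way ANOVA over the four experimental groups. The sketch below shows the computation with SciPy on invented scores shaped to the study's group structure; only the reported statistic, F(3,140) = 2.932, p = 0.036, comes from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical task scores for the four settings; four groups of 36 give
# the degrees of freedom (3, 140) reported in the study.
easy_read        = rng.normal(60, 12, 36)  # easy-to-read language only
easy_plus_symbol = rng.normal(62, 12, 36)  # easy-to-read + symbols
easy_plus_photo  = rng.normal(67, 12, 36)  # easy-to-read + photos
unsimplified     = rng.normal(58, 12, 36)  # no simplification

f_val, p_val = stats.f_oneway(easy_read, easy_plus_symbol,
                              easy_plus_photo, unsimplified)
print(f"F(3,140) = {f_val:.3f}, p = {p_val:.3f}")
```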
Procedia PDF Downloads 154645 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Secondly, the exploration extends to filter anisotropy to address its impact on SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
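A direct deconvolution model recovers the unfiltered field by inverting the filter's transfer function. The one-dimensional Python sketch below applies this to a Gaussian filter, whose transfer function is exp(-k²Δ²/24); the synthetic field, the FGR of 2, and the cap on the spectral inverse are illustrative choices, not the paper's configuration.

```python
import numpy as np

N, L = 256, 2 * np.pi
dx = L / N
delta = 2 * dx                       # filter width; FGR = delta / dx = 2
k = np.fft.fftfreq(N, d=dx) * 2 * np.pi

# Synthetic stand-in field with a decaying spectrum and random phases
rng = np.random.default_rng(2)
u_hat = (np.abs(k) + 1.0) ** (-5.0 / 6.0) * np.exp(2j * np.pi * rng.random(N))
u = np.real(np.fft.ifft(u_hat))

G = np.exp(-(k ** 2) * delta ** 2 / 24.0)   # Gaussian filter transfer function
u_filt = np.real(np.fft.ifft(G * np.fft.fft(u)))

# Direct deconvolution: exact spectral inverse, capped so that round-off
# noise at poorly resolved wavenumbers is not amplified without bound.
inv = np.where(G > 1e-6, 1.0 / np.maximum(G, 1e-6), 0.0)
u_rec = np.real(np.fft.ifft(inv * np.fft.fft(u_filt)))

corr = np.corrcoef(u, u_rec)[0, 1]
print(f"correlation between original and deconvolved field: {corr:.4f}")
```

With an invertible filter and sufficient FGR the correlation approaches one, which is consistent with the near-100% reconstruction the abstract reports at an FGR of 4.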
Procedia PDF Downloads 75644 Embracing the Uniqueness and Potential of Each Child: Moving Theory to Practice
Authors: Joy Chadwick
Abstract:
This Scholarship of Teaching and Learning (SoTL) research focused on the experiences of teacher candidates enrolled in an inclusive education methods course within a four-year direct-entry Bachelor of Education program. The placement of this course within the final fourteen-week practicum semester is designed to facilitate deeper theory-practice connections between effective inclusive pedagogical knowledge and the realities of classroom teaching. The course focuses on supporting teacher candidates to understand that effective instruction within an inclusive classroom context must be intentional, responsive, and relational. Diversity is situated not as exceptional but rather as expected. This interpretive qualitative study involved the analysis of twenty-nine teacher candidate reflective journals and six individual semi-structured interviews with teacher candidates. The journal entries were completed at the start and at the end of the semester with the intent of having teacher candidates reflect on their beliefs about what it means to be an effective inclusive educator and how the course and practicum experiences impacted their understanding of, and approaches to, teaching in inclusive classrooms. The semi-structured interviews provided further depth and context to the journal data. The journals and interview transcripts were coded and themed using NVivo software. The findings suggest that instructional frameworks such as universal design for learning (UDL), differentiated instruction (DI), response to intervention (RTI), social emotional learning (SEL), and self-regulation supported teacher candidates' abilities to meet the needs of their students more effectively. Course content that focused on specific exceptionalities also supported teacher candidates to be proactive rather than reactive when responding to student learning challenges. Teacher candidates also articulated the importance of reframing their perspective about students in challenging moments and that seeing the individual worth of each child was integral to their approach to teaching. A persistent question for teacher educators is what pedagogical knowledge and understanding is most relevant in supporting future teachers to be effective at planning for and embracing the diversity of student needs within classrooms today. This research directs us to consider the critical importance of addressing the personal attributes and mindsets of teacher candidates regarding children, as well as instructional frameworks, when designing coursework. Further, the alignment of an inclusive education course with a teaching practicum allows for an iterative approach to learning. The practical application of course concepts while teaching in a practicum allows for a deeper understanding of instructional frameworks, thus enhancing the confidence of teacher candidates. The research findings have implications for teacher education programs as connected to inclusive education methods courses, practicum experiences, and overall teacher education program design.Keywords: inclusion, inclusive education, pre-service teacher education, practicum experiences, teacher education
Procedia PDF Downloads 68643 Physical Contact Modulation of Macrophage-Mediated Anti-Inflammatory Response in Osteoimmune Microenvironment by Pollen-Like Nanoparticles
Authors: Qing Zhang, Janak L. Pathak, Macro N. Helder, Richard T. Jaspers, Yin Xiao
Abstract:
Introduction: Nanomaterial-based bone regeneration is greatly influenced by the immune microenvironment. Tissue-engineered nanomaterials mediate the inflammatory response of macrophages to regulate bone regeneration. Silica nanoparticles have been widely used in tissue engineering-related preclinical studies. However, the effect of the topological features of silica nanoparticle surfaces on the immune response of macrophages remains unknown. Purposes: The aims of this research are to compare the influences of normal and pollen-like silica nano-surface topographies on macrophage immune responses and to gain insight into their potential regulatory mechanisms. Methods: Macrophages (RAW 264.7 cells) were exposed to mesoporous silica nanoparticles with normal morphology (MSNs) and pollen-like morphology (PMSNs). RNA-seq, RT-qPCR, and LSCM were used to assess changes in the expression levels of immune response-related genes and proteins. SEM and TEM were performed to evaluate the contact with and adherence of the silica nanoparticles to macrophages. For the assessment of the immunomodulation-mediated osteogenic potential, BMSCs were cultured with conditioned medium (CM) from LPS pre-stimulated macrophage cultures treated with MSNs or PMSNs. The osteoimmunomodulatory potential of MSNs and PMSNs in vivo was tested in a mouse cranial bone osteolysis model. Results: The results of the RNA-seq, RT-qPCR, and LSCM assays showed that PMSNs inhibited the expression of pro-inflammatory genes and proteins in macrophages. SEM images showed distinct macrophage membrane surface binding patterns for MSNs and PMSNs: MSNs were more evenly dispersed across the macrophage cell membrane, while PMSNs were aggregated. The PMSN-induced macrophage anti-inflammatory response was associated with upregulation of the cell surface receptor CD28 and inhibition of ERK phosphorylation. TEM images showed that both MSNs and PMSNs could be phagocytosed by macrophages, and inhibiting nanoparticle phagocytosis did not affect the expression of anti-inflammatory genes and proteins. Moreover, conditioned medium from PMSN-treated macrophages enhanced BMP-2 expression and the osteogenic differentiation of mBMSCs. Similarly, PMSNs prevented LPS-induced bone resorption via downregulation of the inflammatory reaction. Conclusions: PMSNs can promote bone regeneration by modulating osteoimmunological processes through surface topography. The study offers insights into how surface physical contact cues can modulate osteoimmunology and provides a basis for the application of nanoparticles with pollen-like morphology for immunomodulation in bone tissue engineering and regeneration.Keywords: physical contact, osteoimmunology, macrophages, silica nanoparticles, surface morphology, membrane receptor, osteogenesis, inflammation
Procedia PDF Downloads 61642 Hybrid Solutions in Physicochemical Processes for the Removal of Turbidity in Andean Reservoirs
Authors: María Cárdenas Gaudry, Gonzalo Ramces Fano Miranda
Abstract:
Sediment removal is very important in the purification of water, not only for reasons of visual perception but also because of its association with odor and taste problems. The Cuchoquesera reservoir, located in the Andean region of Ayacucho (Peru) at an altitude of 3,740 meters above sea level, visually presents suspended particles and organic impurities, indicating water of dubious quality that cannot be assumed suitable for direct human consumption. In order to quantify the degree of impurities, water quality monitoring was carried out from February to August 2018, during which four sampling stations were established in the reservoir. The parameters measured were electrical conductivity, total dissolved solids, pH, color, turbidity, and sludge volume. The studied parameters exceed the permissible limits, except for electrical conductivity (190 μS/cm) and total dissolved solids (255 mg/L). In this investigation, the best combination and the optimal doses of reagents were determined for removing sediments from the waters of the Cuchoquesera reservoir through the physicochemical process of coagulation-flocculation. In order to improve this process during the rainy season, six combinations of reagents were evaluated, made up of three coagulants (ferric chloride, ferrous sulfate, and aluminum sulfate) and two natural flocculants: prickly pear powder (Opuntia ficus-indica) and tara gum (Caesalpinia spinosa). For each combination of reagents, jar tests were developed following the central composite experimental design (CCED), where the design factors were the doses of coagulant and flocculant and the initial turbidity. The results of the jar tests were fitted to mathematical models, showing that, to treat water from the Cuchoquesera reservoir with a turbidity of 150 NTU and a color of 137 U Pt-Co, 27.9 mg/L of the coagulant aluminum sulfate with 3 mg/L of the natural flocculant tara gum is required to produce purified water with a turbidity of 1.7 NTU and an apparent color of 3.2 U Pt-Co. The estimated cost of this dose of coagulant and flocculant is 0.22 USD/m³. This shows how 'grey-green' technology combinations can serve as nature-based solutions in water treatment, in this case to achieve potability, making treatment more sustainable, especially economically, when the green technology is available at the application site of the nature-based hybrid solution. This research demonstrates the compatibility of natural coagulants/flocculants with other treatment technologies in integrated/hybrid treatment processes, such as the possibility of hybridizing natural coagulants with other types of coagulants.Keywords: prickly pear powder, tara gum, nature-based solutions, aluminum sulfate, jar test, turbidity, coagulation, flocculation
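The abstract's model-fitting step can be illustrated with a quadratic response surface fitted by ordinary least squares, as is typical for central composite designs. In the sketch below, every jar-test data point is invented; only the evaluated dose pair (27.9 mg/L alum, 3 mg/L tara gum) comes from the study.

```python
import numpy as np

# Hypothetical jar-test results: coagulant dose (mg/L), flocculant dose
# (mg/L), and residual turbidity (NTU) at an initial turbidity of 150 NTU.
coag = np.array([15, 15, 40, 40, 27.5, 27.5, 27.5, 10, 45])
floc = np.array([1, 5, 1, 5, 3, 1, 5, 3, 3])
turb = np.array([12.0, 9.5, 6.0, 4.8, 1.7, 5.2, 3.9, 14.0, 5.5])

# Full quadratic response surface: b0 + b1*c + b2*f + b3*c*f + b4*c^2 + b5*f^2
X = np.column_stack([np.ones_like(coag), coag, floc,
                     coag * floc, coag ** 2, floc ** 2])
beta, *_ = np.linalg.lstsq(X, turb, rcond=None)

def predict(c, f):
    """Predicted residual turbidity (NTU) at a given dose pair."""
    return np.array([1, c, f, c * f, c ** 2, f ** 2]) @ beta

print("predicted turbidity at 27.9 mg/L alum + 3 mg/L tara gum:",
      round(float(predict(27.9, 3.0)), 2), "NTU")
```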
Procedia PDF Downloads 108641 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model
Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle
Abstract:
In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. It can be done by numerical models or experimental measurements, but the numerical approach is useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness in predicting inhalation exposure in many situations. However, since the WMR model is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results conducted in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer at two air change rates. The well-mixed condition and the chamber air changes per hour (ACH) were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period). Then generation stopped, and concentration measurements continued until reaching the background concentration (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0, and the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model. However, there is still a difference between the actual and predicted values. In the emission period, the modified WMR results closely follow the experimental data. However, the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainty related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rate given by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, will affect the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model
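The modified balance underlying the model adds a first-order deposition loss to the standard WMR equation: V dC/dt = G - QC - k_dep·V·C. A minimal sketch of its analytic solution for the emission and decay periods follows, using the chamber volume and one ACH from the study but an assumed emission rate and deposition coefficient.

```python
import numpy as np

# Chamber volume and ACH are taken from the study; the emission rate and
# the first-order deposition loss rate are illustrative assumptions.
V = 0.512                 # chamber volume (m^3)
ACH = 3.0                 # air changes per hour (one of the two tested)
Q = ACH * V               # ventilation flow (m^3/h)
K_DEP = 1.2               # assumed deposition loss rate (1/h)
G = 5.0e6                 # assumed particle emission rate (particles/h)

def wmr_concentration(t_h, c0=0.0, emitting=True):
    """Analytic solution of the modified well-mixed room balance
    V dC/dt = G - Q*C - k_dep*V*C for the emission or decay period."""
    loss = Q / V + K_DEP                        # total loss rate (1/h)
    c_ss = G / (V * loss) if emitting else 0.0  # steady-state concentration
    return c_ss + (c0 - c_ss) * np.exp(-loss * t_h)

t = np.linspace(0, 1, 5)                        # first hour of emission
print(np.round(wmr_concentration(t), 0))        # particles per m^3, rising
c_end = wmr_concentration(1.0)
print(np.round(wmr_concentration(t, c0=c_end, emitting=False), 0))  # decay
```

Setting K_DEP to zero recovers the classic WMR model, which is why the classic model overpredicts concentrations during the decay period when deposition is significant.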
Procedia PDF Downloads 103640 Application of Industrial Ecology to the INSPIRA Zone: Territory Planification and New Activities
Authors: Mary Hanhoun, Jilla Bamarni, Anne-Sophie Bougard
Abstract:
INSPIR'ECO is an 18-month research and innovation project that aims to specify and develop a tool offering new services for industrials and territorial planners/managers based on industrial ecology principles. The project is carried out on the territory of Salaise-Sablons, and the services are designed to be deployable on other territories. The Salaise-Sablons area is located at the boundary of 5 departments on a major European economic axis with multimodal traffic (river, rail and road). The perimeter of 330 ha includes 90 hectares occupied by 20 companies, with a total of 900 jobs, and represents a significant potential development basin. The project involves five multi-disciplinary partners (Syndicat Mixte INSPIRA, ENGIE, IDEEL, IDEAs Laboratory and TREDI). The INSPIR'ECO project is based on the principle that local stakeholders need services to pool and share their activities, equipment, purchases, and materials. These services aim to: 1. initiate and promote exchanges between existing companies and 2. identify synergies between pre-existing industries and future companies that could be implanted in INSPIRA. These eco-industrial synergies can be related to: the recovery/exchange of industrial flows (industrial wastewater, waste, by-products, etc.); the pooling of business services (collective waste management, stormwater collection and reuse, transport, etc.); the sharing of equipment (boiler, steam production, wastewater treatment unit, etc.) or resources (splitting job costs, etc.); and the creation of new activities (interface activities necessary for by-product recovery, development of products or services from a newly identified resource, etc.). These services are based on an IT tool used by the interested local stakeholders that is intended to support their decisions. Thus, this IT tool: includes an economic and environmental assessment of each implantation or pooling/sharing scenario for existing or future industries; is meant for industrial and territorial managers/planners; and is designed to be used for each new industrial project. The specification of the IT tool is carried out through an agile process throughout the INSPIR'ECO project, fed with users' expectations, gathered in workshop sessions where mock-up interfaces are displayed, and with data availability, based on a local and industrial data inventory. These inputs allow the tool to be specified not only with technical and methodological constraints (notably those of the economic and environmental assessments) but also with data availability and users' expectations. A review of innovative resource management initiatives in port areas was carried out at the beginning of the project to feed the service design step.Keywords: development opportunities, INSPIR'ECO, INSPIRA, industrial ecology, planification, synergy identification
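At its core, synergy identification matches one site's output flows to another site's input needs. The toy Python sketch below shows that matching logic; the companies and flows are invented, whereas the real tool draws on the project's territorial data inventory.

```python
# Toy sketch of the synergy-identification step: match one company's
# by-product outputs to another's input needs. All company data below is
# invented for illustration only.
companies = {
    "A": {"outputs": {"steam", "industrial wastewater"}, "inputs": {"oxygen"}},
    "B": {"outputs": {"oxygen"}, "inputs": {"steam"}},
    "C": {"outputs": set(), "inputs": {"industrial wastewater"}},
}

def find_synergies(firms):
    """Yield (supplier, consumer, flow) triples where an output of one
    site can feed an input demand of another."""
    for src, s in firms.items():
        for dst, d in firms.items():
            if src == dst:
                continue
            for flow in s["outputs"] & d["inputs"]:
                yield src, dst, flow

for src, dst, flow in find_synergies(companies):
    print(f"{src} -> {dst}: {flow}")
```

In the actual tool, each candidate match would then be scored by the economic and environmental assessment described above.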
Procedia PDF Downloads 163639 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses
Authors: André Jesus, Yanjie Zhu, Irwanda Laory
Abstract:
Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. Namely, the widespread application of numerical models (model-based approaches) is accompanied by a widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (surrogate models). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (modular Bayesian approach). The proposed methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, this formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. A comparison of its performance across responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve identifiability. Uncertainties due to parametric variability, observation error, residual variability, code variability and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than that of Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process
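As a compressed illustration of the modular approach (estimate the model parameter first, then fit a Gaussian-process discrepancy function to the residuals), the sketch below uses scikit-learn on a stand-in 'FEM' response. In the actual four-stage method the simulator itself is also emulated by a Gaussian process, which is omitted here; every numeric value is invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

def fem_model(x, theta):
    """Stand-in for the FEM response (e.g. strain at location x) with
    support stiffness theta; the bridge model itself is not reproduced."""
    return np.sin(3 * x) / theta

x_obs = np.linspace(0, 1, 15)[:, None]
truth = fem_model(x_obs.ravel(), 2.0) + 0.05 * x_obs.ravel()  # systematic bias
y_obs = truth + rng.normal(0, 0.01, x_obs.shape[0])           # observation error

# Stages 1-2 (collapsed here): estimate theta by a coarse grid search.
thetas = np.linspace(1.0, 4.0, 61)
sse = [np.sum((y_obs - fem_model(x_obs.ravel(), t)) ** 2) for t in thetas]
theta_hat = thetas[int(np.argmin(sse))]

# Stage 3: Gaussian-process discrepancy function on the residuals,
# absorbing the systematic (model-form) uncertainty.
resid = y_obs - fem_model(x_obs.ravel(), theta_hat)
gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-4))
gp.fit(x_obs, resid)

delta, delta_sd = gp.predict(x_obs, return_std=True)
print("estimated support stiffness:", round(float(theta_hat), 2))
print("mean |discrepancy|:", round(float(np.abs(delta).mean()), 4))
```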
Procedia PDF Downloads 326