Search results for: BIM application
2438 Development of Medical Intelligent Process Model Using Ontology Based Technique
Authors: Emmanuel Chibuogu Asogwa, Tochukwu Sunday Belonwu
Abstract:
An urgent demand for creative solutions has been created by the rapid expansion of medical knowledge, the complexity of patient care, and the requirement for more precise decision-making. The creation of a Medical Intelligent Process Model (MIPM) utilizing ontology-based techniques appears to be a promising way to overcome this obstacle and unleash the full potential of healthcare systems. The development of the MIPM using ontology-based techniques is motivated by the lack of quick access to relevant medical information and of advanced tools for treatment planning and clinical decision-making, which ontology-based techniques can provide. The aim of this work is to develop a structured and knowledge-driven framework that leverages ontology, a formal representation of domain knowledge, to enhance various aspects of healthcare. The Object-Oriented Analysis and Design Methodology (OOADM) was adopted in the design of the system, as we desired to build a usable and evolvable application. For effective implementation of this work, we used the following materials, methods, and tools: the medical dataset used to test our model was obtained from Kaggle, and the ontology-based technique was used together with a confusion matrix, MySQL, Python, Hypertext Markup Language (HTML), Hypertext Preprocessor (PHP), Cascading Style Sheets (CSS), JavaScript, Dreamweaver, and Fireworks. According to test results on the new system using the confusion matrix, both the accuracy and the overall effectiveness of the medical intelligent process improved significantly, by 20%, compared to the previous system. Therefore, the model is recommended for use by healthcare professionals.
Keywords: ontology-based, model, database, OOADM, healthcare
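The abstract reports accuracy measured with a confusion matrix but does not show the computation. Below is a minimal, generic sketch of how accuracy and per-class precision/recall can be read off a confusion matrix; the 3x3 matrix values are invented for illustration and are not the authors' data.

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = true class, columns = predicted class.
cm = np.array([[50,  4,  1],
               [ 6, 42,  2],
               [ 2,  3, 40]])

accuracy = np.trace(cm) / cm.sum()           # correct predictions / all predictions
recall = np.diag(cm) / cm.sum(axis=1)        # per-class sensitivity
precision = np.diag(cm) / cm.sum(axis=0)     # per-class positive predictive value

print(f"accuracy = {accuracy:.3f}")
print("recall    =", np.round(recall, 3))
print("precision =", np.round(precision, 3))
```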
Procedia PDF Downloads 78
2437 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options
Authors: Wajih Abbassi, Zouhaier Ben Khelifa
Abstract:
The present research points towards the empirical validation of three option valuation models: the ad-hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis has been conducted on a sample of 26,974 options written on three indexes, the S&P 500, Nasdaq 100 and the Russell 2000, that were negotiated during the year 2007, just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the trust-region-reflective algorithm to estimate the structural parameters of these models from a cross-section of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indices. Indeed, the double-exponential distribution covers three interesting properties: the leptokurtic feature, the memoryless property, and the psychological aspect of market participants. Numerous empirical studies have shown that markets tend to overreact and underreact to good and bad news, respectively. Despite these advantages, there are not many empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to have used the technique of nonlinear curve-fitting through the trust-region-reflective algorithm and cross-section options to estimate the structural parameters of the Kou jump-diffusion model.
Keywords: jump-diffusion process, Kou model, leptokurtic feature, trust-region-reflective algorithm, US index options
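The estimation step described here, nonlinear least-squares calibration of structural parameters to a cross-section of option prices with a trust-region-reflective algorithm, can be sketched with SciPy as below. This is not the authors' code: a Black-Scholes call is used only as a stand-in pricing routine (a Kou jump-diffusion pricer would replace it), and the quotes, bounds, and starting value are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

def bs_call(spot, strike, maturity, rate, sigma):
    """Black-Scholes call, used here only as a placeholder pricing model."""
    d1 = (np.log(spot / strike) + (rate + 0.5 * sigma**2) * maturity) / (sigma * np.sqrt(maturity))
    d2 = d1 - sigma * np.sqrt(maturity)
    return spot * norm.cdf(d1) - strike * np.exp(-rate * maturity) * norm.cdf(d2)

def residuals(params, spot, strikes, maturities, rate, market_prices):
    (sigma,) = params
    return bs_call(spot, strikes, maturities, rate, sigma) - market_prices

# Toy cross-section of option quotes (illustrative numbers, not the study's data).
spot, rate = 100.0, 0.03
strikes = np.array([90.0, 100.0, 110.0])
maturities = np.array([0.25, 0.5, 1.0])
market_prices = bs_call(spot, strikes, maturities, rate, 0.22) + 0.05

# method='trf' is SciPy's trust-region-reflective algorithm for bounded least squares.
fit = least_squares(residuals, x0=[0.4], bounds=([1e-4], [3.0]), method='trf',
                    args=(spot, strikes, maturities, rate, market_prices))
print(fit.x)  # calibrated parameter(s)
```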
Procedia PDF Downloads 429
2436 Facile Fabrication of TiO₂NT/Fe₂O₃@Ag₂CO₃ Nanocomposite and Its Highly Efficient Visible Light Photocatalytic and Antibacterial Activity
Authors: Amal A. Al-Kahlawy, Heba H. El-Maghrabi
Abstract:
Due to the increasing need for environmental protection and energy, new materials are under extensive investigation. Among others, TiO2 nanotube (TNT) nanocomposites with iron oxide and silver carbonate are promising alternatives as high-efficiency visible-light photocatalysts due to their unique properties and superior charge transport properties. Our efforts in this domain aim at the construction of a novel nanocomposite of TiO2NT/Fe2O3@Ag2CO3. The structure, surface morphology, chemical composition and optical properties were characterized by X-ray diffraction (XRD), Raman spectroscopy, Fourier-transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), energy dispersive X-ray spectrometry (EDS), transmission electron microscopy (TEM), selected area electron diffraction (SAED) and UV–vis diffuse reflectance spectroscopy (DRS). XRD results confirm the interaction of TiO2-NT with iron oxide. This novel nanocomposite shows remarkably enhanced performance for the photodegradation of phenol compounds. The experimental data show a promising photocatalytic activity. In particular, a maximum value of 450 mg/g was removed within 60 min under solar light irradiation, with a degradation efficiency of 99.5%. The high photocatalytic activity of the nanocomposite is found to be related to the increased adsorption toward chemical species, enhanced light absorption, and efficient charge separation and transfer. Finally, the designed TiO2NT/Fe2O3@Ag2CO3 nanocomposite has a great degree of sustainability and could have a potential application in the industrial treatment of wastewater containing toxic organic materials.
Keywords: nanocomposite, photocatalyst, solar energy, titanium dioxide nanotubes
Procedia PDF Downloads 247
2435 Natural User Interface Adapter: Enabling Natural User Interface for Non-Natural User Interface Applications
Authors: Vijay Kumar Kolagani, Yingcai Xiao
Abstract:
Adoption of Natural User Interfaces (NUI) has been slow and limited. NUI devices like Microsoft’s Kinect and Ultraleap’s Leap Motion can only interact with a handful of applications that were specifically designed and implemented for them. A NUI device simply cannot be used to directly control the millions of applications that are not designed to take NUI input. This is similar to the situation during the adoption of color TV. In the early days of color TV, the broadcasting format was RGB, which was not viewable on black-and-white TVs. TV broadcasters were reluctant to produce color programs due to limited viewership, and TV viewers were reluctant to buy color TVs because there were limited programs to watch. Color TV’s breakthrough moment came after the adoption of the NTSC standard, which allowed color broadcasts to be compatible with the millions of existing black-and-white TVs. This research presents a framework to use NUI devices to control existing non-NUI applications without reprogramming them. The methodology is to create an adapter to convert input from NUI devices into input compatible with that generated by CLI (Command Line Input) and GUI (Graphical User Interface) devices. The CLI/GUI-compatible input is then sent to the active application through the operating system, just like any input from a CLI/GUI device, to control the non-NUI program that the user is operating. A sample adapter has been created to convert input from Kinect to keyboard strokes, so one can use the input from Kinect to control any application that takes keyboard input, such as Microsoft’s PowerPoint. When users use the adapter to control their PowerPoint presentations, they free themselves from standing behind a computer to use its keyboard and can roam around in front of the audience, using hand gestures to control the PowerPoint. It is hoped that such adapters can accelerate the adoption of NUI devices.
Keywords: command line input, graphical user interface, human computer interaction, natural user interface, NUI adapter
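A minimal sketch of the adapter idea, translating a recognized gesture into a synthetic keystroke that the operating system delivers to the focused application, is shown below. Gesture recognition from the Kinect stream is assumed and represented by the hypothetical next_gesture() generator; keystroke injection uses the third-party pyautogui package as one possible mechanism, not necessarily the one used by the authors.

```python
import pyautogui  # third-party package; injects keystrokes through the OS

# Assumed mapping from recognized gestures to PowerPoint-style navigation keys.
GESTURE_TO_KEY = {
    "swipe_right": "right",    # next slide
    "swipe_left": "left",      # previous slide
    "raise_both_hands": "f5",  # start slideshow
    "push_forward": "esc",     # exit slideshow
}

def next_gesture():
    """Hypothetical placeholder for a Kinect gesture recognizer.
    In a real adapter this would consume skeleton frames from the Kinect SDK."""
    yield from ["raise_both_hands", "swipe_right", "swipe_right", "push_forward"]

for gesture in next_gesture():
    key = GESTURE_TO_KEY.get(gesture)
    if key is not None:
        pyautogui.press(key)  # delivered to whichever application currently has focus
```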
Procedia PDF Downloads 14
2434 Application of Fuzzy TOPSIS in Evaluating Green Transportation Options for Dhaka Megacity
Authors: Md. Moniruzzaman, Thirayoot Limanond
Abstract:
As its most visible indicator, the transport system of a city points out how developed the city is. Dhaka megacity holds a mixed composition of motorized and non-motorized modes of transport, and the number of vehicles is escalating over time. This obviously poses associated environmental costs, such as air pollution and noise, which degrade the quality of life in the city. Consequently, sustainable transport, and more importantly green transport from an environmental point of view, has become a prime choice for transport professionals in order to cope with the crisis. Currently, the city authority is planning to execute sustainable transport systems that could serve the pressing demand of the present and meet future needs effectively. This study focuses on the selection and evaluation of green transportation systems among potential alternatives on a priority basis. In this paper, Fuzzy TOPSIS, a multi-criteria decision method, is presented to find the most prioritized alternative. In the first step, twenty-one individual specific criteria for sustainability assessment are selected. In the following step, experts provide linguistic ratings of the potential alternatives with respect to the selected criteria. The approach is used to generate aggregate scores for sustainability assessment and selection of the best alternative. In the third step, a sensitivity analysis is performed to understand the influence of criteria weights on the decision-making process. The key strength of the fuzzy TOPSIS approach is its practical applicability, generating good-quality solutions even under uncertainty.
Keywords: green transport, multi-criteria decision approach, urban transportation system, sustainability assessment, fuzzy theory, uncertainty
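As a rough illustration of the ranking step, the sketch below implements one common fuzzy TOPSIS variant with triangular fuzzy numbers (normalization by the criterion maximum, vertex distance to ideal solutions). The paper's own criteria, linguistic scales, and weights are not reproduced, and the ideal-solution definition used here is an assumption among several found in the literature.

```python
import numpy as np

def fuzzy_topsis(ratings, weights, benefit):
    """Rank alternatives by closeness coefficient.
    ratings: (n_alt, n_crit, 3) triangular fuzzy ratings (l, m, u)
    weights: (n_crit, 3) triangular fuzzy weights
    benefit: True for benefit criteria, False for cost criteria."""
    ratings = np.asarray(ratings, float)
    weights = np.asarray(weights, float)
    n_alt, n_crit, _ = ratings.shape
    norm = np.zeros_like(ratings)
    for j in range(n_crit):
        if benefit[j]:
            norm[:, j] = ratings[:, j] / ratings[:, j, 2].max()
        else:
            norm[:, j] = ratings[:, j, 0].min() / ratings[:, j, ::-1]
    weighted = norm * weights[None, :, :]
    fpis = weighted.max(axis=0)   # fuzzy positive ideal solution (one common variant)
    fnis = weighted.min(axis=0)   # fuzzy negative ideal solution
    dist = lambda a, b: np.sqrt(((a - b) ** 2).mean(axis=-1))  # vertex method
    d_plus = dist(weighted, fpis[None]).sum(axis=1)
    d_minus = dist(weighted, fnis[None]).sum(axis=1)
    return d_minus / (d_plus + d_minus)   # higher = more preferred

# Three hypothetical transport alternatives rated on two benefit criteria.
ratings = [[(5, 7, 9), (3, 5, 7)], [(7, 9, 10), (5, 7, 9)], [(1, 3, 5), (7, 9, 10)]]
weights = [(0.5, 0.7, 0.9), (0.3, 0.5, 0.7)]
print(fuzzy_topsis(ratings, weights, benefit=[True, True]))
```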
Procedia PDF Downloads 290
2433 Application of Computational Chemistry for Searching Anticancer Derivatives of 2-Phenazinamines as Bcr-Abl Tyrosine Kinase Inhibitors
Authors: Gajanan M. Sonwane
Abstract:
Computational studies on 2-phenazinamines with their protein targets have been carried out to design compounds with potential anticancer activity. This strategy of designing compounds possessing selectivity over a specific tyrosine kinase has been achieved through G-QSAR and molecular docking studies. The objective of this research has been to design newer 2-phenazinamine derivatives as Bcr-Abl tyrosine kinase inhibitors by G-QSAR and molecular docking studies, followed by wet-lab studies along with evaluation of their anticancer potential. Computational chemistry was done by using VLife MDS 4.3 and AutoDock 4.2, followed by wet-lab experiments for synthesizing 2-phenazinamine derivatives. The chemical structures of the ligands were drawn in 2D by employing ChemDraw 2D Ultra 8.0 and were converted into 3D. These were optimized by using the semi-empirical program MOPAC. The protein structure was retrieved from the RCSB Protein Data Bank as a PDB file. The binding interactions of protein and ligands were examined by using PyMOL. The molecular properties of the designed compounds were predicted in silico by using the Osiris property explorer. The parent compound 2-phenazinamine was synthesized by reduction of 2,4-dinitro-N-phenyl-benzenamine in the presence of tin chloride, followed by cyclization in the presence of nitrobenzene and magnesium sulfate. The derivatization at the amino function of 2-phenazinamine was performed by treating the parent compound with various aldehydes in the presence of dicyclohexylcarbodiimide (DCC) and urea to afford 2-(2-chlorophenyl)-3-(phenazine-2-yl) thiazolidine-4-one. Thirty-nine novel derivatives of 2-phenazinamine were synthesized and evaluated for antioxidant activity, antiproliferative activity on onion bulb, and anticancer activity on a cell line, showing significant competition with the blockbuster drug imatinib.
Keywords: computer-aided drug design, tyrosine kinases, anticancer, docking
Procedia PDF Downloads 140
2432 Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning
Authors: Nicholas V. Scott, Jack McCarthy
Abstract:
Military, law enforcement, and counter-terrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in various types of environments where light scattering intensity is high. Mitigation of this photonic noise using classical digital filtration and signal processing can be difficult. This is partially due to the lack of robust image processing methods for photonic noise removal, which strongly influences high-resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for ambient glare reduction, allowing only certain modes of the electromagnetic field to be captured and providing strong scene contrast. An experiment was carried out utilizing a polarization lens attached to a hyperspectral imaging camera for the purpose of exploring the degree to which an imaged polarized scene of a potassium, phosphorus, and nitrogen mixture allows for improved target detection and image segmentation. Preliminary imagery results, based on the application of machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery, suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization.
Keywords: explosive material, hyperspectral imagery, image segmentation, machine learning, polarization
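One of the methods named in the abstract, competitive leaky learning, can be sketched as prototype-based clustering of pixel spectra in which the winning prototype receives a large update and the losers a small "leaky" one; the resulting pixel labels form a segmentation map. The cluster count, learning rates, and synthetic data below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def leaky_competitive_segmentation(cube, k=3, lr_win=0.1, lr_leak=0.001, epochs=5, seed=0):
    """cube: (rows, cols, bands) hyperspectral image; returns a (rows, cols) label map."""
    rng = np.random.default_rng(seed)
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    protos = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(epochs):
        for x in pixels[rng.permutation(len(pixels))]:
            win = np.linalg.norm(protos - x, axis=1).argmin()
            rates = np.full(k, lr_leak)   # losing prototypes still "leak" toward the sample
            rates[win] = lr_win           # the winning prototype moves the most
            protos += rates[:, None] * (x - protos)
    labels = np.linalg.norm(pixels[:, None, :] - protos[None], axis=2).argmin(axis=1)
    return labels.reshape(cube.shape[:2])

demo = np.random.default_rng(1).random((8, 8, 10))   # tiny synthetic cube for a smoke test
print(leaky_competitive_segmentation(demo, k=2))
```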
Procedia PDF Downloads 142
2431 Spontaneous Generation of Wrinkled Patterns on pH-Sensitive Smart-Hydrogel Films
Authors: Carmen M. Gonzalez-Henriquez, Mauricio A. Sarabia-Vallejos, Juan Rodriguez-Hernandez
Abstract:
DMAEMA, as a monomer, has been widely studied and used in several application fields due to its pH-sensitive capacity (tertiary amine protonation), being relevant in the biomedical area as a potential carrier for drugs focused on the treatment of genetic or acquired diseases (efficient gene transfection), among others. Additionally, its inhibition of bacterial growth, and therefore its antimicrobial activity, can be exploited in dual-functional antifogging/antimicrobial polymer coatings. Owing to these interesting physicochemical characteristics and biocompatible properties, DMAEMA was used as a monomer to synthesize a smart pH-sensitive hydrogel, namely poly(HEMA-co-PEGDA575-co-DMAEMA). Thus, different mole ratios (ranging from 5:1:0 to 0:1:5, according to the mole ratio between HEMA, PEGDA, and DEAEMA, respectively) were used in this research. The surface patterns formed via a two-step polymerization (redox- and photo-polymerization) were first chemically studied via 1H-NMR and elemental analysis. Secondly, the samples were morphologically analyzed by using Field-Emission Scanning Electron Microscopy (FE-SEM) and Atomic Force Microscopy (AFM) techniques. Then, a particular ratio of HEMA, PEGDA, and DEAEMA (0:1:5) was also characterized at three different pH values (5.4, 7.4 and 8.3). The hydrodynamic radius and zeta potential of the micro-hydrogel particles (emulsion) were measured as a possible control for morphology, exploring the effect that hydrogel micelle dimensions produce on the wavelength, height, and roughness of the wrinkled patterns. Finally, contact angle and cross-hatch adhesion tests were carried out for the hydrogels supported on glass using TSM-silanized surfaces in order to measure their mechanical properties.
Keywords: wrinkled patterns, smart pH-sensitive hydrogels, hydrogel micelle diameter, adhesion tests
Procedia PDF Downloads 206
2430 Experimental and Numerical Studies of Droplet Formation
Authors: Khaled Al-Badani, James Ren, Lisa Li, David Allanson
Abstract:
Droplet formation is an important process in many engineering systems and manufacturing procedures, including welding, biotechnologies, 3D printing, and the biochemical and biomedical fields, among many others. The volume and characteristics of droplet formation generally depend on various material properties, microfluidics and fluid mechanics considerations. Hence, a detailed investigation of this process, with the aid of numerical computational tools, is essential for future design optimization and process control of many engineering systems. This will also improve the understanding of changes in the properties and structures of materials during the formation of the droplet, which is important for new material developments to achieve different functions, depending on the requirements of the application. For example, the shape of the formed droplet is critical for the function of some final products, such as the welding nugget during the Capacitor Discharge Welding process, or PLA 3D printing. Although most academic publications on droplet formation have focused on issues of material transfer rate, surface tension and residual stresses, the characteristics of droplet shape have generally been overlooked. The proposed work for this project will examine theoretical methodologies, experimental techniques, and numerical modelling, using ANSYS FLUENT, to critically analyse and highlight optimization methods regarding the formation of a pendant droplet. The project will also compare results from published data with experimental and numerical work concerning the effects of key material parameters on the droplet shape. These effects include changes in heating/cooling rates, solidification/melting progression and separation/break-up times. From these tests, a set of objectives is prepared with the intention of improving quality, stability and productivity in modelling metal welding and 3D printing.
Keywords: computer modelling, droplet formation, material distortion, materials forming, welding
Procedia PDF Downloads 286
2429 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro-Grids
Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone
Abstract:
Load Forecasting plays a key role in making today's and future Smart Energy Grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance or to execute Demand Response strategies more effectively, which enables several benefits such as higher sustainability, better quality of service, and affordable electricity tariffs. While it is comparatively easy and effective to apply Load Forecasting at a larger geographic scale, in Smart Micro Grids the lower available grid flexibility makes accurate prediction more critical in Demand Response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a Smart Micro Grid. Three short-term load forecasting techniques, i.e. linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on Load Forecasting is also evaluated. A new definition of Gain is introduced in this paper, which innovatively serves as an indicator of short-term prediction capability in terms of time span consistency. Two models, for 24- and 1-hour-ahead forecasting, are built to comprehensively compare these three techniques.
Keywords: short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, gain
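A minimal sketch of the comparison described above, fitting lagged load features with a linear regression and a small neural network and scoring both by absolute forecast error for 1- and 24-hour horizons, is given below. The synthetic load series, lag structure, and model settings are illustrative assumptions; they are not the GreenCom data or the paper's configurations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)   # 60 days of hourly samples
load = 1.0 + 0.4 * np.sin(2 * np.pi * hours / 24) + 0.05 * rng.standard_normal(hours.size)

def lagged_dataset(series, horizon, n_lags=24):
    """Predict series[t + horizon] from the previous n_lags hourly values."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon])
    return np.array(X), np.array(y)

for horizon in (1, 24):   # 1- and 24-hour-ahead models
    X, y = lagged_dataset(load, horizon)
    split = int(0.8 * len(X))
    for name, model in [("linear regression", LinearRegression()),
                        ("neural network", MLPRegressor(hidden_layer_sizes=(20,),
                                                        max_iter=2000, random_state=0))]:
        model.fit(X[:split], y[:split])
        mae = mean_absolute_error(y[split:], model.predict(X[split:]))
        print(f"{horizon:>2}h ahead, {name}: MAE = {mae:.4f}")
```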
Procedia PDF Downloads 469
2428 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body
Authors: J. K Adedeji, O. H Olowomofe, C. O Alo, S.T Ijatuyi
Abstract:
The issue of high blood sugar levels, the effects of which may end up as diabetes mellitus, is now becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people makes this disease a silent killer. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, such as a wrist watch, to give an alert of the danger ahead of time to those living with high blood glucose, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage including a bias, 15 neurons at the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in the Java language, with 1000 epoch runs to bring the errors to the barest minimum. The internal circuitry of the device comprises the compatible hardware requirements that match the nature of each of the input neurons. Light-emitting diodes (LEDs) of red, green, and yellow colors are used as the output of the neural network to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that the neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than the conventional methods of carrying out blood tests.
Keywords: Accu-Chek, diabetes, neural network, pattern recognition
Procedia PDF Downloads 147
2427 Thermal Performance of an Air Heating Storing System
Authors: Mohammed A. Elhaj, Jamal S. Yassin
Abstract:
Owing to the lack of synchronization between solar energy availability and the heat demands of a specific application, an energy storing sub-system is necessary to maintain the continuity of the thermal process. The present work deals with an active solar heating storing system in which an air solar collector is connected to a storing unit, where this energy is distributed and provided to the heated space in a controlled manner. The solar collector is a box-type absorber in which the air flows between a number of vanes attached between the collector absorber and the bottom plate. This design can improve the efficiency by increasing the heat transfer area exposed to the flowing air, as well as the heat conduction through the metal vanes from the top absorbing surface. The storing unit is a packed bed type in which the air coming from the air collector is circulated through the bed in order to add or remove the energy through the charging and discharging processes, respectively. The major advantage of the packed bed storage is its high degree of thermal stratification. A numerical solution of the packed bed energy storage is obtained by dividing the bed into a number of equal segments for the bed particles and solving the energy equation for each segment depending on the neighboring ones. The studied design and performance parameters in the developed simulation model include particle size, void fraction, etc. The final results showed that the collector efficiency fluctuated between 55% and 61% in the winter season (January) under the climatic conditions of Misurata in Libya. A maximum temperature of 52ºC is attained at the top of the bed, while the lowest is 25ºC at the end of the charging process of hot air into the bed. This distribution can satisfy the required load for most house heating in Libya.
Keywords: solar energy, thermal process, performance, collector, packed bed, numerical analysis, simulation
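The segment-by-segment numerical treatment of the packed bed described above can be sketched with a simple explicit energy balance: hot air enters the first segment, exchanges heat with the particles in each segment in turn, and the cooled air leaves the last one. All property values and coefficients below are illustrative assumptions rather than the study's design parameters.

```python
import numpy as np

def charge_packed_bed(n_seg=20, dt=1.0, steps=3600,
                      t_air_in=52.0, t_bed_init=25.0,
                      m_dot=0.05, cp_air=1005.0,
                      hA_seg=15.0, m_seg=10.0, cp_bed=900.0):
    """Explicit march of bed segment temperatures during charging (1D, no losses).
    hA_seg: heat-transfer coefficient times exchange area per segment [W/K]."""
    t_bed = np.full(n_seg, t_bed_init, dtype=float)
    for _ in range(steps):
        t_air = t_air_in                      # air enters the top segment each time step
        for i in range(n_seg):
            q = hA_seg * (t_air - t_bed[i])   # heat given up to segment i [W]
            t_bed[i] += q * dt / (m_seg * cp_bed)
            t_air -= q / (m_dot * cp_air)     # air cools before reaching the next segment
    return t_bed

profile = charge_packed_bed()
print(profile.round(1))   # stratified profile: hottest near the inlet, coolest at the outlet
```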
Procedia PDF Downloads 331
2426 Land Suitability Scaling and Modeling for Assessing Crop Suitability in Some New Reclaimed Areas, Egypt
Authors: W. A. M. Abdel Kawy, Kh. M. Darwish
Abstract:
Adequate land use selection is an essential step towards achieving sustainable development. The main objective of this study is to develop a new scale for a land suitability system that is compatible with local conditions. Furthermore, it aims to adapt the conventional land suitability systems to match the actual environmental status, in terms of soil types, climate and other conditions, in order to evaluate land suitability for newly reclaimed areas. The new system suggests calculating land suitability considering 20 factors affecting crop selection, grouped into five categories: crop-agronomic, land management, development, environmental conditions and socio-economic status. The factor scores are summed to calculate the total points. The highest rating for each factor indicates the highest preference for the evaluated crop, and the highest rated crops for each group are those with the highest points for the actual suitability. This study was conducted to assess the application efficiency of the new land suitability scale in recently reclaimed sites in Egypt. Moreover, 35 representative soil profiles were examined, and soil samples were subjected to physical and chemical analyses. Actual and potential suitabilities were calculated by using the new land suitability scale. Finally, the obtained results confirmed the applicability of the new land suitability system to recommend the most promising crop rotation that can be applied in the study areas. The outputs of this research revealed that the integration of different aspects for modeling and adapting the proposed model provides an effective and flexible technique, which contributes to making land suitability assessment for several crops more accurate and reliable.
Keywords: analytic hierarchy process, land suitability, multi-criteria analysis, new reclaimed areas, soil parameters
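The additive scoring logic described above, summing factor points per crop and ranking crops by total points, can be sketched as follows; the factor names and scores are invented placeholders, not the scale's actual twenty factors.

```python
# Hypothetical factor scores (0-10) per crop; real use would cover the 20 factors
# grouped into crop-agronomic, land management, development, environmental and
# socio-economic categories.
scores = {
    "wheat":  {"salinity_tolerance": 7, "water_requirement": 6, "market_access": 8},
    "maize":  {"salinity_tolerance": 4, "water_requirement": 5, "market_access": 9},
    "barley": {"salinity_tolerance": 9, "water_requirement": 8, "market_access": 6},
}

totals = {crop: sum(factors.values()) for crop, factors in scores.items()}
ranking = sorted(totals.items(), key=lambda item: item[1], reverse=True)
for crop, points in ranking:
    print(f"{crop:<8} {points} points")   # highest total = most suitable crop
```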
Procedia PDF Downloads 138
2425 Culturally Responsive School Leadership in Indigenous Schools in Malaysia
Authors: Nalini Murugaiyah
Abstract:
Indigenous students require a positive school environment where meaningful learning takes place in order to minimise myriad challenges. Therefore, Orang Asli students' school environment should be culturally responsive, equipped with student-centred activities, and provided with a constructively designed curriculum and pedagogy. This study sought to extend the knowledge of culturally responsive school leadership practices that are relevant and responsive to Orang Asli students through the lens of a theoretical framework, Culturally Responsive School Leadership. The aim of the proposed study is to examine and understand the real-world application of leadership practices that are relevant and responsive to Orang Asli students in Malaysia. This study will also include the often-voiceless voices of Orang Asli students, parents, and community leaders to gain a deeper understanding of the process and experience of engaging in culturally responsive school leadership. The study will explore the differences between school leaders, teachers, parents, and community leaders in relation to a culturally responsive school environment; non-Orang Asli school leaders' and teachers' support for the needs of Orang Asli children; children's perspectives on whether teachers' practices in the classroom align with their culture; and the demonstration of teachers' culturally responsive behaviour in the classroom. A basic qualitative study is the proposed research design, and the data are collected through semi-structured interviews and focus group interviews. This qualitative research is designed to gain in-depth knowledge about how the principal's leadership is culturally responsive towards the school environment, which will improve the quality of education received by the Orang Asli community in Malaysia, hence reducing dropout rates among Orang Asli students.
Keywords: indigenous leadership, equity, inclusion, policy
Procedia PDF Downloads 67
2424 Novel Bioinspired Design to Capture Smoky CO2 by Reactive Absorption with Aqueous Scrubber
Authors: J. E. O. Hernandez
Abstract:
In the next 20 years, energy production by burning fuels will increase, and so will the atmospheric concentration of CO2 and its well-known threats to life on Earth. The technologies available for capturing CO2 are still dubious, and this keeps fostering an interest in bio-inspired approaches. The leading one is the application of carbonic anhydrase (CA), a superfast biocatalyst able to convert up to one million molecules of CO2 into carbonates in water. However, natural CA underperforms when applied to real smoky CO2 in chimneys, and, so far, the efforts to create superior CAs in the lab rely on screening methods running under pristine conditions at the micro level, which are far from resembling those in chimneys. For the evolution of man-made enzymes, selection rather than screening would be ideal, but this is challenging because of the need for a suitable artificial environment that is also sustainable for our society. Herein we present the stepwise design and construction of a bioprocess (from bench scale to semi-pilot) for evolutionary selection experiments. In this bioprocess, reaction and absorption took place simultaneously at atmospheric pressure in a spray tower. The scrubbing solution was fed countercurrently by reusing municipal pressure, and it was mainly prepared with water, carbonic anhydrase and calcium chloride. This bioprocess allowed for the enzymatic carbonation of smoky CO2, the reuse of process water, and the recovery of solid carbonates without cooling of smoke, pretreatments, solvent amines or compression of CO2. The average yield of solid carbonates was 0.54 g min-1, or 12-fold the amount produced in serum bottles at lab bench scale. This bioprocess could be used as a tailor-made environment for driving the selection of superior CAs. The bioprocess and its matching CA could be sustainably used to reduce global warming caused by CO2 emissions from exhausts.
Keywords: biological carbon capture and sequestration, carbonic anhydrase, directed evolution, global warming
Procedia PDF Downloads 193
2423 Time-Domain Expressions for Bridge Self-Excited Aerodynamic Forces by Modified Particle Swarm Optimizer
Authors: Hao-Su Liu, Jun-Qing Lei
Abstract:
This study introduces the theory of the modified particle swarm optimizer and its application to time-domain expressions for bridge self-excited aerodynamic forces. Based on the indicial function expression and the rational function expression of the time-domain expressions for bridge self-excited aerodynamic forces, the characteristics of the two methods, i.e. the modified particle swarm optimizer and the conventional search method, are compared in the flutter derivative fitting process. Theoretical analysis and numerical results indicate that, whether adopting the indicial function expression or the rational function expression, the fitted flutter derivatives obtained by the modified particle swarm optimizer have better goodness of fit with those obtained from experiment. For the flutter derivatives with higher nonlinearity, the self-excited aerodynamic forces computed using the flutter derivatives obtained through the modified particle swarm optimizer fitting process are much closer to the experimental ones. The modified particle swarm optimizer was used to identify the parameters of the time-domain expressions for the flutter derivatives of an actual long-span highway-railway truss bridge with double decks at wind attack angles of 0°, -3° and +3°. It was found that this method could effectively solve the bounded problem of the attenuation coefficient in the conventional search method and had the ability to search in an unbounded area. Accordingly, this study provides a method for the engineering industry to frequently and efficiently obtain the time-domain expressions for bridge self-excited aerodynamic forces.
Keywords: time-domain expressions, bridge self-excited aerodynamic forces, modified particle swarm optimizer, long-span highway-railway truss bridge
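A compact sketch of the fitting idea, a particle swarm searching for the coefficients of an indicial-type time-domain expression that best reproduce target data, is given below. It is a generic PSO on a least-squares objective with a simple Wagner-type exponential model; the modifications described in the paper, the actual flutter-derivative data, and the bridge-specific expressions are not reproduced, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def indicial(t, p):
    """Wagner-type indicial form: phi(t) = 1 - a1*exp(-b1*t) - a2*exp(-b2*t)."""
    a1, b1, a2, b2 = p
    return 1.0 - a1 * np.exp(-b1 * t) - a2 * np.exp(-b2 * t)

t = np.linspace(0.0, 10.0, 200)
target = indicial(t, [0.165, 0.0455, 0.335, 0.3])   # synthetic "measured" data

def cost(p):
    return np.sum((indicial(t, p) - target) ** 2)

# Plain PSO with inertia and cognitive/social terms.
n_particles, n_dims, iters = 30, 4, 300
lo_b, hi_b = 0.0, 2.0
pos = rng.uniform(lo_b, hi_b, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = 0.72 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo_b, hi_b)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print(gbest.round(4))   # should approach the coefficients used to build the target
```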
Procedia PDF Downloads 314
2422 Creating a Safe Learning Environment Based on the Experiences and Perceptions of a Millennial Generation
Authors: E. Kempen, M. J. Labuschagne, M. P. Jama
Abstract:
There is evidence that any learning experience should happen in a safe learning environment, as students will then interact, experiment, and construct new knowledge. However, little is known about the specific elements required to create a safe learning environment for the millennial generation, especially in optometry education. This study aimed to identify the specific elements that contribute to a safe learning environment for the millennial generation of optometry students. Methods: An intrinsic qualitative case study was undertaken with undergraduate students from the Department of Optometry at the University of the Free State, South Africa. An open-ended questionnaire survey was completed after the application of nine different teaching-learning methods based on the experiential learning cycle. A total of 307 questionnaires were analyzed. Two focus group interviews were also conducted to supplement the questionnaire data and ensure triangulation. Results: Important elements based on the opinions, feelings, and perceptions of student respondents were analyzed. Students feel safe in an environment with which they are familiar, and when they are familiar with each other, the educators, and the surroundings. Small-group learning also creates a safe and familiar environment. Both these elements create an environment where they feel safe to ask questions. Students value an environment where they are able to learn without influencing their marks or disadvantaging the patients. They enjoy learning from their peers but also need personal contact with educators. Elements such as consistency and achievable objectives were also analyzed. Conclusion: The findings suggest that, to respond to the real needs of this generation of students, insight must be gained into students' perceptions to identify their needs and into the learning environment to optimize learning pedagogies. With the implementation of these personalized elements, optometry students will be able to take responsibility and accountability for their learning.
Keywords: experiences and perceptions, safe learning environment, millennial generation, recommendations for optometry education
Procedia PDF Downloads 137
2421 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement
Authors: Hu Zhenxing, Gao Jianxin
Abstract:
Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, distortion, etc. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled within a limited range. The distortion is non-linear, particularly in a complex image acquisition system. Thus, distortion correction should be carefully considered. Moreover, the distortion function is difficult to formulate using conventional models in a complex image acquisition system, in cases where microscopes and other complex lenses are involved. The errors of the distortion correction will propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is used as a prior process to convert the distorted coordinates to ideal positions, which enables the camera to conform to the pin-hole model. A procedure for this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
Keywords: distortion, stereo-based digital image correlation, B-spline, 3D, 2D
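A minimal sketch of the mapping idea, fitting smooth 2D spline surfaces that carry distorted sensor coordinates onto their ideal, pin-hole-consistent positions using a set of calibration points, is given below with SciPy's bivariate smoothing splines. The synthetic radial distortion used to create the calibration points is an illustrative assumption, not the paper's distortion model.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Synthetic calibration grid: ideal target points and their (radially) distorted images.
ideal_x, ideal_y = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
ideal_x, ideal_y = ideal_x.ravel(), ideal_y.ravel()
r2 = ideal_x**2 + ideal_y**2
dist_x = ideal_x * (1 + 0.1 * r2)   # assumed barrel-like distortion for the demo
dist_y = ideal_y * (1 + 0.1 * r2)

# Two spline surfaces: (distorted u, v) -> ideal x and (distorted u, v) -> ideal y.
map_x = SmoothBivariateSpline(dist_x, dist_y, ideal_x, kx=3, ky=3)
map_y = SmoothBivariateSpline(dist_x, dist_y, ideal_y, kx=3, ky=3)

# Undistort an arbitrary measured point before feeding it to the pin-hole/DIC pipeline.
u, v = 0.62, -0.38
x_corr, y_corr = map_x.ev(u, v), map_y.ev(u, v)
print(float(x_corr), float(y_corr))
```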
Procedia PDF Downloads 498
2420 Back Extraction and Isolation of Alkaloids from Ionic Liquid-Based Extracts
Authors: Rozalina Keremedchieva, Ivan Svinyarov, Milen G. Bogdanov
Abstract:
In continuation of a research project on the application of ionic liquids (ILs) as an alternative to the conventional organic solvents used in the recovery of value-added chemicals of industrial interest [1-3], we developed a procedure for back extraction and isolation in pure form of the biologically active alkaloid glaucine from IL-based aqueous solutions. One of the approaches applied was the formation of two-phase systems (IL-ATPS) by the addition of kosmotropic salts to the plant extract. The ability of the salts (Na2CO3, MgSO4, (NH4)2SO4, NaH2PO4) to induce the formation of two-phase systems and the influence of pH value on the partition coefficients of glaucine were comprehensively studied. As a result, it was found that the target alkaloid is preferably partitioned into the IL-rich phase regardless of the pH value of the medium, which shows the inapplicability of this approach for the isolation of the target compound from the ionic liquid. However, the results obtained can be used as a platform for the development of an analytical method for the quantitative determination of low concentrations of glaucine in biological samples. We further examined the ability of a series of organic solvents, such as diethyl ether, tert-butyl methyl ether, ethyl acetate, butyl acetate, toluene, chloroform and dichloromethane, to recover glaucine from raw IL-based aqueous extracts. Optimal conditions for the quantitative extraction of glaucine into chloroform were found, from which, after removal of the solvent and subsequent recrystallization from ethanol, the target compound was isolated in high purity as a hydrobromide salt – the form in which it enters as an active ingredient into various medicines.
Keywords: natural products, ionic liquids, solid-liquid extraction, liquid-liquid extraction
Procedia PDF Downloads 477
2419 The Influence of Microcapsulated Phase Change Materials on Thermal Performance of Geopolymer Concrete
Authors: Vinh Duy Cao, Shima Pilehvar, Anna M. Szczotok, Anna-Lena Kjøniksen
Abstract:
Total energy consumption is increasing dramatically all over the world, especially building energy consumption, where a significant proportion of energy is used for heating and cooling purposes. One of the solutions to reduce the energy consumption of buildings is to improve construction techniques and enhance material technology. Recently, microencapsulated phase change materials (MPCM), with their high energy storage capacity within the phase transition temperature range of the materials, have become a potential means to conserve and save energy. A new composite material with high energy storage capacity, obtained by mixing MPCM into concrete for passive building technology, is a promising candidate to reduce energy consumption. One of the most utilized building materials for mixing with MPCM is Portland cement concrete (PCC). However, the emission of carbon dioxide (CO2) from producing cement, which plays an important role in global warming, is the main drawback of PCC. Accordingly, an environmentally friendly building material, geopolymer, which is synthesized by the reaction between an industrial waste material (aluminosilicate) and a strong alkali activator, is a potential material for mixing with MPCM. In particular, knowledge of the effect of MPCM on the thermal and mechanical properties of geopolymer concrete (GPC) is very limited. In this study, materials with high thermal energy storage capacity were fabricated by mixing MPCM into geopolymer concrete. This article investigates the effect of MPCM concentration on the thermal and mechanical properties of GPC. The target is to balance the effect of MPCM on improving the thermal performance and maintaining the compressive strength of the geopolymer concrete at an acceptable level for building applications.
Keywords: microencapsulated phase change materials, geopolymer concrete, energy storage capacity, thermal performance
Procedia PDF Downloads 309
2418 Chitosan Stabilized Oil-in-Water Pickering Emulsion Optimized for Food-Grade Application
Authors: Ankit Patil, Tushar D. Deshpande, Yogesh M. Nimdeo
Abstract:
Pickering emulsions (PE) were developed in response to increased demand for organic, eco-friendly, and biocompatible products. These emulsions are usually stabilized by solid particles. In this research, we created chitosan-based sunflower oil-in-water (O/W) PE without the need for a surfactant. We employed chitosan, a biopolymer derived from chitin, as a stabilizer. This decision was influenced by chitosan's biocompatibility and biodegradability, as well as its anti-inflammatory and antibacterial capabilities. It also has other functional properties, such as antioxidant activity, a probiotic delivery mechanism, and the ability to encapsulate bioactive compounds. The purpose of this study was to determine the key parameters that can be changed to obtain stable PE, such as the concentration of chitosan (0.3-0.5 wt.%), the concentration of oil (0.8-1 vol%), the pH of the emulsion (3-7), manipulated by the addition of 1 M HCl or 4 M NaOH, and the amount of electrolyte (NaCl, 0-300 mM) added to increase or decrease the ionic strength. A careful combination of these properties resulted in the production of the most stable and optimal PE. Particle size analysis found that emulsions with pH 6, 0.4% chitosan, and 300 mM salt were exceptionally stable, with a droplet size of 886 nm, a PI of 0.1702, and a zeta potential of 32.75 ± 3.83 mV. It is fair to infer that as ionic strength rises, particle size, zeta potential, and PI value decrease. A lower PI value suggests that the emulsion nanoparticles are more homogeneous. The addition of sodium chloride increases the ionic strength of the emulsion, facilitating the formation of more compact and ordered particle layers. These findings shed light on the creation of stimulus-responsive chitosan-based PE capable of encapsulating bioactive materials, functioning as antioxidants, and serving as food-grade emulsifiers.
Keywords: Pickering emulsion, biocompatibility, eco-friendly, chitosan
Procedia PDF Downloads 238
2417 Investigating the Influence of Critical Thinking Skills on Learning Achievement among Higher Education Students in Foreign Language Programs
Authors: Mostafa Fanaei, Shahram R. Sistani, Athare Nazri-Panjaki
Abstract:
Introduction: Critical thinking skills are increasingly recognized as vital for academic success, particularly in higher education. This study examines the influence of critical thinking on learning achievement among undergraduate and master's students enrolled in foreign language programs. By investigating this correlation, educators can gain valuable insights into optimizing teaching methodologies and enhancing academic outcomes. Methods: This cross-sectional study involved 150 students from the Shahid Bahonar University of Kerman, recruited via random sampling. Participants completed the Critical Thinking Questionnaire (CThQ), assessing dimensions such as analysis, evaluation, creation, remembering, understanding, and application. Academic performance was measured using the students' GPA (0-20). Results: The participants' mean age was 21.46 ± 5.2 years, with 62.15% being female. The mean scores for critical thinking subscales were as follows: Analyzing (13.2 ± 3.5), Evaluating (12.8 ± 3.4), Creating (18.6 ± 4.8), Remembering (9.4 ± 2.1), Understanding (12.9 ± 3.3), and Applying (12.5 ± 3.2). The overall critical thinking score was 79.4 ± 18.1, and the average GPA was 15.7 ± 2.4. Significant positive correlations were found between GPA and several critical thinking subscales: Analyzing (r = 0.45, p = 0.013), Creating (r = 0.52, p < 0.001), Remembering (r = 0.29, p = 0.021), Understanding (r = 0.41, p = 0.002), and the overall CThQ score (r = 0.54, p = 0.043). Conclusion: The study demonstrates a significant positive relationship between critical thinking skills and learning achievement in foreign language programs. Enhancing critical thinking skills through educational interventions could potentially improve academic performance. Further research is recommended to explore the underlying mechanisms and long-term impacts of critical thinking on academic success.
Keywords: critical thinking, learning achievement, higher education, foreign language programs, student success
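The correlation analysis reported above (Pearson r between subscale scores and GPA) can be reproduced in a few lines; the arrays below are random placeholders standing in for the questionnaire data, not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
analyzing = rng.normal(13.2, 3.5, 150)                  # placeholder subscale scores
gpa = 0.3 * analyzing + rng.normal(11.0, 2.0, 150)      # placeholder GPA on a 0-20 scale

r, p_value = pearsonr(analyzing, gpa)
print(f"r = {r:.2f}, p = {p_value:.3f}")                # cf. reported Analyzing: r = 0.45, p = 0.013
```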
Procedia PDF Downloads 38
2416 Compact LWIR Borescope Sensor for Thermal Imaging of 2D Surface Temperature in Gas-Turbine Engines
Authors: Andy Zhang, Awnik Roy, Trevor B. Chen, Bibik Oleksandar, Subodh Adhikari, Paul S. Hsu
Abstract:
The durability of a combustor in gas-turbine engines is a strong function of its component temperatures and requires good control of these temperatures. Since the temperature of the combustion gases frequently exceeds the melting point of the combustion liner walls, an efficient air-cooling system with optimized flow rates of cooling air is significantly important to extend the lifetime of the liner walls. To determine the effectiveness of the air-cooling system, accurate two-dimensional (2D) surface temperature measurement of the combustor liner walls is crucial for advanced engine development. Traditional diagnostic techniques for temperature measurement in this application include thermocouples, thermal wall paints, pyrometry, and phosphors. They have shown some disadvantages, including being intrusive and affecting local flame/flow dynamics, potential flame quenching, physical damage to instrumentation due to the harsh environment inside the combustor, and strong optical interference from intense combustion emission in the UV to mid-IR wavelengths. To overcome these drawbacks, a compact and small borescope long-wave-infrared (LWIR) sensor is developed to achieve high-spatial-resolution, high-fidelity thermal imaging of the 2D surface temperature in gas-turbine engines, providing the desired engine component temperature distribution. The compact LWIR borescope sensor makes it feasible to improve the durability of a combustor in gas-turbine engines and, furthermore, to develop more advanced gas-turbine engines.
Keywords: borescope, engine, long-wave-infrared, sensor
Procedia PDF Downloads 134
2415 Risk Issues for Controlling Floods through Unsafe, Dual Purpose, Gated Dams
Authors: Gregory Michael McMahon
Abstract:
Risk management for the purpose of minimizing the damages from the operation of dams has met with opposition emerging from organisations and authorities, and their practitioners. It appears that the cause may be a misunderstanding of risk management arising from exchanges that mix deterministic thinking with risk-centric thinking and that do not separate uncertainty from reliability and accuracy from probability. This paper sets out the misunderstandings that arose from dam operations at Wivenhoe in 2011, by comparing outcomes based on the methodology and its rules with those produced by applying misunderstandings of the rules. The paper addresses the performance of one risk-centric Flood Manual for Wivenhoe Dam in achieving a risk management outcome. A mixture of engineering, administrative, and legal factors appears to have combined to reduce the outcomes from the risk approach. These are described. The findings are that a risk-centric Manual may need to assist administrations in the conduct of scenario training regimes, in responding to healthy audit reporting, and in the development of decision-support systems. The principal assistance needed from the Manual, however, is to bring engineering and the law to a good understanding of how risks are managed – do not assume that risk management is understood. The wider findings are that the critical profession for decision-making downstream of the meteorologist is not dam engineering, hydrology, or hydraulics; it is risk management. Risk management will provide the minimum flood damage outcome where actual rainfalls match or exceed forecast rainfalls, and therefore risk management will provide the best approach for the likely history of flooding in the life of a dam, and provisions made for worst cases may be state of the art in risk management. The principal conclusion is the need for training both in risk management as a discipline and in the application of risk management rules to particular dam operational scenarios.
Keywords: risk management, flood control, dam operations, deterministic thinking
Procedia PDF Downloads 87
2414 Barriers and Enablers to Public Innovation in the Central Region of Colombia: A Characterization from Measurement through the Item Response Methodology and Comparative Analysis
Authors: Yessenia Parrado, Ana Barbosa, Daniela Mahe, Sebastian Toro, Jhon Garcia
Abstract:
The purpose of this work is to present the identification and characterization of the barriers and enablers to public innovation in the Central Region of Colombia, based on a mixed methodology in research carried out in 2020 by the Laboratory of Innovation, Creativity and New Technologies of the National University of Colombia in alliance with the National Planning Department. Based on this research, the index of barriers to regional and departmental public innovation was built, which reflects the level of difficulty that territorial entities face in overcoming the barriers present around three dimensions: the organizational structure of the entity, the generation of public value, and governance processes. The index was built from the item response methodology and multiple correspondence analysis, from the application of an institutional information form for public entities and a perception form for public servants. This investigation had the participation of 36 entities and 1,038 public servants from the departments of Huila, Meta, Boyacá, Cundinamarca, Tolima, and the Capital District. In this exercise, it was identified that the departmental indices range between 13 and 44 and that the regional index was 30 out of 100. From the analysis of the information, it was possible to establish that the main barriers are the lack of specialized agencies for public innovation exercises, the lack of qualified personnel and work methodologies for public innovation, inadequate information management, the lack of feedback between the learning of governmental and non-governmental entities, the inability of the initiatives to generate binding participation mechanisms, and the lack of qualification of citizens to participate in these processes.
Keywords: item response, public innovation, quantitative analysis, comparative analysis
Procedia PDF Downloads 125
2413 Teachers’ Perceptions Related to the Guiding Skills within the Application Courses
Authors: Tanimola Kazeem Abiodun
Abstract:
In Nigeria, both formal education and distance learning opportunities are used in teacher training. Practical courses aim to improve the skills of teacher candidates in a school environment, where teacher candidates attend kindergarten classes under the supervision of a teacher. In this context, the guiding skills of teachers gain importance in terms of shaping candidates' perceptions about the teaching profession. In this study, the teachers' perceptions related to the guiding skills within the practical courses were determined, and the perceptions and applications related to guiding skills were compared. A Likert-scale questionnaire and an open-ended question were used to determine perceptions and applications. 120 questionnaires were taken into consideration, and analyses of the data were performed by using percentage distribution and the QSR NVivo 8 program. In this study, statements related to teachers' perceptions about the guiding skills were asked, and it was determined that almost all the teachers agreed on the importance of these statements. On the other hand, how these guidance skills are applied by teachers was also queried with an open-ended question. Finally, thoughts and applications related to guidance skills were compared to each other. Based on this comparison, it is seen that there are some differences between the thoughts and applications, especially related to time management, planning, feedback, curriculum, workload, rules and guidance. It can be said that some guidance skills cannot be controlled only by teachers; for example, candidates' motivation, attention, population and educational environment are also determinative factors for effective guidance. In summary, certain prior conditions are necessary for teachers to apply these idealized guidance skills in training more successful candidates for pre-school education. At this point, the organization of practical courses by the faculties gains importance, and in this context it is crucial for faculties to revise their applications based on more detailed research.
Keywords: teacher training, guiding skills, education, practical courses
Procedia PDF Downloads 447
2412 Numerical Simulation of Large-Scale Landslide-Generated Impulse Waves With a Soil‒Water Coupling Smooth Particle Hydrodynamics Model
Authors: Can Huang, Xiaoliang Wang, Qingquan Liu
Abstract:
Soil‒water coupling is an important process in landslide-generated impulse wave (LGIW) problems, accompanied by large deformation of soil, strong interface coupling and three-dimensional effects. A meshless particle method, smoothed particle hydrodynamics (SPH), has great advantages in dealing with complex interface and multiphase coupling problems. This study presents an improved soil‒water coupled model to simulate LGIW problems based on the open-source code DualSPHysics (v4.0). Aiming to solve the low-efficiency problem in modeling real large-scale LGIW problems, graphics processing unit (GPU) acceleration technology is implemented in this code. An experimental example, subaerial landslide-generated water waves, is simulated to demonstrate the accuracy of this model. Then, the Huangtian LGIW, a real large-scale LGIW problem, is modeled to reproduce the entire disaster chain, including landslide dynamics, fluid‒solid interaction, and surge wave generation. The convergence analysis shows that a particle distance of 5.0 m can provide a converged landslide deposit and surge wave for this example. The numerical simulation results are in good agreement with the limited field survey data. The application example of the Huangtian LGIW provides a typical reference for large-scale LGIW assessments, which can provide reliable information on landslide dynamics, interface coupling behavior, and surge wave characteristics.
Keywords: soil‒water coupling, landslide-generated impulse wave, large-scale, SPH
Procedia PDF Downloads 64
2411 Telemedicine in Physician Assistant Education: A Partnership with Community Agency
Authors: Martina I. Reinhold, Theresa Bacon-Baguley
Abstract:
A core challenge of physician assistant education is preparing professionals for lifelong learning. While this conventionally has encompassed scientific advances, students must also embrace new care delivery models and technologies. Telemedicine, the provision of care via two-way audio and video, is an example of a technological advance reforming health care. During a three-semester sequence of Hospital Community Experiences, physician assistant students were assigned experiences with Answer Health on Demand, a telemedicine collaborative. Preceding the experiences, the agency lectured on the application of telemedicine. Students were then introduced to the technology and partnered with a provider. Prior to observing the patient-provider interaction, patient consent was obtained. Afterwards, students completed a reflection paper on lessons learned and the potential impact of telemedicine on their careers. Thematic analysis was completed on the students' reflection papers (n=13). Preceding the lecture and experience, over 75% of students (10/13) were unaware of telemedicine. Several stated they were 'skeptical' about the effectiveness of 'impersonal' health care appointments. After the experience, all students remarked that telemedicine will play a large role in the future of healthcare and will provide benefits by improving access in rural areas, decreasing wait time, and saving cost. More importantly, 30% of students (4/13) commented that telemedicine is a technology they can see themselves using in their future practice. Initial results indicate that collaborative interaction between students and telemedicine providers enhanced student learning and exposed students to technological advances in the delivery of care. Further, results indicate that students perceived telemedicine more favorably as a viable delivery method after the experience.
Keywords: collaboration, physician assistant education, teaching innovative health care delivery method, telemedicine
Procedia PDF Downloads 197
2410 Effects of Poor Job Performance Practices on the Job Satisfaction of Workers
Authors: Prakash Singh, Thembinkosi Twalo
Abstract:
The sustainability of the Buffalo City Metropolitan Municipality (BCMM) in South Africa is being threatened by reported cases of poor administration, weak management of resources, inappropriate job performance, and inappropriate job behaviour of some of the workers. Since the structural-functionalists assume that formal education is a solution to societal challenges, the BCMM should not be experiencing this threat, since many of its workers have various levels of formal education. Consequently, this study, using the mixed-method research approach, set out to investigate the paradoxical co-existence of inappropriate job behaviour and performance with formal education at the BCMM. Considering the impact of human factors in the labour process, this study draws attention to the divergent objectives of skill and skill bearer, with the application of knowledge subject to the knowledge bearer's motives, will, attitudes, ethics and values. Consequently, inappropriate job behaviour and performance practices could be due to numerous factors, such as a lack of the necessary capabilities or a refusal to apply what has been learnt due to racial or other prejudices. The role of the human factor in the labour process is a serious omission in human capital theory, which regards schooling as the only factor contributing to the ability to do a job. For this reason, this study's theoretical framework is an amalgamation of four theories – human capital, social capital, cultural capital, and reputation capital – in an effort to obtain a broader view of the factors that shape job behaviour and performance. Since it has been established that human nature plays a crucial role in how workers undertake their responsibilities, it is important that this be taken into consideration in the BCMM's monitoring and evaluation of the workers' job performance practices. Hence, this exploratory study brings to the fore the effects of poor job performance practices on the job satisfaction of workers.
Keywords: human capital, poor job performance practices, service delivery, workers' job satisfaction
Procedia PDF Downloads 298
2409 The Design Method of Artificial Intelligence Learning Picture: A Case Study of DCAI's New Teaching
Authors: Weichen Chang
Abstract:
To create a guided teaching method for AI generative drawing design, this paper develops a set of teaching models for AI generative drawing (DCAI), which combines learning modes such as problem-solving, thematic inquiry, phenomenon-based, task-oriented, and DFC. Through information security AI picture book guided learning programs and content, participatory action research (PAR) and interview methods are applied to explore the dual knowledge of Context and ChatGPT (DCAI) in order to guide the development of students' AI learning skills. In the interviews, the students highlighted five main learning outcomes (self-study, critical thinking, knowledge generation, cognitive development, and presentation of work) as well as the challenges of implementing the model. Through the use of DCAI, students will enhance their consensus awareness of generative mapping analysis and group cooperation, and they will gain knowledge that can enhance AI capabilities in DCAI inquiry and in future life. The conclusions of this paper are: (1) the good use of DCAI can assist students in exploring the value of their knowledge through the power of stories and finding the meaning of knowledge communication; (2) the context can be used to analyze the transformative power of the integrity and coherence of the story so as to achieve the tension of 'starting and ending'; and (3) ChatGPT can be used to extract inspiration, arrange story compositions, and craft prompts that can communicate with people and convey emotions. Therefore, new knowledge construction methods will be among the effective methods for AI learning in the face of artificial intelligence, providing new thinking and new expressions for interdisciplinary design and design education practice.
Keywords: artificial intelligence, task-oriented, contextualization, design education
Procedia PDF Downloads 30