Search results for: measurement models
1438 Improving Digital Data Security Awareness among Teacher Candidates with Digital Storytelling Technique
Authors: Veysel Çelik, Aynur Aker, Ebru Güç
Abstract:
Developments in information and communication technologies have increased both the speed of producing information and the speed of accessing new information. Accordingly, the daily lives of individuals have started to change, and new concepts such as e-mail, e-government, e-school and e-signature have emerged. For this reason, prospective teachers, who will become future teachers or school administrators, are expected to have a high awareness of digital data security. The aim of this study is to reveal the effect of the digital storytelling technique on the data security awareness of pre-service teachers in computer and instructional technology education departments. For this purpose, participants were selected on the principle of volunteering from among third-year students studying at the Computer and Instructional Technologies Department of the Faculty of Education at Siirt University. The research used a pretest/posttest quasi-experimental design. Within this framework, a six-week lesson plan on digital data security awareness was prepared in accordance with the digital storytelling technique. Students in the experimental group formed groups of 3-6 people among themselves, and each group was asked to prepare short videos or animations on digital data security awareness. The completed videos were watched and evaluated together with the prospective teachers during an evaluation session lasting approximately two hours. Both quantitative and qualitative data were collected, using the digital data security awareness scale and a semi-structured interview form consisting of open-ended questions developed by the researchers. 
According to the data obtained, the digital storytelling technique was effective in creating data security awareness and producing lasting behavior changes in computer and instructional technology students.
Keywords: digital storytelling, self-regulation, digital data security, teacher candidates, self-efficacy
Procedia PDF Downloads 126
1437 NLRP3-Inflammasome Participates in the Inflammatory Response Induced by Paracoccidioides brasiliensis
Authors: Eduardo Kanagushiku Pereira, Frank Gregory Cavalcante da Silva, Barbara Soares Gonçalves, Ana Lúcia Bergamasco Galastri, Ronei Luciano Mamoni
Abstract:
The inflammatory response initiates after the recognition of pathogens by receptors expressed by innate immune cells. Among these receptors, NLRP3 has been associated with the recognition of pathogenic fungi in experimental models. NLRP3 operates by forming a multiprotein complex called the inflammasome, which activates caspase-1, responsible for the production of the inflammatory cytokines IL-1beta and IL-18. In this study, we aimed to investigate the involvement of NLRP3 in the inflammatory response elicited in macrophages against Paracoccidioides brasiliensis (Pb), the etiologic agent of paracoccidioidomycosis (PCM). Macrophages were differentiated from THP-1 cells by treatment with phorbol-myristate-acetate. Following differentiation, macrophages were stimulated with Pb yeast cells for 24 hours, after previous treatment with specific NLRP3 (3,4-methylenedioxy-beta-nitrostyrene) and/or caspase-1 (VX-765) inhibitors, or with specific inhibitors of pathways involved in NLRP3 activation: Reactive Oxygen Species (ROS) production (N-Acetyl-L-cysteine), K+ efflux (glibenclamide) or phagosome acidification (bafilomycin). Quantification of IL-1beta and IL-18 in supernatants was performed by ELISA. Our results showed that the production of IL-1beta and IL-18 by THP-1-derived macrophages stimulated with Pb yeast cells was dependent on NLRP3 and caspase-1 activation, since the presence of their specific inhibitors diminished the production of these cytokines. Furthermore, we found that the major pathways involved in NLRP3 activation after Pb recognition were dependent on ROS production and K+ efflux. In conclusion, our results showed that NLRP3 participates in the recognition of Pb yeast cells by macrophages, leading to the activation of the NLRP3 inflammasome and production of IL-1beta and IL-18. Together, these cytokines can induce an inflammatory response against P. 
brasiliensis, essential for the establishment of the initial inflammatory response and for the development of the subsequent acquired immune response.
Keywords: inflammation, IL-1beta, IL-18, NLRP3, paracoccidioidomycosis
Procedia PDF Downloads 273
1436 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles
Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo
Abstract:
Non-Cooperative Target Identification has become a key research domain in the defense industry, since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To face this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. The classification is improved by using Singular Value Decomposition, since it allows each aircraft to be modeled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, hence reducing unwanted information such as noise. Singular Value Decomposition permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are employed in the identification process. In the case of F2 the angle is weighted, since the top singular vectors set the importance of the contribution to the formation of a target signal; in contrast, F1 simply uses the unweighted angle. 
In order to have a wide database of radar signatures and to evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft at defined trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former imply an ideal identification scenario, since measured profiles suffer from noise, clutter and other unwanted information while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, the addition of noise to the test set has been considered. The identification results obtained with the unweighted and weighted metrics are analysed to demonstrate which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments with profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance improved when weighting was applied. Future experiments with larger sets are expected to be conducted, with the aim of finally using actual profiles as test sets in a real hostile situation.
Keywords: HRRP, NCTI, simulated/synthetic database, SVD
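The subspace-angle identification described in this abstract can be sketched as follows. This is a minimal illustration of the unweighted (F1-style) metric only; the array shapes, the rank choice and the synthetic data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def signal_subspace(profiles, k):
    """profiles: (n_bins, n_profiles) matrix whose columns are training
    range profiles of one target. Returns an orthonormal basis of the
    rank-k signal subspace (the top-k left singular vectors)."""
    U, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :k]

def subspace_angle(x, basis):
    """Angle between a test profile x and the span of `basis`.
    Since the basis is orthonormal, the norm of basis.T @ x equals the
    norm of the projection of the unit test vector onto the subspace."""
    x = x / np.linalg.norm(x)
    cos_t = np.clip(np.linalg.norm(basis.T @ x), 0.0, 1.0)
    return np.arccos(cos_t)

def identify(x, subspaces):
    """Unweighted metric: pick the target whose signal subspace makes
    the smallest angle with the test profile."""
    return int(np.argmin([subspace_angle(x, B) for B in subspaces]))
```

A weighted (F2-style) variant would scale each component of `basis.T @ x` by the corresponding singular value before taking the norm, emphasizing the dominant directions of the target signal.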
Procedia PDF Downloads 354
1435 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models
Authors: Ainouna Bouziane
Abstract:
The ability of electron tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed over the last decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and its avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (compressed sensing - total variation minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is an important issue that has not yet been properly addressed, because a perfectly known reference is needed. The problem is particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction and segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters such as the range of tilt angles, the image noise level and the object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.
Keywords: electron tomography, supported catalysts, nanometrology, error assessment
Procedia PDF Downloads 88
1434 Graphical Theoretical Construction of Discrete-Time Share Price Paths from Matroid
Authors: Min Wang, Sergey Utev
Abstract:
The lessons from the 2007-09 global financial crisis have driven scientific research that considers the design of new methodologies and financial models for the global market. The quantum mechanics approach has been introduced into the modeling of the unpredictable stock market. One famous quantum tool is the Feynman path integral method, which was used to model insurance risk by Tamturk and Utev and adapted to formalize path-dependent option pricing by Hao and Utev. This research is based on the path-dependent calculation method, which is motivated by the Feynman path integral method. Path calculation can be studied in two ways: one is labeling, and the other is computational. Labeling is part of the representation of objects, and generating functions can provide many different ways of representing share price paths. In this paper, recent work on the graphical theoretical construction of individual share price paths via matroids is presented. Firstly, the theory of matroids is reviewed, the relationship between lattice path matroids and Tutte polynomials is studied, and a way to connect points in lattice path matroids and Tutte polynomials is suggested. Secondly, it is found that a general binary tree can be validly constructed from a connected lattice path matroid, rather than from a general lattice path matroid. Lastly, it is shown that share price paths can be represented via general binary trees, and an algorithm is developed to construct share price paths from general binary trees. A relationship is also provided between lattice integer points and the Tutte polynomial of a transversal matroid. Using this connection together with the algorithm, a share price path can be constructed from a given connected lattice path matroid.
Keywords: combinatorial construction, graphical representation, matroid, path calculation, share price, Tutte polynomial
Procedia PDF Downloads 138
1433 Ethnic Identity as an Asset: Linking Ethnic Identity, Perceived Social Support, and Mental Health among Indigenous Adults in Taiwan
Authors: A.H.Y. Lai, C. Teyra
Abstract:
In Taiwan, there are 16 officially recognized indigenous groups, accounting for 2.3% of the total population. Like other indigenous populations worldwide, indigenous peoples in Taiwan have poorer mental health because of their history of oppression and colonisation. Amid the negative narratives, the ethnic identity of cultural minorities is their unique psychological and cultural asset. Moreover, positive socialisation is found to be related to strong ethnic identity. Based on Phinney's theory of ethnic identity development and social support theory, this study adopted a strength-based approach, conceptualising ethnic identity as the central organising principle linking perceived social support and mental health among indigenous adults in Taiwan. Aims. The overall aim is to examine the effect of ethnic identity and social support on mental health. The specific aims were to examine: (1) the association between ethnic identity and mental health; (2) the association between perceived social support and mental health; (3) the indirect effect of ethnic identity linking perceived social support and mental health. Methods. Participants were indigenous adults in Taiwan (n=200; mean age=29.51; female=31%, male=61%, others=8%). A cross-sectional quantitative design was implemented using data collected in 2020, with respondent-driven sampling. Standardised measurements were the Ethnic Identity Scale (6 items), the Social Support Questionnaire-SF (6 items), the Patient Health Questionnaire (9 items) and the Generalised Anxiety Disorder scale (7 items). Covariates were age, gender and economic satisfaction. A four-stage structural equation modelling (SEM) approach with robust maximum likelihood estimation was employed using Mplus 8.0. Step 1: a measurement model was built and tested using confirmatory factor analysis (CFA). Step 2: factor covariances were re-specified as direct effects in the SEM, and covariates were added. 
The direct effects of (1) ethnic identity and social support on depression and anxiety and (2) social support on ethnic identity were tested. The indirect effect of ethnic identity was examined with the bootstrapping technique. Results. The CFA model showed satisfactory fit statistics: χ²(df)=869.69(608), p<.05; comparative fit index (CFI)/Tucker-Lewis index (TLI)=0.95/0.94; root mean square error of approximation (RMSEA)=0.05; standardized root mean squared residual (SRMR)=0.05. Ethnic identity is represented by two latent factors: ethnic identity-commitment and ethnic identity-exploration. Depression, anxiety and social support are single-factor latent variables. For the SEM, the model fit statistics were: χ²(df)=779.26(527), p<.05; CFI/TLI=0.94/0.93; RMSEA=0.05; SRMR=0.05. Ethnic identity-commitment (b=-0.30) and social support (b=-0.33) had direct negative effects on depression, but ethnic identity-exploration did not. Ethnic identity-commitment (b=-0.43) and social support (b=-0.31) had direct negative effects on anxiety, while identity-exploration (b=0.24) demonstrated a positive effect. Social support had direct positive effects on ethnic identity-exploration (b=0.26) and ethnic identity-commitment (b=0.31). Mediation analysis demonstrated the indirect effect of ethnic identity-commitment linking social support and depression (b=0.22). Implications: the results underscore the role of social support in preventing depression via ethnic identity-commitment among indigenous adults in Taiwan. Adopting the strength-based approach, mental health practitioners can mobilise indigenous peoples' commitment to their group to promote their well-being.
Keywords: ethnic identity, indigenous population, mental health, perceived social support
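The bootstrapped indirect effect used in this abstract follows the standard product-of-coefficients logic. As a schematic, observed-variable sketch of that idea (not the authors' latent-variable Mplus model; all data below are simulated for illustration):

```python
import numpy as np

def indirect_effect(x, m, y):
    """Simple mediation x -> m -> y with observed variables:
    a = slope of m on x; b = slope of y on m, controlling for x;
    the indirect effect is the product a*b."""
    a = np.polyfit(x, m, 1)[0]
    Xmat = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(Xmat, y, rcond=None)[0][0]
    return a * b

def bootstrap_ci(x, m, y, n_boot=500, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect:
    resample cases with replacement and take the empirical quantiles."""
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        draws.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

The effect is judged significant when the bootstrap interval excludes zero, which is how mediation results like b=0.22 above are typically evaluated.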
Procedia PDF Downloads 103
1432 Model-Based Approach as Support for Product Industrialization: Application to an Optical Sensor
Authors: Frederic Schenker, Jonathan J. Hendriks, Gianluca Nicchiotti
Abstract:
From a product industrialization perspective, the end product should always be at the peak of technological advancement and developed in the shortest time possible. Thus, the constant growth of complexity and a shorter time-to-market call for important changes on both the technical and business levels. Undeniably, the common understanding of the system is clouded by its complexity, which leads to a communication gap between the engineers and the sales department. This communication link is therefore important to maintain, so as to increase the information exchange between departments and ensure a punctual and flawless delivery to the end customer. This evolution brings engineers to reason with more hindsight and to plan ahead. In this sense, they use new viewpoints to represent the data and to express the model deliverables in an understandable way, so that the different stakeholders may identify their needs and ideas. This article focuses on the usage of Model-Based Systems Engineering (MBSE) from a perspective of system industrialization, reconnecting engineering with the sales team. The modeling method used and presented in this paper concentrates on reflecting the needs of the customer as closely as possible: firstly, by providing a technical solution to the sales team to help them elaborate commercial offers without omitting technicalities; secondly, by simulating a vast number of possibilities across a wide range of components, making the model a dynamic tool for powerful analysis and optimization. Thus, the model is no longer merely a technical tool for the engineers, but a way to maintain and solidify communication between departments using different views of the model. 
The MBSE contribution to cost optimization during New Product Introduction (NPI) activities is made explicit through the illustration of a case study describing the support provided by system models to architectural choices during the industrialization of a novel optical sensor.
Keywords: analytical model, architecture comparison, MBSE, product industrialization, SysML, system thinking
Procedia PDF Downloads 161
1431 Evaluating Structural Crack Propagation Induced by Soundless Chemical Demolition Agent Using an Energy Release Rate Approach
Authors: Shyaka Eugene
Abstract:
The efficient and safe demolition of structures is a critical challenge in civil engineering and construction. This study focuses on the development of optimal demolition strategies by investigating crack propagation behavior in beams induced by soundless cracking agents. Such agents are commonly used in controlled demolition and have gained prominence due to their non-explosive and environmentally friendly nature. This research employs a comprehensive experimental and computational approach to analyze crack initiation, propagation and eventual failure in beams subjected to soundless cracking agents. Experimental testing involves the application of various cracking agents under controlled conditions to understand their effects on the structural integrity of beams. High-resolution imaging and strain measurements are used to capture the crack propagation process. In parallel, numerical simulations are conducted using advanced finite element analysis (FEA) techniques to model crack propagation in beams, considering parameters such as cracking agent composition, loading conditions and beam properties. The FEA models are validated against experimental results, ensuring their accuracy in predicting crack propagation patterns. The findings of this study provide valuable insights into optimizing demolition strategies, allowing engineers and demolition experts to make informed decisions regarding the selection of cracking agents, their application techniques, and structural reinforcement methods. Ultimately, this research contributes to enhancing the safety, efficiency, and sustainability of demolition practices in the construction industry, reducing environmental impact and ensuring the protection of adjacent structures and the surrounding environment.
Keywords: expansion pressure, energy release rate, soundless chemical demolition agent, crack propagation
Procedia PDF Downloads 63
1430 Utilization of Activated Carbon for the Extraction and Separation of Methylene Blue in the Presence of Acid Yellow 61 Using an Inclusion Polymer Membrane
Authors: Saâd Oukkass, Abderrahim Bouftou, Rachid Ouchn, L. Lebrun, Miloudi Hlaibi
Abstract:
We live in a world steeped in colors, whether in our clothing, food, cosmetics or even medications. However, most of the dyes we use pose significant problems, being both harmful to the environment and resistant to degradation. Among these dyes, methylene blue and acid yellow 61 stand out; they are commonly used to dye various materials such as cotton, wood and silk. Fortunately, various methods have been developed to treat and remove these polluting dyes, among which membrane processes play a prominent role. These methods are praised for their low energy consumption, ease of operation and ability to achieve effective separation of components. Adsorption on activated carbon is also a widely employed technique, complementing the basic processes; it proves particularly effective in capturing and removing organic compounds from water due to its substantial specific surface area, while retaining its properties unchanged. In this study, we examined two crucial aspects. Firstly, we explored the possibility of selectively extracting methylene blue from a mixture containing another dye, acid yellow 61, using a polymer inclusion membrane (PIM) made of PVA. After characterizing the morphology and porosity of the membrane, we applied kinetic and thermodynamic models to determine the values of the permeability (P), initial flux (J0), association constant (Kass) and apparent diffusion coefficient (D*). We then measured the activation parameters (activation energy (Ea), enthalpy (ΔH#ass) and entropy (ΔS#)). Finally, we studied the effect of activated carbon on the processes carried out through the membrane, demonstrating a clear improvement. These results make the membrane developed in this study a potentially pivotal player in the field of membrane separation.
Keywords: dyes, methylene blue, membrane, activated carbon
Procedia PDF Downloads 81
1429 Chiral Molecule Detection via Optical Rectification in Spin-Momentum Locking
Authors: Jessie Rapoza, Petr Moroshkin, Jimmy Xu
Abstract:
Chirality is omnipresent: in nature, in life and in physics. One intriguing example is the homochirality that has remained a great secret of life. Another is pairs of mirror-image molecules, enantiomers. They are identical in atomic composition and therefore indistinguishable in their scalar physical properties, yet they can be either therapeutic or toxic depending on their chirality. Recent studies suggest a potential link between abnormal levels of certain D-amino acids and some serious health impairments, including schizophrenia, amyotrophic lateral sclerosis and potentially cancer. Although indistinguishable in its scalar properties, the chirality of a molecule reveals itself in interaction with surroundings of a certain chirality or, more generally, a broken mirror symmetry. In this work, we report on a system for chiral molecule detection in which the mirror symmetry is doubly broken: first by asymmetrically structuring a nanopatterned plasmonic surface, then by the incidence of circularly polarized light (CPL). In this system, the incident circularly polarized light induces a surface plasmon polariton (SPP) wave propagating along the asymmetric plasmonic surface. This SPP field is itself chiral and evanescently bound to a near-field zone on the surface (~10 nm thick), but with an amplitude greatly intensified (by up to 10⁴) over that of the incident light. It hence probes just the molecules on the surface instead of those in the volume. In coupling to molecules along its path on the surface, the chiral SPP wave favors one chirality over the other, allowing for chirality detection via the change in an optical rectification current measured at the edges of the sample. The asymmetrically structured surface converts the high-frequency electron plasmonic oscillations in the SPP wave into a net DC drift current that can be measured at the edge of the sample via the mechanism of optical rectification. 
The measured results validate these design concepts and principles. The observed optical rectification current exhibits a clear differentiation between a pair of enantiomers. Experiments were performed by focusing 1064 nm CW laser light on the sample, a gold grating microchip submerged in an approximately 1.82 M solution of either L-arabinose or D-arabinose in water. The current output was then recorded under both right and left circularly polarized light. Measurements were taken at various angles of incidence to optimize the coupling between the spin momenta of the incident light and of the SPP, that is, spin-momentum locking. In order to suppress the background, the values of the photocurrent for right CPL are subtracted from those for left CPL. Comparison between the two arabinose enantiomers reveals a preferential signal response of one enantiomer to left CPL and of the other to right CPL. In sum, this work reports the first experimental evidence of the feasibility of chiral molecule detection via optical rectification in a metal meta-grating. This nanoscale-interfaced electrical detection technology is advantageous over other detection methods due to its size, cost, ease of use and ability to integrate with read-out electronic circuits for data processing and interpretation.
Keywords: chirality, detection, molecule, spin
Procedia PDF Downloads 92
1428 Effect of Cooking Time, Seed-To-Water Ratio and Soaking Time on the Proximate Composition and Functional Properties of Tetracarpidium conophorum (Nigerian Walnut) Seeds
Authors: J. O. Idoko, C. N. Michael, T. O. Fasuan
Abstract:
This study investigated the effects of cooking time, seed-to-water ratio and soaking time on the proximate and functional properties of African walnut seed using a Box-Behnken design with response surface methodology (BBD-RSM), with a view to increasing its utilization in the food industry. African walnut seeds were sorted, washed, soaked, cooked, dehulled, sliced, dried and milled. Proximate analysis and functional properties of the samples were evaluated using standard procedures, and the data obtained were analyzed using descriptive and inferential statistics. Quadratic models were obtained to predict the proximate and functional qualities as functions of cooking time, seed-to-water ratio and soaking time. The results showed that crude protein ranged between 11.80% and 23.50%, moisture content between 1.00% and 4.66%, ash content between 3.35% and 5.25%, crude fibre from 0.10% to 7.25% and carbohydrate from 1.22% to 29.35%. For the functional properties, soluble protein ranged from 16.26% to 42.96%, viscosity from 23.43 mPa·s to 57 mPa·s, emulsifying capacity from 17.14% to 39.43% and water absorption capacity from 232% to 297%. An increase in the volume of water used during cooking resulted in the loss of water-soluble protein through leaching; the length of soaking time and the moisture content of the dried product are inversely related; ash content is inversely related to the cooking time and amount of water used; the extraction of fat is enhanced by an increase in soaking time; and increases in cooking and soaking times result in a decrease in fibre content. The results obtained indicate that African walnut could be used in several food formulations as a protein supplement and binder.
Keywords: African walnut, functional properties, proximate analysis, response surface methodology
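The quadratic models referred to in this abstract take the standard second-order response-surface form (intercept, linear, two-way interaction and squared terms in the three factors). A minimal sketch of fitting such a model by least squares, on made-up data rather than the study's measurements, is:

```python
import numpy as np

def quadratic_design(X):
    """Expand three coded factors [x1, x2, x3] into the full
    second-order RSM design matrix: intercept, linear terms,
    two-way interactions and squared terms (10 columns)."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

def fit_rsm(X, y):
    """Least-squares coefficients of the second-order response surface."""
    D = quadratic_design(X)
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    return beta

def predict(beta, X):
    """Predicted response at new factor settings."""
    return quadratic_design(X) @ beta
```

In a Box-Behnken design the rows of `X` would be the coded design points (-1, 0, +1 combinations) rather than arbitrary settings; the fitting step is the same.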
Procedia PDF Downloads 396
1427 Energy Consumption and Economic Growth Nexus: A Sustainability Understanding from the BRICS Economies
Authors: Smart E. Amanfo
Abstract:
Although the exact functional relationship between energy consumption and economic growth and development remains a complex social science question, there is sustained and growing agreement among energy economists on the direct or indirect role of energy use in the development process, and on its role in sustaining many of the socio-economic and environmental achievements of any economy. According to the OECD, the world economy will double by 2050, with two of the BRICS (Brazil, Russia, India, China and South Africa) countries, China and India, in the lead. There is a global apprehension that if the countries constituting the epicenter of present and future economic growth follow the same trajectory as during and after the Industrial Revolution, involving higher energy throughputs, especially of fossil fuels, the already known and model-predicted threats of climate change and global warming could be exacerbated, especially in developing economies. The international community's challenge is how to address the trilemma of economic growth and social development, poverty eradication, and stability of the ecological systems. This paper aims to provide estimates of economic growth, energy consumption and carbon dioxide emissions using BRICS members' panel data from 1980 to 2017. The preliminary results, based on a fixed-effects econometric model, show a positive significant relationship between energy consumption and economic growth. The paper further identifies a strong relationship between economic growth and CO2 emissions, which suggests that the global agenda of low-carbon-led growth and development is not straightforwardly achievable. The study therefore highlights the need for BRICS member states to intensify low-emissions-based production and consumption policies and increase renewables in order to avoid further deterioration of climate change impacts.
Keywords: BRICS, sustainability, sustainable development, energy consumption, economic growth
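The fixed-effects panel estimate mentioned in this abstract is conventionally computed with the within (group-demeaning) transformation, which removes each country's time-invariant intercept before ordinary least squares. A minimal sketch on simulated data (not the BRICS dataset) is:

```python
import numpy as np

def within_transform(x, groups):
    """Demean each column of x within its panel group (e.g. country),
    removing time-invariant group effects."""
    x = np.asarray(x, dtype=float).copy()
    for g in np.unique(groups):
        idx = groups == g
        x[idx] -= x[idx].mean(axis=0)
    return x

def fixed_effects_ols(y, X, groups):
    """Fixed-effects (within) estimator: demean y and X by group,
    then run OLS on the demeaned data."""
    yd = within_transform(np.asarray(y).reshape(-1, 1), groups).ravel()
    Xd = within_transform(X, groups)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta
```

Because the country intercepts are differenced out exactly, the slope on (say) energy consumption is identified purely from within-country variation over time.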
Procedia PDF Downloads 96
1426 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis
Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin
Abstract:
Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, and they raise concerns over the trust and reliability of a model whose results cannot be explained. In order to gain local insight into cancerous regions, separate tasks such as image segmentation need to be implemented to aid doctors in treating patients, which doubles the training time and cost and renders the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional convolutional neural network (CNN) module for an auxiliary classification output. Class activation maps are a method of providing insight into the feature maps of a convolutional neural network that lead to its classification; in the case of lung diseases, the region of interest is enhanced by U-Net-assisted class activation map (CAM) visualization. Our proposed model therefore combines an image segmentation model and a classifier, cropping a chest X-ray's class activation map to the lung region only, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a Dice coefficient of 0.97 on testing data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.
Keywords: multi-output network model, U-Net, class activation map, image classification, medical imaging analysis
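The CAM computation behind this visualization is a classifier-weighted sum of the final convolutional feature maps, and the U-Net assistance amounts to masking the CAM with the predicted lung segmentation. A minimal numpy sketch of both steps (shapes and normalization are illustrative assumptions, not the paper's exact pipeline) is:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """feature_maps: (C, H, W) activations from the last conv layer.
    fc_weights: (num_classes, C) weights of a global-average-pooling
    classifier. Returns the (H, W) CAM for the requested class."""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0)          # ReLU: keep positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalise to [0, 1] for display
    return cam

def crop_to_lung(cam, lung_mask):
    """Zero the CAM outside a binary U-Net segmentation mask, mimicking
    the U-Net-assisted CAM cropping described above."""
    return cam * lung_mask
```

In the full model the mask would come from the U-Net head and the weights from the CNN classification head of the same network; here both are supplied as plain arrays.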
Procedia PDF Downloads 203
1425 The Role of Two Macrophyte Species in Mineral Nutrient Cycling in Human-Impacted Water Reservoirs
Authors: Ludmila Polechonska, Agnieszka Klink
Abstract:
The biogeochemical study of macrophytes sheds light on element bioavailability, transfer through food webs and possible effects on the biota, and provides a basis for their practical application in aquatic monitoring and remediation. Measuring the accumulation of elements in plants can provide time-integrated information about the presence of chemicals in aquatic ecosystems. The aim of the study was to determine and compare the contents of micro- and macroelements in two cosmopolitan macrophytes, the submerged Ceratophyllum demersum (hornwort) and the free-floating Hydrocharis morsus-ranae (European frog-bit), in order to assess their bioaccumulation potential, the element stock accumulated in each plant and their role in nutrient cycling in small water reservoirs. Sampling sites were designated in 25 oxbow lakes in urban areas in Lower Silesia (SW Poland). At each sampling site, fresh whole plants of C. demersum and H. morsus-ranae were collected from 1 x 1 meter squares where the species coexisted. European frog-bit was separated into leaves, stems and roots. For biomass measurement, all plants growing in a 1 square meter plot were collected, dried and weighed. At the same time, water samples were collected from each reservoir and their pH and EC were determined. Water samples were filtered and acidified, and plant samples were digested in concentrated nitric acid. Next, the contents of Ca, Cu, Fe, K, Mg, Mn, Ni and Zn were determined using the atomic absorption method (AAS). Statistical analysis showed that C. demersum and the organs of H. morsus-ranae differed significantly in metal content (Kruskal-Wallis ANOVA, p<0.05). Contents of Cu, Mn, Ni and Zn were higher in hornwort, while European frog-bit contained more Ca, Fe, K and Mg. Bioaccumulation Factors (BCF = content in plant / concentration in water) showed a similar pattern of metal bioaccumulation: microelements were more intensively accumulated by hornwort and macroelements by frog-bit.
Based on BCF values, both species may be positively evaluated as good accumulators of Cu, Fe, Mn, Ni and Zn. However, the distribution of metals in H. morsus-ranae was uneven: the majority of the studied elements were retained in the roots, which may indicate the existence of physiological barriers developed for dealing with toxicity. Some of the Ca and K was actively transported to the stems, but only Mg to the leaves. Although the biomass of C. demersum was two times greater than the biomass of H. morsus-ranae, the element off-take was greater only for Cu, Mn, Ni and Zn. Nevertheless, it can be stated that despite a relatively small biomass compared to other macrophytes, both species may have an influence on the removal of trace elements from aquatic ecosystems and, as they serve as food for some animals, also on the incorporation of toxic elements into food chains. There was a significant positive correlation between the contents of Mn and Fe in water and in the roots of H. morsus-ranae (R=0.51 and R=0.60, respectively), as well as between the Cu concentration in water and in C. demersum (R=0.41) (Spearman rank correlation, p<0.05). High bioaccumulation rates and correlations between plant and water element concentrations point to their possible use as passive biomonitors of aquatic pollution.
Keywords: aquatic plants, bioaccumulation, biomonitoring, macroelements, phytoremediation, trace metals
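The bioaccumulation factor defined above is a simple ratio; a minimal sketch with illustrative (not measured) values:

```python
def bioaccumulation_factor(plant_content, water_concentration):
    """BCF = element content in plant tissue / concentration in water.
    Units must be consistent (e.g. mg/kg dry weight vs. mg/L);
    the example values below are illustrative, not the study's data."""
    return plant_content / water_concentration

# Hypothetical Mn values for hornwort: 250 mg/kg in tissue, 0.05 mg/L in water.
print(bioaccumulation_factor(250.0, 0.05))  # -> 5000.0
```

A BCF well above 1 (here 5000) is what qualifies a species as a good accumulator in the sense used above.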
Procedia PDF Downloads 189
1424 A Data-Driven Agent Based Model for the Italian Economy
Authors: Michele Catalano, Jacopo Di Domenico, Luca Riccetti, Andrea Teglio
Abstract:
We develop a data-driven agent-based model (ABM) for the Italian economy and calibrate the model's initial conditions and parameters. As a preliminary step, we replicate the Monte Carlo simulation for the Austrian economy. Then, we evaluate the dynamic properties of the model: the long-run equilibrium and the allocative efficiency in terms of the disequilibrium patterns arising in the search and matching process in the final goods, capital, intermediate goods, and credit markets. In this perspective, we use a randomized initial-condition approach. We perform a robustness analysis, perturbing the system for different parameter setups. We explore the empirical properties of the model using a rolling-window forecast exercise from 2010 to 2022 to observe the model's forecasting ability in the wake of the COVID-19 pandemic. We analyze the properties of the model with different numbers of agents, that is, with different scales of the model relative to the real economy. The model generally displays transient dynamics that properly fit macroeconomic data in terms of forecasting ability. We stress the model with a large set of shocks, namely interest-rate policy, fiscal policy, and exogenous factors such as external foreign demand for exports. In this way, we can explore the most exposed sectors of the economy. Finally, we modify the technology mix of the various sectors and, consequently, the underlying input-output sectoral interdependence to stress the economy and observe the long-run projections. In this way, the model can generate endogenous crises through the implied structural change, technological unemployment, and a potential lack of aggregate demand, creating the conditions for cyclical endogenous crises reproduced in this artificial economy.
Keywords: agent-based models, behavioral macro, macroeconomic forecasting, micro data
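Search-and-matching disequilibrium of the kind described can be illustrated with a toy agent loop; everything below (agent counts, stock levels) is illustrative and not calibrated to the Italian data:

```python
import random

random.seed(42)

def simulate(n_households=100, n_firms=10, steps=50):
    """Toy search-and-matching market: each step, every household visits one
    randomly chosen firm; purchases that fail because the firm's shelf is
    empty accumulate as a disequilibrium (rationing) measure."""
    stock = [20] * n_firms                  # goods each firm offers per step
    unmet = 0
    for _ in range(steps):
        supply = stock[:]                   # shelves refill every period
        for _ in range(n_households):
            f = random.randrange(n_firms)   # random search: pick a firm
            if supply[f] > 0:
                supply[f] -= 1              # purchase succeeds
            else:
                unmet += 1                  # rationed: search friction
    return unmet / (n_households * steps)   # share of failed purchases

print(simulate())
```

Even with aggregate supply (200 per step) exceeding aggregate demand (100), random search leaves some demand unmet, which is the disequilibrium pattern the model's matching process is designed to capture.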
Procedia PDF Downloads 69
1423 Destination Decision Model for Cruising Taxis Based on Embedding Model
Authors: Kazuki Kamada, Haruka Yamashita
Abstract:
In Japan, taxis are a popular means of transportation, and the taxi industry is a large business. In recent years, however, the industry has faced the difficult problem of a declining number of taxi drivers. In the taxi business, three main passenger-catching methods are applied. The first is "cruising", in which the driver catches passengers while driving on a road. The second is "waiting", in which the driver waits for passengers near places with high demand for taxis, such as the entrances of hospitals and train stations. The third is "dispatching", in which the driver is allocated based on contact from the taxi company. Above all, cruising taxi drivers need experience and intuition to find passengers, and it is difficult for them to decide the destination for cruising. A strong recommendation system for cruising taxis would support new drivers in finding passengers and could be a solution to the decreasing number of drivers in the taxi industry. In this research, we propose a method for recommending a destination to cruising taxi drivers. As a machine learning technique, embedding models, which embed high-dimensional data into a low-dimensional space, are widely used in data analysis to clearly represent the semantic relationships between data points. Taxi drivers have favorite courses based on their experience, and these courses differ for each driver. We assume that the course of a cruising taxi has a meaning, such as a course for finding businessman passengers (going around the business area of the city or to main stations) or a course for finding traveler passengers (going around sightseeing places or big hotels), and we extract the meaning of their destinations. We analyze the cruising history data of taxis based on an embedding model and propose a recommendation system for finding passengers.
Finally, we demonstrate the recommendation of destinations for cruising taxi drivers based on a real-world data analysis using the proposed method.
Keywords: taxi industry, decision making, recommendation system, embedding model
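One simple way to embed cruising courses is to build a zone co-occurrence matrix and factor it with an SVD; this is only a sketch of the general idea (the zone IDs and courses below are hypothetical, and the paper's embedding model may differ):

```python
import numpy as np

# Hypothetical cruising "courses" as sequences of city-zone IDs (0..4).
courses = [[0, 1, 2], [0, 1, 3], [2, 3, 4], [1, 2, 4]]

n_zones = 5
co = np.zeros((n_zones, n_zones))
for course in courses:
    for a in course:
        for b in course:
            if a != b:
                co[a, b] += 1      # zones visited on the same course

# Low-dimensional embedding: leading left singular vectors scaled by
# their singular values give one 2-D vector per zone.
U, s, _ = np.linalg.svd(co)
embedding = U[:, :2] * s[:2]
print(embedding.shape)              # (5, 2)
```

Zones that tend to appear on the same courses end up close together in the embedded space, which is the property a destination recommender can exploit.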
Procedia PDF Downloads 138
1422 Investigations on the Influence of Web Openings on the Load Bearing Behavior of Steel Beams
Authors: Felix Eyben, Simon Schaffrath, Markus Feldmann
Abstract:
A building should maximize its potential for use through its design; flexible use is therefore always important when designing a steel structure. To create flexibility, steel beams with web openings are increasingly used, as they offer the advantage that cables, pipes and other technical equipment can easily be routed through without detours, allowing for more space-saving and aesthetically pleasing construction. This can also significantly reduce the height of ceiling systems. Until now, beams with web openings have not been explicitly considered in the European standard. However, this is to change with the new EN 1993-1-13, in which design rules for different opening forms are defined. To further develop the design concepts, beams with web openings under bending are therefore being investigated in terms of damage mechanics as part of a German national research project that aims to optimize the verifications for steel structures based on a wider database and a validated damage prediction. For this purpose, fundamental factors influencing the load-bearing behavior of girders with web openings under bending load were first investigated numerically, without taking material damage into account. Various parameter studies were carried out; the factors under study were the opening shape, size and position, as well as structural aspects such as the span length, the arrangement of stiffeners and the loading situation. The load-bearing behavior is evaluated using the resulting load-deformation curves. These results are compared with the design rules and critically analyzed. Experimental tests are also planned based on these results. Moreover, the implementation of damage mechanics in the form of the modified Bai-Wierzbicki model was examined.
Once the experimental tests have been carried out, the numerical models will be validated and further influencing factors will be investigated on the basis of parametric studies.
Keywords: damage mechanics, finite element, steel structures, web openings
Procedia PDF Downloads 174
1421 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru
Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar
Abstract:
Nowadays, heritage building information modeling (HBIM) is considered an efficient tool to represent and manage information on cultural heritage (CH). The basis of this tool is a 3D model generally obtained through a cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection among these methods depends on the desired level of development (LOD), level of information (LOI) and grade of generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using ReCap Pro, Revit and the Dynamo interface, following a three-step methodology. The first step consists of the manual modeling of simple structural elements (e.g., regular walls, columns, floors, wall openings) and architectural elements (e.g., cornices, moldings and other minor details) using the point cloud as reference. Then, Dynamo is used for the generative modeling of complex structural elements such as vaults, infills and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added to the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH through a relatively simple process that ensures adequate LOD, LOI and GOG levels. In addition, the easy implementation of the method, as well as the fact that only one BIM software package with its respective plugin is used for the scan-to-BIM modeling process, means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.
Keywords: cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit
Procedia PDF Downloads 144
1420 Classifying Affective States in Virtual Reality Environments Using Physiological Signals
Authors: Apostolos Kalatzis, Ashish Teotia, Vishnunarayan Girishan Prabhu, Laura Stanley
Abstract:
Emotions are functional behaviors influenced by thoughts, stimuli, and other factors that induce neurophysiological changes in the human body. Understanding and classifying emotions is challenging, as individuals have varying perceptions of their environments. It is therefore crucial that there are publicly available databases and virtual reality (VR) environments that have been scientifically validated for assessing emotional classification. This study utilized two commercially available VR applications (Guided Meditation VR™ and Richie's Plank Experience™) to induce an acute stress state and a calm state among participants. Subjective and objective measures were collected to create a validated multimodal dataset and classification scheme for affective state classification. The subjective measures included the Self-Assessment Manikin, emotional cards and a 9-point Visual Analogue Scale for perceived stress, collected using a Virtual Reality Assessment Tool developed by our team. The objective measures included electrocardiogram and respiration data collected from 25 participants (15 M, 10 F, mean age = 22.28 ± 4.92). The features extracted from these data included heart rate variability components and respiration rate, both of which were used to train two machine learning models. Subjective responses validated the efficacy of the VR applications in eliciting the two desired affective states; for classifying the affective states, a logistic regression (LR) model and a support vector machine (SVM) with a linear kernel were developed. The LR outperformed the SVM and achieved 93.8%, 96.2% and 93.8% leave-one-subject-out cross-validation accuracy, precision and recall, respectively. The VR assessment tool and the data collected in this study are publicly available to other researchers.
Keywords: affective computing, biosignals, machine learning, stress database
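The leave-one-subject-out evaluation described above can be sketched with scikit-learn; the feature generator below is a synthetic stand-in (stress lowering HRV and raising respiration rate), not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for HRV + respiration-rate features from 25 subjects.
n_subj = 25
X, y, groups = [], [], []
for s in range(n_subj):
    for label in (0, 1):                      # 0 = calm, 1 = stress
        for _ in range(4):                    # 4 windows per state per subject
            hrv = rng.normal(60 - 15 * label, 5)    # stress lowers HRV
            resp = rng.normal(14 + 4 * label, 1.5)  # stress raises breathing
            X.append([hrv, resp]); y.append(label); groups.append(s)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# Leave-one-subject-out CV: each fold holds out all windows of one subject.
scores = cross_val_score(LogisticRegression(), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(round(scores.mean(), 3))
```

Grouping the folds by subject (rather than by window) is what keeps a participant's physiology from leaking between train and test sets.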
Procedia PDF Downloads 142
1419 Oxidation and Reduction Kinetics of Ni-Based Oxygen Carrier for Chemical Looping Combustion
Authors: J. H. Park, R. H. Hwang, K. B. Yi
Abstract:
Carbon capture and storage (CCS) is an important technology for reducing CO₂ emissions from large stationary sources such as power plants. Among the carbon technologies for power plants, chemical looping combustion (CLC) has attracted much attention due to its higher thermal efficiency and lower cost of electricity. A CLC process consists of a fuel reactor and an air reactor, which are interconnected fluidized-bed reactors. In the fuel reactor, an oxygen carrier (OC) is reduced by a fuel gas such as CH₄, H₂ or CO. The OC is then sent to the air reactor and oxidized by air or O₂ gas. The oxidation and reduction reactions of the OC occur between the two reactors repeatedly. In a CLC system, a high concentration of CO₂ can easily be obtained by steam condensation from the fuel reactor alone. Understanding the oxidation and reduction characteristics of the oxygen carrier in the CLC system is very important for determining the solids circulation rate between the air and fuel reactors and the amount of solid bed material. In this study, we conducted experiments and interpreted the oxidation and reduction reaction characteristics by observing the weight change of a Ni-based oxygen carrier in a TGA while varying the gas concentration and temperature. Characterization of the oxygen carrier was carried out with BET and SEM. The reaction rate increased with increasing temperature and increasing inlet gas concentration. We also compared the experimental results with a basic reaction kinetic model, the JMA model, one of the nucleation and nuclei-growth models, which can explain the delay time in the early part of the reaction. As a result, the model data and experimental data agree over the examined conversion and time ranges, with an overall variance (R²) greater than 98%.
We also calculated the activation energy, pre-exponential factor and reaction order through the Arrhenius plot and compared them with those of previous Ni-based oxygen carriers.
Keywords: chemical looping combustion, kinetics, nickel-based, oxygen carrier, spray drying method
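The JMA conversion law and the Arrhenius relation used above can be written compactly; the parameter values below are illustrative only, not the fitted NiO results:

```python
import numpy as np

def jma_conversion(t, k, n):
    """JMA (Johnson-Mehl-Avrami) model: X(t) = 1 - exp(-(k*t)**n).
    The induction delay at small t appears when n > 1."""
    return 1.0 - np.exp(-(k * t) ** n)

def arrhenius_k(A, Ea, T, R=8.314):
    """Rate constant from pre-exponential factor A and activation
    energy Ea (J/mol) at temperature T (K): k = A * exp(-Ea / (R*T))."""
    return A * np.exp(-Ea / (R * T))

# Illustrative (hypothetical) parameters at 850 degC:
k = arrhenius_k(A=10.0, Ea=45_000, T=1123)
t = np.linspace(0, 60, 7)                 # minutes
print(np.round(jma_conversion(t, k, n=2.0), 3))
```

With n = 2 the conversion curve starts nearly flat before accelerating, which is the early-time delay the JMA model is used to capture.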
Procedia PDF Downloads 209
1418 Multiple-Material Flow Control in Construction Supply Chain with External Storage Site
Authors: Fatmah Almathkour
Abstract:
Managing and controlling the construction supply chain (CSC) are very important components of effective construction project execution. The goals of managing the CSC are to reduce uncertainty and optimize the performance of a construction project by improving efficiency and reducing project costs. The heart of much SC activity is addressing risk, and the CSC is no different. The delivery and consumption of construction materials are highly variable due to the complexity of construction operations, rapidly changing demand for certain components, lead-time variability from suppliers, transportation-time variability and disruptions at the job site. Current approaches to managing and controlling the CSC involve focusing on one project at a time with a push-based material-ordering system based on the initial construction schedule and then holding a tremendous amount of inventory. A two-stage methodology was proposed that coordinates feed-forward control of advanced order placement with a supplier with feedback local control in the form of the ability to transship materials between projects, to improve efficiency and reduce costs. It focuses on the single-supplier integrated production and transshipment problem with multiple products. The methodology is used as a design tool for the CSC because it includes an external storage site not associated with any one of the projects. The idea is to add this feature to a highly constrained environment to explore its effectiveness in buffering the impact of variability and maintaining the project schedule at low cost. The methodology uses deterministic optimization models with the objective of minimizing the total cost of the CSC. To illustrate how this methodology can be used in practice and the types of information that can be gleaned, it is tested on a number of cases based on the real example of multiple construction projects in Kuwait.
Keywords: construction supply chain, inventory control supply chain, transshipment
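A deterministic cost-minimization of the kind described can be sketched as a tiny linear program with an external storage node; the instance below (costs, demands, network) is hypothetical and not the paper's model:

```python
from scipy.optimize import linprog

# Nodes: one supplier (s), an external storage site, two projects (P1, P2),
# each needing 10 units. Decision variables, in order:
#   x = [s->P1, s->P2, s->store, store->P1, store->P2]
cost = [4.0, 6.0, 1.0, 2.0, 2.0]   # illustrative per-unit shipping costs

A_eq = [
    [1, 0, 0, 1, 0],               # P1 demand:  s->P1 + store->P1 = 10
    [0, 1, 0, 0, 1],               # P2 demand:  s->P2 + store->P2 = 10
    [0, 0, 1, -1, -1],             # storage balance: inflow = outflow
]
b_eq = [10, 10, 0]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 5, method="highs")
print(res.fun)                     # minimum total shipping cost
```

In this instance routing everything through storage (cost 1 + 2 = 3 per unit) beats both direct routes, so the optimum ships all 20 units via the external site for a total cost of 60, illustrating how the storage node buffers the network.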
Procedia PDF Downloads 122
1417 Expanding Entrepreneurial Capabilities through Business Incubators: A Case Study of Idea Hub Nigeria
Authors: Kenechukwu Ikebuaku
Abstract:
Entrepreneurship has long been offered as the panacea for poor economic growth and high rates of unemployment. Business incubation is considered an effective means of enhancing entrepreneurial activities while engendering socio-economic development. The Information Technology Developers Entrepreneurship Accelerator (iDEA) is a software business incubation programme established by the Nigerian government as a means of boosting digital entrepreneurship activities and reducing unemployment in the country. This study assessed the contribution of iDEA Nigeria's entrepreneurship programmes towards enhancing the capabilities of its tenants. Using the capability approach and the sustainable livelihoods approach, the study analysed the contribution of iDEA programmes towards the expansion of participants' entrepreneurial capabilities. Apart from identifying a set of entrepreneurial capabilities from both the literature and the empirical analysis, the study went further to ascertain how iDEA incubation has helped to enhance those capabilities for its tenants. It also examined digital entrepreneurship as a valued functioning and as an intermediate functioning leading to other valuable functionings, and examined gender as a conversion factor in digital entrepreneurship. Both qualitative and quantitative research methods were used for the study, and measurements of the key variables were made. While the entire population was targeted in the quantitative survey, purposive sampling was used to select respondents for semi-structured interviews in the qualitative research. However, only 40 beneficiaries agreed to take part in the survey, while 10 respondents were interviewed for the study. Responses collected from the administered questionnaires were subjected to statistical analysis using SPSS. The study developed indexes to measure the respondents' perception of how iDEA programmes have enhanced their entrepreneurial capabilities.
The computed Capabilities Enhancement Perception Index (CEPI) indicated that the respondents believed that iDEA programmes enhanced their entrepreneurial capabilities. While access to power supply and reliable internet had the highest positive deviations around the mean, negotiation skills and access to customers/clients had the highest negative deviations. These findings were well supported by the qualitative analysis, in which the participants unequivocally narrated how the resources provided by iDEA aided their entrepreneurial endeavours. It was also found that iDEA programmes have a significant effect on the tenants' access to networking opportunities, both with other emerging entrepreneurs and with established entrepreneurs. When assessing gender as a conversion factor, it was discovered that female participation within the digital entrepreneurship ecosystem was very low. The root cause of this gender disparity was found in unquestioned cultural beliefs and social norms which relegate women to a subservient position and household duties. The findings also showed that many of the entrepreneurs could be considered opportunity-based rather than necessity entrepreneurs, and that digital entrepreneurship is a valued functioning for iDEA tenants. With regard to the challenges facing digital entrepreneurship in Nigeria, infrastructural and institutional inadequacies, a lack of funding opportunities, and unfavourable government policies were considered inimical to entrepreneurial capabilities in the country.
Keywords: entrepreneurial capabilities, unemployment, business incubators, development
Procedia PDF Downloads 236
1416 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R, with web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that the Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) are presented, and the results and performance metrics discussed.
Keywords: time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning
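The paper's pipeline runs in R and Azure Machine Learning; as an analogous sketch in Python, a boosted-tree regressor on synthetic hourly data illustrates why hour-of-day and weather features help (all data below are generated, not the paper's):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Synthetic hourly microgrid data: load depends on temperature and on the
# hour of day (a stand-in for the weather features the study found useful).
hours = np.arange(24 * 60) % 24
temp = 20 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
load = (50 + 2.0 * temp
        + 10 * ((hours >= 8) & (hours <= 18))   # daytime demand bump
        + rng.normal(0, 2, hours.size))

X = np.column_stack([hours, temp])
split = 24 * 50                                 # train on 50 days, test on 10
model = GradientBoostingRegressor().fit(X[:split], load[:split])
mae = np.abs(model.predict(X[split:]) - load[split:]).mean()
print(round(mae, 2))
```

Dropping the temperature column from X noticeably worsens the test MAE here, mirroring the paper's finding that weather inputs are crucial for short-term load forecasts.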
Procedia PDF Downloads 297
1415 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling
Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow
Abstract:
Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
Keywords: dynamic modeling, missing data, mobility, multiple imputation
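The pooling step of multiple imputation follows Rubin's rules: average the per-imputation estimates, and combine within- and between-imputation variance. A minimal sketch in Python with illustrative numbers (dynr.mi itself performs this in R):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool results from m imputed data sets with Rubin's rules.

    estimates: per-imputation parameter estimates
    variances: per-imputation squared standard errors
    Returns (pooled estimate, total variance T = W + (1 + 1/m) * B).
    """
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    m = est.size
    qbar = est.mean()            # pooled point estimate
    w = var.mean()               # within-imputation variance
    b = est.var(ddof=1)          # between-imputation variance
    return qbar, w + (1 + 1 / m) * b

# Illustrative results from m = 5 imputations:
print(pool_rubin([0.50, 0.52, 0.48, 0.51, 0.49],
                 [0.010, 0.011, 0.009, 0.010, 0.010]))
```

The between-imputation term B is what inflates the pooled standard error to reflect uncertainty about the missing values, which listwise deletion ignores.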
Procedia PDF Downloads 164
1414 Potential Effects of Climate Change on Streamflow, Based on the Occurrence of Severe Floods in Kelantan, East Coasts of Peninsular Malaysia River Basin
Authors: Muhd. Barzani Gasim, Mohd. Ekhwan Toriman, Mohd. Khairul Amri Kamarudin, Azman Azid, Siti Humaira Haron, Muhammad Hafiz Md. Saad
Abstract:
Malaysia is a country in Southeast Asia that is constantly exposed to flooding and landslides. These disasters have caused problems such as loss of property, loss of life and hardship for the people involved, and they occur as a result of climate change, which increases streamflow rates by disrupting regional hydrological cycles. The aim of the study is to determine the hydrologic processes of river basins on the east coast of Peninsular Malaysia, especially the Kelantan Basin, parameterized to account for the spatial and temporal variability of basin characteristics and their responses to climate variability. For the hydrological modeling of the basin, the Soil and Water Assessment Tool (SWAT) model is applied using inputs such as relief, soil type and land use, together with historical daily time series of climate and river flow rates. The interpretation of Landsat maps of land use is also applied in this study. With the combined SWAT and climate models, the system predicts, under a future climate scenario, increases in precipitation, surface runoff, recharge and total water yield. As a result, this model has successfully supported the basin analysis, allowing hydrographs to be analyzed visually and giving good estimates of the minimum and maximum flows and the severe floods observed during the calibration and validation periods.
Keywords: east coasts of Peninsular Malaysia, Kelantan river basin, minimum and maximum flows, severe floods, SWAT model
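SWAT's surface-runoff component is commonly the SCS curve-number relation, which links storm rainfall to runoff depth through a single retention parameter. A minimal sketch (the rainfall and CN value below are illustrative, not the Kelantan calibration):

```python
def scs_runoff(p_mm, cn):
    """SCS curve-number runoff depth (mm), the rainfall-runoff relation
    used inside SWAT. p_mm is storm rainfall (mm); cn is the curve
    number (0-100), higher for less permeable catchments."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = 0.2 * s                  # initial abstraction before runoff starts
    if p_mm <= ia:
        return 0.0                # all rainfall absorbed, no runoff
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Illustrative: an 80 mm storm on a catchment with hypothetical CN = 85.
print(round(scs_runoff(80.0, 85), 1))
```

Raising CN (e.g. through land-use change toward impervious surfaces) sharply increases the runoff the same storm produces, which is how the model links land use to flood response.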
Procedia PDF Downloads 262
1413 Design and Development of a Mechanical Force Gauge for the Square Watermelon Mold
Authors: Morteza Malek Yarand, Hadi Saebi Monfared
Abstract:
This study aimed at designing and developing a mechanical force gauge for the square watermelon mold for the first time. It also introduces the characteristics of the square watermelon and its production limitations, and describes the performance of the mechanical force gauge and the product itself. There are three main designable gauge models: (a) a hydraulic gauge, (b) a strain gauge and (c) a mechanical gauge. The advantage of the hydraulic model is that it instantly displays the pressure, and thus the force, exerted by the melon. However, considering its inability to measure forces in all directions, its complicated development, its high cost, the possibility of hydraulic fluid leaking into the fruit chamber and the possible influence of increased ambient temperature on the fluid pressure, the development of this gauge was ruled out. The second choice was to calculate pressure from the force measured directly by a strain gauge. The main advantage of strain gauges over spring types is their high measurement precision; however, because the working range of the strain gauge does not conform to watermelon growth, the calculations ran into problems. Finally, the mechanical pressure gauge has several advantages, including the ability to measure forces and pressures on the mold surface during melon growth; the ability to display the peak forces; the ability to produce a melon growth graph thanks to its continuous force measurements; the conformity of its manufacturing materials with the physical conditions required for melon growth; high air-conditioning capability; the ability to let sunlight reach the melon rind (no yellowish skin or quality loss); fast and straightforward calibration; no damage to the product during assembly and disassembly; visual check capability of the product within the mold; applicability to all growth environments (field, greenhouse, etc.); a simple process; and low cost.
Keywords: mechanical force gauge, mold, reshaped fruit, square watermelon
Procedia PDF Downloads 273
1412 Dietary Pattern derived by Reduced Rank Regression is Associated with Reduced Cognitive Impairment Risk in Singaporean Older Adults
Authors: Kaisy Xinhong Ye, Su Lin Lim, Jialiang Li, Lei Feng
Abstract:
Background: Multiple healthful dietary patterns have been linked with dementia, but limited studies have looked at the role of diet in cognitive health in Asians, whose eating habits are very different from those of their counterparts in the West. This study aimed to derive a dietary pattern associated with the risk of cognitive impairment (CI) in the Singaporean population. Method: The analysis was based on 719 community-dwelling older adults aged 60 and above. Dietary intake was measured using a validated semi-quantitative food-frequency questionnaire (FFQ). Reduced rank regression (RRR) was used to extract a dietary pattern from 45 food groups, specifying sugar, dietary fiber, vitamin A, calcium, and the ratio of polyunsaturated fat to saturated fat intake (P:S ratio) as response variables. The RRR-derived dietary pattern was subsequently investigated using multivariate logistic regression models to look for associations with the risk of CI. Results: A dietary pattern characterized by greater intakes of green leafy vegetables, red-orange vegetables, wholegrains, tofu and nuts, and lower intakes of biscuits, pastries, local sweets, coffee, poultry with skin, sugar added to beverages, malt beverages, roti, butter and fast food was associated with a reduced risk of CI [multivariable-adjusted OR comparing extreme quintiles, 0.29 (95% CI: 0.11, 0.77); P-trend = 0.03]. This pattern was positively correlated with the P:S ratio, vitamin A and dietary fiber, and negatively correlated with sugar. Conclusion: A dietary pattern providing a high P:S ratio, vitamin A and dietary fiber, and a low level of sugar may reduce the risk of cognitive impairment in old age. The findings have significance in guiding local Singaporeans towards dementia prevention through food-based dietary approaches.
Keywords: dementia, cognitive impairment, diet, nutrient, elderly
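Reduced rank regression can be sketched as an ordinary least-squares fit of the responses on the food groups, followed by an SVD of the fitted values to extract the leading pattern score. The toy data below are synthetic stand-ins, not the FFQ data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in: 200 subjects x 6 food groups (X) and 3 response nutrients (Y),
# generated so that one latent "diet axis" drives all responses.
X = rng.normal(size=(200, 6))
true_w = np.array([1.0, -0.5, 0.8, 0.0, 0.0, 0.0])
signal = X @ true_w
Y = np.column_stack([signal, 0.5 * signal, -signal]) + rng.normal(0, 0.5, (200, 3))

# Rank-1 reduced rank regression: OLS fit, then SVD of the fitted values.
B = np.linalg.lstsq(X, Y, rcond=None)[0]        # 6 x 3 coefficient matrix
U, s, Vt = np.linalg.svd(X @ B, full_matrices=False)
pattern_score = X @ B @ Vt[0]                   # first dietary-pattern score

print(abs(np.corrcoef(pattern_score, signal)[0, 1]))
```

The recovered pattern score correlates strongly with the latent axis that generated the responses, which is the sense in which RRR extracts the food-group combination that best explains the chosen response nutrients.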
Procedia PDF Downloads 82
1411 Savinglife®: An Educational Technology for Basic and Advanced Cardiovascular Life Support
Authors: Naz Najma, Grace T. M. Dal Sasso, Maria de Lourdes de Souza
Abstract:
The development of information and communication technologies and the accessibility of mobile devices have increased the possibilities of teaching and learning anywhere and anytime. Mobile and web applications allow the production of constructive teaching and learning models in various educational settings, showing the potential for active learning in nursing. The objective of this study was to present the development of an educational technology (Savinglife®, an app) for learning cardiopulmonary resuscitation and advanced cardiovascular life support training. Savinglife® is a technological production based on the concepts of virtual learning and a problem-based learning approach. The study was developed from January 2016 to November 2016, using the five phases (analyze, design, develop, implement, evaluate) of the instructional systems development process. The technology presents 10 scenarios and 12 simulations covering different aspects of basic and advanced cardiac life support. The contents can be accessed in a non-linear way, leaving students free to build their knowledge based on their previous experience. Each scenario is presented through interactive tools such as scenario description, assessment, diagnosis, intervention, and reevaluation. Animated ECG rhythms, text documents, images, and videos support procedural and active learning grounded in real-life situations. Equally accessible on small and large devices, with or without an internet connection, Savinglife® offers a dynamic, interactive, and flexible tool that places students at the center of the learning process. Savinglife® can contribute to students' learning in the assessment and management of basic and advanced cardiac life support in a safe and ethical way.
Keywords: problem-based learning, cardiopulmonary resuscitation, nursing education, advanced cardiac life support, educational technology
Procedia PDF Downloads 305
1410 Optimum Structural Wall Distribution in Reinforced Concrete Buildings Subjected to Earthquake Excitations
Authors: Nesreddine Djafar Henni, Akram Khelaifia, Salah Guettala, Rachid Chebili
Abstract:
Reinforced concrete shear walls, as vertical plate-like elements, play a pivotal role in efficiently managing a building's response to seismic forces. This study investigates how reinforced concrete buildings equipped with shear walls of different shear wall-to-frame stiffness ratios perform against the requirements stipulated in the Algerian seismic code RPA99v2003, particularly in high-seismicity regions. Seven distinct 3D finite element models are developed and evaluated through nonlinear static (pushover) analysis. Engineering Demand Parameters (EDPs) such as lateral displacement, inter-story drift ratio, shear force, and bending moment along the building height are analyzed. The findings reveal two predominant categories of induced response: force-based and displacement-based EDPs. As the shear wall-to-frame stiffness ratio increases, force-based EDPs increase while displacement-based EDPs decrease. Examining the distribution of shear walls from both perspectives: model G, with the highest stiffness ratio and stiffness concentrated at the building's center, intensifies the induced forces; this configuration necessitates additional reinforcement, leading to a conservative design. Conversely, model C, with the lowest stiffness ratio and stiffness distributed toward the periphery, minimizes the induced shear forces and bending moments, representing the optimal scenario with maximal performance and minimal strength requirements.
Keywords: dual RC buildings, RC shear walls, modeling, static nonlinear pushover analysis, optimization, seismic performance
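One of the displacement-based EDPs named in the abstract, the inter-story drift ratio, is computed directly from a lateral displacement profile: the relative displacement between consecutive floors divided by the story height. A minimal sketch with made-up numbers, not the study's model:

```python
def interstory_drift_ratios(floor_displacements, story_heights):
    """Inter-story drift ratio per story: relative lateral displacement
    between consecutive floors divided by the story height.
    floor_displacements starts at ground level (index 0); consistent units."""
    return [
        (floor_displacements[i + 1] - floor_displacements[i]) / h
        for i, h in enumerate(story_heights)
    ]
```

In a pushover study, profiles like this would be extracted at each load step and the peak drift ratio per story checked against the code's drift limit.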
Procedia PDF Downloads 56
1409 Experimental and Simulation Analysis of an Innovative Steel Shear Wall with Semi-Rigid Beam-to-Column Connections
Authors: E. Faizan, Wahab Abdul Ghafar, Tao Zhong
Abstract:
Steel plate shear walls (SPSWs) are a robust lateral load-resisting system because of their high ductility and efficient energy dissipation under seismic loads. This research investigates the seismic performance of an innovative infill web strip steel plate shear wall (IWS-SPSW) and a typical unstiffened steel plate shear wall (USPSW). Two 1:3-scale, single-story, single-bay specimens of an IWS-SPSW and a USPSW were built and subjected to a cyclic lateral loading protocol. In the prototype, the beam-to-column connections were made with semi-rigid end-plate connectors. During testing, the IWS-SPSW demonstrated exceptional ductility and shear load-bearing capacity, with no cracks or other damage occurring, and it dissipated energy effectively without significant distortion of the beam-to-column connections. The USPSW also showed exceptional shear load-bearing capacity but exhibited low ductility, severe tearing at the infill plate corners, and large cracks in the infill web plate. FE models were created and validated against the experimental data, demonstrating that the infill web strips of an SPSW system contribute to its high performance and total energy dissipation. A parametric analysis was also carried out on the material properties of the IWS, including the steel strength and plate thickness, which can considerably improve the system's seismic performance.
Keywords: steel shear walls, seismic performance, failure mode, hysteresis response, nonlinear finite element analysis, parametric study
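The energy dissipation compared in tests like these is typically quantified as the area enclosed by each force-displacement hysteresis loop. A minimal sketch using the shoelace formula over one closed cycle, with illustrative data only (not the specimens' measured response):

```python
import numpy as np

def dissipated_energy(displacement, force):
    """Energy dissipated in one closed hysteresis cycle: the area enclosed
    by the force-displacement loop, via the shoelace formula.
    Points must trace the loop in order; units must be consistent."""
    d = np.asarray(displacement, dtype=float)
    f = np.asarray(force, dtype=float)
    return 0.5 * abs(np.dot(d, np.roll(f, -1)) - np.dot(f, np.roll(d, -1)))
```

Summing this quantity over successive cycles gives the cumulative dissipated energy often reported alongside hysteresis curves.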
Procedia PDF Downloads 75