Search results for: sustainable supply chain performance
84 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College
Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa
Abstract:
This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adapted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbine design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data, obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient-of-power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with the assumptions needed to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs without having to rely on empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, bring them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling
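The wind-modeling step lends itself to a short illustration. Below is a minimal Python sketch of fitting a two-parameter Weibull distribution (a common parametric choice for wind speeds) to site data and using the fitted density to weight a speed range, roughly the role the model plays in feeding the GA/BEM optimization. The data, estimator choice, and weighting scheme are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch of the wind-modeling step: fit a Weibull distribution to
# (synthetic) wind-speed samples and use the fitted density as a weight.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
wind = rng.weibull(2.0, 2000) * 6.0      # mock rooftop wind speeds (m/s)

# Maximum-likelihood fit with the location parameter fixed at zero
shape, loc, scale = stats.weibull_min.fit(wind, floc=0)
print(f"Weibull shape k = {shape:.2f}, scale c = {scale:.2f} m/s")

# The fitted model can then feed the GA/BEM optimization, e.g. by weighting
# the power coefficient over a range of speeds by the Weibull density
speeds = np.linspace(0.5, 20, 50)
weights = stats.weibull_min.pdf(speeds, shape, loc=0, scale=scale)
print(f"density integrates to {np.trapz(weights, speeds):.3f} over 0.5-20 m/s")
```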
Procedia PDF Downloads 232
83 A Proposed Treatment Protocol for the Management of Pars Interarticularis Pathology in Children and Adolescents
Authors: Paul Licina, Emma M. Johnston, David Lisle, Mark Young, Chris Brady
Abstract:
Background: Lumbar pars pathology is a common cause of pain in the growing spine. It can be seen in young athletes participating in at-risk sports and can affect sporting performance and long-term health due to its resistance to traditional management. There is currently a lack of consensus on the classification and treatment of pars injuries. Previous systems used CT to stage pars defects but could not assess early stress reactions. A modified classification is proposed that considers findings on MRI, significantly improving early treatment guidance. The treatment protocol is designed for patients aged 5 to 19 years. Method: Clinical screening identifies patients with a low, medium, or high index of suspicion for lumbar pars injury using patient age, sport participation, and pain characteristics. MRI of the at-risk cohort enables augmentation of the existing CT-based classification while avoiding ionising radiation. Patients are classified into five categories based on MRI findings. A type 0 lesion (stress reaction) is present when CT is normal and MRI shows high signal change (HSC) in the pars/pedicle on T2 images. A type 1 lesion represents the 'early defect' CT classification. The group previously referred to as a 'progressive stage' defect on CT can be split into 2A and 2B categories: 2As have HSC on MRI, whereas 2Bs do not. This distinction is important with regard to healing potential. Type 3 lesions are terminal-stage defects on CT, characterised by pseudarthrosis; MRI shows no HSC. Results: Stress reactions (type 0) and acute fractures (types 1 and 2A) can heal and are treated in a custom-made hard brace for 12 weeks. It is initially worn 23 hours per day. At three weeks, patients commence basic core rehabilitation. At six weeks, in the absence of pain, the brace is removed for sleeping. Exercises are progressed to positions of daily living. Patients with continued pain remain braced 23 hours per day without exercise progression until becoming symptom-free. At nine weeks, patients commence supervised exercises out of the brace for 30 minutes each day. This allows them to re-learn muscular control without the rigid support of the brace. At 12 weeks, bracing ceases and MRI is repeated. For patients with near or complete resolution of bony oedema and healing of any cortical defect, rehabilitation is focused on strength and conditioning and sport-specific exercise for the full return to activity. The length of this final stage is approximately nine weeks but depends on factors such as development and level of sports participation. If significant HSC remains on MRI, CT is considered to definitively assess cortical defect healing. For these patients, return to high-risk sports is delayed for up to three months. Chronic defects (2B and 3) cannot heal and are not braced; rehabilitation follows traditional protocols. Conclusion: Appropriate clinical screening and imaging with MRI can identify pars pathology early. In those with potential for healing, we propose hard bracing and appropriate rehabilitation as part of a multidisciplinary management protocol. The validity of this protocol will be tested in future studies.
Keywords: adolescents, MRI classification, pars interarticularis, treatment protocol
Procedia PDF Downloads 153
82 Integrated Approach Towards Safe Wastewater Reuse in Moroccan Agriculture
Authors: Zakia Hbellaq
Abstract:
The Mediterranean region is considered a hotspot for climate change. Morocco is a semi-arid Mediterranean country facing water shortages and poor water quality, and its limited water resources constrain the activities of various economic sectors. Most of Morocco's territory lies in arid and desert areas. The potential water resources are estimated at 22 billion m³, equivalent to about 700 m³/inhabitant/year, which places Morocco in a state of structural water stress. Strictly speaking, the Kingdom of Morocco is one of the riskiest countries according to the World Resources Institute (WRI), which calculates water stress risk in 167 countries. The WRI ranks Morocco among the countries most at risk of water scarcity, with a score of 3.89 out of 5, placing it 23rd out of 167 countries; this indicates that demand for water exceeds the available resources. Agriculture is the sector most affected by water stress, and irrigation places a heavy burden on the water table. Irrigation is an unavoidable technical need given the available resources and climatic conditions, and the agricultural sector currently uses 86% of water resources, while industry uses 5.5%. Although its development has undeniable economic and social benefits, it also contributes to the overexploitation of most groundwater resources and to an alarming decline in water levels and deterioration of water quality in some aquifers. In this context, REUSE (reuse of treated wastewater) is one of the proposed solutions to reduce the water footprint of the agricultural sector and alleviate the shortage of water resources. Indeed, wastewater reuse is a step forward not only for the circular economy but also for the future, especially in the context of climate change. In particular, water reuse provides an alternative to existing water supplies and can be used to improve water security, sustainability, and resilience. However, given the presence of organic trace pollutants (organic micro-pollutants), emerging contaminants, and salinity, innovative treatment capabilities must be mobilized to overcome these problems and ensure food and health safety. To this end, attention will be paid to the adoption of an integrated approach based on the reinforcement and optimization of the treatments proposed for the elimination of the organic load, with particular attention to the elimination of emerging pollutants. Since membrane bioreactors (MBR) as stand-alone technologies are not able to meet the requirements of WHO guidelines, they will be combined with heterogeneous Fenton processes using persulfate or hydrogen peroxide oxidants. Similarly, adsorption and filtration are applied as tertiary treatment. In addition, crop performance will be evaluated in terms of yield, productivity, quality, and safety, through the optimization of Trichoderma sp. strains used to increase crop resistance to abiotic stresses, and through modern omics tools such as transcriptomic analysis using RNA sequencing and methylation profiling to identify adaptive traits and the associated genetic diversity that is tolerant/resistant/resilient to biotic and abiotic stresses. Ensuring this approach will help alleviate water scarcity and, likewise, reduce the negative and harmful impact of wastewater irrigation on the condition of crops and the health of their consumers.
Keywords: water scarcity, food security, irrigation, agricultural water footprint, reuse, emerging contaminants
Procedia PDF Downloads 163
81 Chemopreventive Properties of Cannabis sativa L. var. USO31 in Relation to Its Phenolic and Terpenoid Content
Authors: Antonella Di Sotto, Cinzia Ingallina, Caterina Fraschetti, Simone Circi, Marcello Locatelli, Simone Carradori, Gabriela Mazzanti, Luisa Mannina, Silvia Di Giacomo
Abstract:
Cannabis sativa L. is one of the oldest cultivated plant species, known not only for its recreational use but also for its wide application in the food, textile, and therapeutic industries. Recently, progress in biotechnologies applied to medicinal plants has made it possible to produce hemp varieties with a low content of psychotropic phytoconstituents (tetrahydrocannabinol < 0.2% w/v), thus leading to renewed industrial and therapeutic interest in this plant. In this context, in order to discover new potential remedies of pharmaceutical and/or nutraceutical interest, the chemopreventive properties of different organic and hydroalcoholic extracts, obtained from the inflorescences of C. sativa L. var. USO31 collected in the June and September harvests, were assessed. In particular, the antimutagenic activity towards the oxidative DNA damage induced by tert-butyl hydroperoxide (t-BOOH) was evaluated, and the DPPH (2,2-diphenyl-1-picrylhydrazyl) and ABTS (2,2'-azino-bis-3-ethylbenzthiazoline-6-sulphonic acid) radical scavenging power of the samples was assessed as possible mechanisms of antimutagenicity. Furthermore, the ability of the extracts to inhibit glucose-6-phosphate dehydrogenase (G6PD), whose overexpression has been found to play a critical role in neoplastic transformation and tumor progression, was studied as a possible chemopreventive strategy. A careful phytochemical characterization of the extracts for phenolic and terpenoid composition was obtained by high performance liquid chromatography (HPLC) and gas chromatography-mass spectrometry (GC-MS) methods. Under our experimental conditions, all the extracts were able to interfere with the t-BOOH-induced mutagenicity in the WP2uvrAR strain, although with different potency and effectiveness. The organic extracts from both harvesting periods were the most effective antimutagenic samples, reaching about 55% inhibition of t-BOOH mutagenicity at the highest concentration tested (250 μg/ml). All the extracts exhibited radical scavenging activity against DPPH and ABTS radicals, with the hydroalcoholic samples showing higher potency. The organic extracts were also able to inhibit the G6PD enzyme, with the samples from the September harvest being the most potent (about 50% inhibition with respect to the vehicle). Phytochemical analysis showed that all the extracts contain both polar and apolar phenolic compounds. HPLC analysis revealed the presence of catechin and rutin as the major constituents of the hydroalcoholic extracts, with lower levels of quercetin and ferulic acid. The monoterpene carvacrol was found to be a ubiquitous constituent. GC-MS analysis identified different terpenoids, among which were caryophyllene sesquiterpenes. This evidence suggests a possible role of both polyphenols and terpenoids in the chemopreventive properties of the extracts from the inflorescences of C. sativa var. USO31. According to the literature, carvacrol and caryophyllene sesquiterpenes can contribute to the strong antimutagenicity, although a role for the whole hemp phytocomplex cannot be excluded. In conclusion, the present results highlight a possible interest in the inflorescences of C. sativa var. USO31 as a source of bioactive molecules and stimulate further studies to characterize their possible application for nutraceutical and pharmaceutical purposes.
Keywords: antimutagenicity, glucose-6-phosphate dehydrogenase, hemp inflorescences, nutraceuticals, sesquiterpenes
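For readers unfamiliar with the radical-scavenging assays mentioned above, the sketch below shows the standard DPPH inhibition calculation. The absorbance readings and concentrations are invented for illustration and are not the study's data.

```python
# Sketch of the standard DPPH radical-scavenging percentage:
# scavenging % = (A_blank - A_sample) / A_blank * 100
import numpy as np

a_blank = 0.92                                 # DPPH solution without extract
a_sample = np.array([0.80, 0.61, 0.44, 0.30])  # with increasing extract dose
scavenging = (a_blank - a_sample) / a_blank * 100
for conc, s in zip([31.25, 62.5, 125, 250], scavenging):  # ug/ml (assumed)
    print(f"{conc:6.2f} ug/ml -> {s:5.1f}% scavenging")
```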
Procedia PDF Downloads 158
80 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control
Authors: Marco Frieslaar, Bing Chu, Eric Rogers
Abstract:
Stroke is a devastating illness and the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies, which have restricted access, require expensive and specialist equipment, and tend to side-step the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance that is required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are formulated from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke rehabilitation therapy can be applied to upper limb mobility within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, that has the ability to adapt to the user, react to their level of fatigue, and provide tangible physical recovery. It utilizes a smartphone and laptop to construct an iterative learning control (ILC) system that monitors upper arm movement in three dimensions as a series of exercises is undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy. In addition, it monitors speed, accuracy, areas of motion weakness, and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate estimation of the predicted final outcome. To further extend its assessment capabilities, nerve conduction velocity readings are taken by the software between the shoulder and hand muscles. This is utilized to measure the speed of neuron signal transfer along the arm, and over time an online indication of regeneration levels can be obtained. This will establish whether or not sufficient training intensity is being achieved even before perceivable movement dexterity is observed. The device also provides the option to connect to other users via the internet, so that the patient can avoid feelings of isolation and can undertake movement exercises together with others in a similar position. This should create benefits not only for the encouragement of rehabilitation participation but also for a potential emotional support network. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at low cost, reduce susceptibility to depression, and, through these endeavors, enhance the overall recovery success rate.
Keywords: home-based therapy, iterative learning control, Riener muscle model, smartphone, stroke rehabilitation
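The core of the control scheme, an iterative learning control update applied trial after trial, can be sketched in a few lines. The plant below is a simple first-order stand-in, not the Riener muscle model the authors use, and the learning gain and trajectory are assumptions.

```python
# Minimal sketch of P-type iterative learning control (ILC): repeat a trial,
# then update the input with the previous trial's one-step-ahead error,
# u_{k+1}(t) = u_k(t) + L * e_k(t+1).
import numpy as np

T, dt = 2.0, 0.01                    # trial length (s) and time step
t = np.arange(0, T, dt)
ref = np.sin(np.pi * t / T)          # desired arm trajectory (assumed)

def plant(u):
    """First-order stand-in for the stimulated-muscle/arm dynamics."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = y[k-1] + dt * (-2.0 * y[k-1] + 3.0 * u[k-1])
    return y

u = np.zeros_like(t)                 # stimulation input, trial 0
L = 5.0                              # learning gain (assumed)
for trial in range(20):
    e = ref - plant(u)               # tracking error for this trial
    u[:-1] += L * e[1:]              # P-type update with shifted error
    print(f"trial {trial:2d}  RMS error = {np.sqrt(np.mean(e**2)):.4f}")
```

The printed RMS error shrinks from trial to trial, which is the sense in which the system "learns" the exercise across repetitions.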
Procedia PDF Downloads 265
79 Integrating the Modbus SCADA Communication Protocol with Elliptic Curve Cryptography
Authors: Despoina Chochtoula, Aristidis Ilias, Yannis Stamatiou
Abstract:
Modbus is a protocol that enables communication among devices connected to the same network. This protocol is often deployed to connect sensor and monitoring units to central supervisory servers in Supervisory Control and Data Acquisition (SCADA) systems. These systems monitor critical infrastructures, such as factories, power generation stations, nuclear power reactors, etc., in order to detect malfunctions and trigger alerts and corrective actions. However, due to their criticality, SCADA systems are vulnerable to attacks that range from simple eavesdropping on operation parameters, exchanged messages, and valuable infrastructure information to malicious modification of vital infrastructure data towards infliction of damage. Thus, the SCADA research community has been active in strengthening SCADA systems with suitable data protection mechanisms based, to a large extent, on cryptographic methods for data encryption, device authentication, and message integrity protection. However, due to the limited computation power of many SCADA sensor and embedded devices, the usual public key cryptographic methods are not appropriate because of their high computational requirements. As an alternative, Elliptic Curve Cryptography has been proposed, which requires smaller key sizes and, thus, less demanding cryptographic operations. Until now, however, no such implementation has been proposed in the SCADA literature, to the best of our knowledge. In order to fill this gap, our methodology focused on integrating Modbus, a frequently used SCADA communication protocol, with Elliptic Curve based cryptography, and on developing a server/client application as a proof of concept. For the implementation, we deployed two C language libraries, which were suitably modified in order to be successfully integrated: libmodbus (https://github.com/stephane/libmodbus) and ecc-lib (https://www.ceid.upatras.gr/webpages/faculty/zaro/software/ecc-lib/). The first library provides a C implementation of the Modbus/TCP protocol, while the second offers the functionality to develop cryptographic protocols based on Elliptic Curve Cryptography. These two libraries were combined, after suitable modifications and enhancements, to give a modified version of the Modbus/TCP protocol focusing on the security of the data exchanged among the devices and the supervisory servers. The mechanisms we implemented include key generation, key exchange/sharing, message authentication, data integrity checking, and encryption/decryption of data. The key generation and key exchange protocols were implemented using Elliptic Curve Cryptography primitives. The keys established by each device are saved in its local memory, retained during the whole communication session, and used for encrypting and decrypting exchanged messages as well as for certifying entities and the integrity of the messages. Finally, the modified library was compiled for the Android environment in order to run the server application as an Android app, while the client program runs on a regular computer. The communication between these two entities is an example of the successful establishment of an Elliptic Curve Cryptography based, secure Modbus wireless communication session between a portable device acting as a supervisor station and a monitoring computer. Our first performance measurements are also very promising and demonstrate the feasibility of embedding Elliptic Curve Cryptography into SCADA systems, filling a gap in the relevant scientific literature.
Keywords: elliptic curve cryptography, ICT security, modbus protocol, SCADA, TCP/IP protocol
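As an illustration of the kind of key establishment and message protection described above, the sketch below performs an elliptic-curve Diffie-Hellman exchange and then encrypts and authenticates a mock Modbus/TCP frame. It uses the Python 'cryptography' package rather than the authors' libmodbus/ecc-lib C integration, and the frame bytes are an assumed example.

```python
# Sketch: ECDH key agreement between supervisor and device, followed by
# authenticated encryption (AES-GCM) of a Modbus/TCP request.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# Each endpoint (supervisor and device) generates an EC key pair
supervisor_key = ec.generate_private_key(ec.SECP256R1())
device_key = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same session key from the ECDH shared secret
def session_key(own_private, peer_public):
    shared = own_private.exchange(ec.ECDH(), peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"modbus-session").derive(shared)

k_sup = session_key(supervisor_key, device_key.public_key())
k_dev = session_key(device_key, supervisor_key.public_key())
assert k_sup == k_dev

# Encrypt-then-authenticate a mock Modbus/TCP read request with AES-GCM
frame = bytes.fromhex("000100000006010300000002")  # assumed example frame
nonce = os.urandom(12)
ciphertext = AESGCM(k_sup).encrypt(nonce, frame, b"")
plaintext = AESGCM(k_dev).decrypt(nonce, ciphertext, b"")
assert plaintext == frame
```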
Procedia PDF Downloads 276
78 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore their ability to distinguish between controls and patients using mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM to obtain a rank of the most predictive variables. Thus, we built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensorimotor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient was found to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensorimotor I. Similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
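A compressed sketch of this train-rank-retrain workflow is given below using scikit-learn. The data are synthetic stand-ins for the 37-subject, 15-network dataset (the study itself used R), and the single-feature retraining mirrors the sensorimotor-network result only in spirit.

```python
# Sketch: train RF and RBF-SVM on network-signal features, rank features by
# the forest's Gini importance, then re-train on the top-ranked feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))        # 37 subjects x 15 network signals (mock)
y = rng.integers(0, 2, size=37)      # 0 = control, 1 = early MS (mock labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("RF test accuracy :", rf.score(X_te, y_te))
print("SVM test accuracy:", svm.score(X_te, y_te))

# Gini-based feature ranking, then a classifier on the single best feature
best = np.argsort(rf.feature_importances_)[::-1][:1]
rf_best = RandomForestClassifier(n_estimators=500, random_state=0)
rf_best.fit(X_tr[:, best], y_tr)
print("RF accuracy on top feature:", rf_best.score(X_te[:, best], y_te))
```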
Procedia PDF Downloads 241
77 Geotechnical Challenges for the Use of Sand-Sludge Mixtures in Covers for the Rehabilitation of Acid-Generating Mine Sites
Authors: Mamert Mbonimpa, Ousseynou Kanteye, Élysée Tshibangu Ngabu, Rachid Amrou, Abdelkabir Maqsoud, Tikou Belem
Abstract:
The management of mine wastes (waste rocks and tailings) containing sulphide minerals such as pyrite and pyrrhotite represents the main environmental challenge for the mining industry. Indeed, acid mine drainage (AMD) can be generated when these wastes are exposed to water and air. AMD is characterized by low pH and high concentrations of heavy metals, which are toxic to plants, animals, and humans; it affects the quality of the ecosystem through water and soil pollution. Different techniques involving soil materials can be used to control AMD generation, including impermeable covers (compacted clays) and oxygen barriers. The latter group includes covers with capillary barrier effects (CCBE), a multilayered cover that includes a moisture-retention layer playing the role of an oxygen barrier. Once AMD is produced at a mine site, it must be treated so that the final effluent complies with regulations and can be discharged into the environment. Active neutralization with lime is one of the treatment methods used. This treatment produces sludge that is usually stored in sedimentation ponds. Other sludge management alternatives have been examined in recent years, including sludge co-disposal with tailings or waste rocks, disposal in underground mine excavations, and storage in technical landfill sites. Considering the ability of AMD neutralization sludge to maintain an alkaline to neutral pH for decades or even centuries, due to the excess alkalinity induced by residual lime within the sludge, valorization of the sludge in specific applications could be an interesting management option. If done efficiently, the reuse of sludge could free up storage ponds and thus reduce the environmental impact. It should be noted that mixtures of sludge and soils could potentially constitute usable materials in CCBE for the rehabilitation of acid-generating mine sites, while sludge alone is not suitable for this purpose. The high sludge water content (up to 300%), even after sedimentation, can, however, constitute a geotechnical challenge. Adding lime to the mixtures can reduce the water content and improve the geotechnical properties. The objective of this paper is to investigate the impact of the sludge content (30, 40, and 50%) in sand-sludge mixtures (SSM) on their hydrogeotechnical properties (compaction, shrinkage behaviour, saturated hydraulic conductivity, and water retention curve). The impact of lime addition (dosages from 2% to 6%) on the moisture content, dry density after compaction, and saturated hydraulic conductivity of SSM was also investigated. Results showed that adding sludge to sand significantly improves the saturated hydraulic conductivity and water retention capacity, but shrinkage increases with sludge content. The dry density after compaction of lime-treated SSM increases with the lime dosage but remains lower than the optimal dry density of the untreated mixtures. The saturated hydraulic conductivity of lime-treated SSM after 24 hours of curing decreases by 3 orders of magnitude. Considering the hydrogeotechnical properties obtained with these mixtures, it would be possible to design a CCBE whose moisture-retention layer is made of SSM. Physical laboratory models confirmed the performance of such a CCBE.
Keywords: mine waste, AMD neutralization sludge, sand-sludge mixture, hydrogeotechnical properties, mine site reclamation, CCBE
Procedia PDF Downloads 57
76 Antibacterial Nanofibrous Film Encapsulated with 4-terpineol/β-cyclodextrin Inclusion Complexes: Relative Humidity-Triggered Release and Shrimp Preservation Application
Authors: Chuanxiang Cheng, Tiantian Min, Jin Yue
Abstract:
Antimicrobial active packaging enables extensive biological effects that improve food safety. However, the efficacy of antimicrobial packaging hinges on factors including the diffusion rate of the active agent toward the food surface, the initial content of the antimicrobial agent, and the targeted food shelf life. Among the possibilities of antimicrobial packaging design, an interesting approach involves the incorporation of volatile antimicrobial agents into the packaging material. In this case, the necessity for direct contact between the active packaging material and the food surface is mitigated, as the antimicrobial agent exerts its action through the packaging headspace atmosphere towards the food surface. However, it remains difficult to achieve controlled and precise release of bioactive compounds to a specific target location in the required quantity in food packaging applications. Remarkably, the development of stimuli-responsive materials for electrospinning has introduced the possibility of achieving controlled release of active agents under specific conditions, thereby yielding enduring biological effects. The relative humidity (RH) for the storage of food categories such as meat and aquatic products typically exceeds 90%. Consequently, high RH can be used as an abiotic trigger for the release of active agents to prevent microbial growth. Hence, a novel RH-responsive polyvinyl alcohol/chitosan (PVA/CS) composite nanofibrous film incorporating 4-terpineol/β-cyclodextrin inclusion complexes (4-TA@β-CD ICs) was engineered by electrospinning; it can be deposited as a functional packaging material. The characterization results showed that the thermal stability of the films was enhanced after the incorporation, owing to hydrogen bonds between the ICs and the polymers. Remarkably, the 4 wt% 4-TA@β-CD ICs/PVA/CS film exhibited enhanced crystallinity, moderate hydrophilicity (water contact angle of 81.53°), light barrier properties (transparency of 1.96%), and water resistance (water vapor permeability of 3.17 g·mm/m²·h·kPa). Moreover, this film showed optimized mechanical performance, with a Young's modulus of 11.33 MPa, a tensile strength of 19.99 MPa, and an elongation at break of 4.44%. Notably, the antioxidant and antibacterial properties of this packaging material were significantly improved. The film demonstrated half-inhibitory concentration (IC50) values of 87.74% and 85.11% for scavenging 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azinobis (3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) free radicals, respectively, in addition to an inhibition efficiency of 65% against Shewanella putrefaciens, the characteristic spoilage bacterium in aquatic products. Most importantly, the film achieved controlled release of 4-TA under high RH (98%) through water-induced plasticization of the polymers, swelling of the polymer chains, and disruption of the hydrogen bonds within the cyclodextrin inclusion complex. Consequently, low relative humidity is suitable for storage of the nanofibrous film, while the high-humidity conditions typical of fresh food packaging environments effectively stimulate the release of the active compounds. This film, with its long-term antimicrobial effect, successfully extended the shelf life of Litopenaeus vannamei shrimp to 7 days at 4 °C. This attractive design could pave the way for the development of new food packaging materials.
Keywords: controlled release, electrospinning, nanofibrous film, relative humidity–responsive, shrimp preservation
Procedia PDF Downloads 71
75 The Negative Effects of Controlled Motivation on Mathematics Achievement
Authors: John E. Boberg, Steven J. Bourgeois
Abstract:
The decline in student engagement and motivation through the middle years is well documented and clearly associated with a decline in mathematics achievement that persists through high school. To combat this trend, and very often to meet high-stakes accountability standards, a growing number of parents, teachers, and schools have implemented various methods to incentivize learning. However, according to Self-Determination Theory, forms of incentivized learning such as public praise, tangible rewards, or threats of punishment tend to undermine intrinsic motivation and learning. By focusing on external forms of motivation that thwart autonomy in children, adults also potentially threaten relatedness measures such as trust and emotional engagement. Furthermore, these controlling motivational techniques tend to promote shallow forms of cognitive engagement at the expense of more effective deep-processing strategies. Therefore, any short-term gains in apparent engagement or test scores are overshadowed by long-term diminished motivation, resulting in inauthentic approaches to learning and lower achievement. The current study focuses on the relationships between student trust, engagement, and motivation during these crucial years as students transition from elementary to middle school. In order to test the effects of controlled motivational techniques on achievement in mathematics, this quantitative study was conducted on a convenience sample of 22 elementary and middle schools from a single public charter school district in the south-central United States. The study employed multi-source data from students (N = 1,054), parents (N = 7,166), and teachers (N = 356), along with student achievement data and contextual campus variables. Cross-sectional questionnaires were used to measure the students' self-regulated learning, emotional and cognitive engagement, and trust in teachers. Parents responded to a single item on incentivizing the academic performance of their child, and teachers responded to a series of questions about their acceptance of various incentive strategies. Structural equation modeling (SEM) was used to evaluate model fit and analyze the direct and indirect effects of the predictor variables on achievement. Although a student's trust in the teacher positively predicted both emotional and cognitive engagement, none of these three predictors accounted for any variance in achievement in mathematics. The parents' use of incentives, on the other hand, predicted a student's perception of his or her controlled motivation, and these two variables had significant negative effects on achievement. While controlled motivation had the greatest effect on achievement, parental incentives demonstrated both direct and indirect effects on achievement through the students' self-reported controlled motivation. Comparing upper elementary student data with middle-school student data revealed that controlling forms of motivation may be taking their toll on student trust and engagement over time. While parental incentives positively predicted both cognitive and emotional engagement in the younger sub-group, such forms of controlling motivation negatively predicted both trust in teachers and emotional engagement in the middle-school sub-group. These findings support the claims, posited by Self-Determination Theory, about the dangers of incentivizing learning: short-term gains belie the underlying damage to motivational processes, leading to decreased intrinsic motivation and achievement. Such practices also appear to thwart basic human needs such as relatedness.
Keywords: controlled motivation, student engagement, incentivized learning, mathematics achievement, self-determination theory, student trust
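The direct/indirect-effect decomposition reported above follows the standard product-of-coefficients logic, sketched below with two OLS regressions on simulated data; the path coefficients and sample are illustrative assumptions, not the study's SEM estimates.

```python
# Sketch of direct and indirect effects via product of coefficients:
# incentives -> controlled motivation -> achievement.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
incentives = rng.normal(size=n)                    # parental incentives (mock)
ctrl_mot = 0.5 * incentives + rng.normal(size=n)   # controlled motivation
achieve = -0.4 * ctrl_mot - 0.1 * incentives + rng.normal(size=n)

# Path a: incentives -> controlled motivation
a = sm.OLS(ctrl_mot, sm.add_constant(incentives)).fit().params[1]
# Paths b (motivation -> achievement) and c' (direct effect of incentives)
X = sm.add_constant(np.column_stack([ctrl_mot, incentives]))
b, c_direct = sm.OLS(achieve, X).fit().params[1:3]

print(f"indirect effect a*b = {a*b:+.3f}, direct effect c' = {c_direct:+.3f}")
```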
Procedia PDF Downloads 221
74 Modeling and Simulation of the Structural, Electronic and Magnetic Properties of Fe-Ni Based Nanoalloys
Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz
Abstract:
There is a growing interest in the modeling and simulation of magnetic nanoalloys by various computational methods. Magnetic crystalline/amorphous nanoparticles (NPs) are interesting materials from both the applied and fundamental points of view, as their properties differ from those of bulk materials and are essential for advanced applications such as high-performance permanent magnets, high-density magnetic recording media, drug carriers, sensors in biomedical technology, etc. As an important magnetic material, Fe-Ni based nanoalloys have promising applications in the chemical industry (catalysis, batteries), the aerospace and stealth industry (radar absorbing materials, jet engine alloys), magnetic biomedical applications (drug delivery, magnetic resonance imaging, biosensors), and the computer hardware industry (data storage). The physical and chemical properties of nanoalloys depend not only on the particle or crystallite size but also on composition and atomic ordering. Therefore, computer modeling is an essential tool to predict structural, electronic, magnetic, and optical behavior at the atomistic level and consequently reduce the time needed for designing and developing new materials with novel or enhanced properties. Although first-principles quantum mechanical methods provide the most accurate results, they require huge computational effort to solve the Schrödinger equation for only a few tens of atoms. On the other hand, the molecular dynamics method with appropriate empirical or semi-empirical interatomic potentials can give accurate results for the static and dynamic properties of larger systems in a short span of time. In this study, the structural evolution and the magnetic and electronic properties of Fe-Ni based nanoalloys have been studied by using the molecular dynamics (MD) method in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and Density Functional Theory (DFT) in the Vienna Ab initio Simulation Package (VASP). The effects of particle size (in the 2-10 nm range) and temperature (300-1500 K) on the stability and structural evolution of amorphous and crystalline Fe-Ni bulk/nanoalloys have been investigated by combining the MD simulation method with the Embedded Atom Model (EAM). EAM is applicable to Fe-Ni based bimetallic systems because it considers both the pairwise interatomic interaction potentials and the electron densities. The structural evolution of Fe-Ni bulk and nanoparticles (NPs) has been studied by calculating radial distribution functions (RDF), interatomic distances, coordination numbers, and core-to-surface concentration profiles, as well as through Voronoi analysis and the dependence of surface energy on temperature and particle size. Moreover, spin-polarized DFT calculations were performed using a plane-wave basis set with generalized gradient approximation (GGA) exchange and correlation effects in the VASP-MedeA package to predict the magnetic and electronic properties of Fe-Ni based alloys in bulk and nanostructured phases. The results of the theoretical modeling and simulations of the structural evolution and the magnetic and electronic properties of Fe-Ni based nanostructured alloys were compared with experimental and other theoretical results published in the literature.
Keywords: density functional theory, embedded atom model, Fe-Ni systems, molecular dynamics, nanoalloys
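Of the structural analyses listed, the radial distribution function is the most easily illustrated. The sketch below computes g(r) with the minimum-image convention for random points in a periodic box, a stand-in for the Fe-Ni MD snapshots; the box size and atom count are arbitrary assumptions.

```python
# Sketch of a radial distribution function g(r) calculation over a
# periodic box, the kind of structural analysis applied to MD snapshots.
import numpy as np

rng = np.random.default_rng(1)
L_box, n_atoms = 20.0, 500
pos = rng.uniform(0, L_box, size=(n_atoms, 3))   # mock atomic positions

r_max, n_bins = L_box / 2, 100
edges = np.linspace(0, r_max, n_bins + 1)
hist = np.zeros(n_bins)

for i in range(n_atoms - 1):
    d = pos[i+1:] - pos[i]
    d -= L_box * np.round(d / L_box)             # minimum-image convention
    r = np.linalg.norm(d, axis=1)
    hist += np.histogram(r[r < r_max], bins=edges)[0]

rho = n_atoms / L_box**3
shell = 4/3 * np.pi * (edges[1:]**3 - edges[:-1]**3)
g = 2 * hist / (n_atoms * rho * shell)           # factor 2: pairs counted once
print(g[:10])  # ~1 everywhere for random points; peaks reveal real structure
```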
Procedia PDF Downloads 245
73 Keratin Reconstruction: Evaluation of Green Peptides Technology on Hair Performance
Authors: R. Di Lorenzo, S. Laneri, A. Sacchi
Abstract:
Hair surface properties affect hair texture and shine, whereas the health of the hair cortex governs the condition of the hair ends. Even though cosmetic treatments are intrinsically safe, they have a potentially damaging action on the hair fibers. Loss of luster, frizz, split ends, and other hair problems are particularly prevalent among people who repeatedly alter the natural style of their hair or among people with intrinsically weak hair. Technological and scientific innovations in hair care thus become invaluable allies in preserving its natural well-being and shine. The study evaluated restoring keratin-like ingredients that improve the structural integrity of hair fibers, increase tensile strength, and improve hair manageability and moisturization. The hair shaft is composed of 65-95% keratin, which gives the hair resistance, elasticity, and plastic properties and also contributes to its waterproofing. Providing exogenous keratin is, therefore, a practical approach to protect and nourish the hair. Analyzing the amino acid composition of keratin reveals a high frequency of hydrophobic amino acids, which confirms the critical role of interactions, mainly hydrophobic, between cosmetic products and hair. The active ingredient analyzed comes from vegetable proteins obtained through an enzymatic cutting process that selected only oligo- and polypeptides (> 3500 kDa) rich in amino acids with apolar hydrocarbon or sulfur-containing side chains. These are the amino acids most expressed in the keratin structure of hair, which ensures the greatest possible compatibility with the target substrate. Given the biological variability of the sources, it is difficult to define a constant and reproducible molecular formula for the product; it consists of hydroxypropyltrimonium vegetable peptides with keratin-like performance. Twenty natural hair tresses (30 cm in length and 0.50 g in weight) were treated with the investigated products (5% v/v aqueous solution) following a specific protocol and compared with non-treated (control) and benchmark-keratin-treated strands (benchmark). Their brightness, moisture content, cortical and surface integrity, and tensile strength were evaluated and statistically compared. Keratin-like treated hair tresses showed better results than the other two groups (control and benchmark). The product improves the surface, with significant regularization of the cuticle closure; improves the filling of the cortex and the peri-medullar area; gives a highly organized and tidy structure; delivers a significant amount of sulfur to the hair; provides more efficient moisturization and imbibition; and increases hair brightness. The quaternized hydroxypropyltrimonium group added to the C-terminal end interacts with the negative charges that form on the hair after washing, when it is disheveled and tangled. These interactions anchor the product to the hair surface, keeping the cuticles adhered to the shaft. Their small size allows the peptides to penetrate and give body to the hair, together with a conditioning effect that gives an image of healthy hair. The results suggest that the product is a valid ally in numerous restructuring/conditioning, shaft protection, and straightener/dryer-damage prevention hair care products.
Keywords: conditioning, hair damage, hair, keratin, polarized light microscopy, scanning electron microscope, thermogravimetric analysis
Procedia PDF Downloads 125
72 Effects of Heart Rate Variability Biofeedback to Improve Autonomic Nerve Function, Inflammatory Response and Symptom Distress in Patients with Chronic Kidney Disease: A Randomized Control Trial
Authors: Chia-Pei Chen, Yu-Ju Chen, Yu-Juei Hsu
Abstract:
The prevalence and incidence of end-stage renal disease in Taiwan rank the highest in the world. According to a statistical survey by the Ministry of Health and Welfare in 2019, kidney disease is the ninth leading cause of death in Taiwan. It leads to autonomic dysfunction, inflammatory response, and symptom distress, and it further increases the damage to the structure and function of the kidneys, leading to increased demand for renal replacement therapy and increased risk of cardiovascular disease, which also carries medical costs for society. Intervening in a feasible manner to effectively regulate the autonomic nerve function of CKD patients, reduce the inflammatory response and symptom distress, and thereby slow the progression of the disease is the main goal of caring for CKD patients. This study aims to test the effect of heart rate variability biofeedback (HRVBF) on improving autonomic nerve function (heart rate variability, HRV), inflammatory response (interleukin-6 [IL-6], C-reactive protein [CRP]), and symptom distress (Piper Fatigue Scale, Pittsburgh Sleep Quality Index [PSQI], and Beck Depression Inventory-II [BDI-II]) in patients with chronic kidney disease. This study was experimental research with convenience sampling. Participants were recruited from the nephrology clinic of a medical center in northern Taiwan. After giving signed informed consent, participants were randomly assigned to the HRVBF or control group using the Excel BINOMDIST function. The HRVBF group received four weekly hospital-based HRVBF training sessions, followed by 8 weeks of home-based self-practice with a StressEraser. The control group received usual care. We followed all participants for 3 months, repeatedly measuring their autonomic nerve function (HRV), inflammatory response (IL-6, CRP), and symptom distress (Piper Fatigue Scale, PSQI, and BDI-II) on their first day of study participation (baseline) and at 1 month and 3 months after the intervention to test the effects of HRVBF. The results were analyzed with SPSS version 23.0 statistical software. The demographic, HRV, IL-6, CRP, Piper Fatigue Scale, PSQI, and BDI-II data were analyzed by descriptive statistics. Differences between and within groups in all outcome variables were tested with paired-sample t-tests, independent-sample t-tests, the Wilcoxon signed-rank test, and the Mann-Whitney U test. Results: Thirty-four patients with chronic kidney disease were enrolled, but three of them were lost to follow-up. The remaining 31 patients completed the study, including 15 in the HRVBF group and 16 in the control group. The characteristics of the two groups were not significantly different. The four-week hospital-based HRVBF training combined with eight-week home-based self-practice effectively enhanced parasympathetic performance in patients with chronic kidney disease, which may counteract disease-related parasympathetic inhibition. Regarding the inflammatory response, IL-6 and CRP in the HRVBF group did not improve significantly compared with the control group. Self-reported fatigue and depression decreased significantly in the HRVBF group but failed to reach a significant difference between the two groups. HRVBF had no significant effect on improving sleep quality for CKD patients.
Keywords: heart rate variability biofeedback, autonomic nerve function, inflammatory response, symptom distress, chronic kidney disease
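For context, two standard time-domain HRV indices can be computed from an R-R interval series in a few lines, as sketched below. The interval data are synthetic, and the study's actual HRV measures may differ.

```python
# Sketch of two standard time-domain HRV indices (SDNN, RMSSD) from R-R
# intervals; real data would come from an ECG or pulse sensor.
import numpy as np

rr_ms = 800 + 50 * np.sin(np.linspace(0, 6 * np.pi, 300)) \
        + np.random.default_rng(2).normal(0, 20, 300)   # mock R-R series (ms)

sdnn = np.std(rr_ms, ddof=1)                 # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_ms)**2))  # beat-to-beat, parasympathetic-linked
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```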
Procedia PDF Downloads 181
71 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects
Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm
Abstract:
Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming the well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model, based on Euler-Bernoulli beam theory, for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot. It nominally (unpowered) lies flat on the ground, and when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, which is necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to determine exactly which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. From this, the spatial distribution of friction forces between the ground and the robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot as a function of time, as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil, assuming the piezoelectric properties of commercially available thin-film piezoelectric actuators. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modeled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each consisting of two rigid arms with appropriate mass connected by a 'motor' whose torque is set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown, both for the full deformation shape and for the motion of the robot.
Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology
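The Euler-Bernoulli ingredient of the model can be illustrated by integrating the beam curvature along a thin steel strip under an idealized piezo-induced bending moment, as in the sketch below. The moment value and clamped boundary condition are assumptions, and gravity and ground contact, which are central to the paper's contribution, are deliberately left out.

```python
# Sketch of the Euler-Bernoulli step: integrate w''(x) = M(x)/(E I) along a
# thin steel strip to obtain its deflected shape under an actuator moment.
import numpy as np

E = 200e9                       # steel Young's modulus (Pa)
b, h = 0.01, 50e-6              # strip width (m) and 50-um foil thickness
I = b * h**3 / 12               # second moment of area
L_strip, n = 0.05, 501
x = np.linspace(0, L_strip, n)

M = np.where(x < 0.02, 2e-5, 0.0)   # assumed moment on the first 20 mm (N*m)

# Integrate curvature twice (clamped at x = 0: w(0) = w'(0) = 0)
dx = x[1] - x[0]
slope = np.cumsum(M / (E * I)) * dx
w = np.cumsum(slope) * dx
print(f"tip deflection = {w[-1]*1e6:.1f} um")
```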
Procedia PDF Downloads 181
70 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling
Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie
Abstract:
Water electrolysis is one of the most advanced technologies for producing hydrogen and can be easily combined with electricity from different sources. Under the influence of an electric current, water molecules are split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittency: energy is stored when production exceeds demand and released during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyser cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools to save time, money, and effort during the optimization of operating conditions and the investigation of the design. Modeling and simulation become even more important when dealing with multiphysics dynamic systems. One such system is the AEM electrolysis cell, which involves complex physico-chemical reactions. Once developed, models may be utilized to understand the underlying mechanisms and to control and detect flaws in the systems. Several modeling methods have been put forward by scientists. These methods can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former approach is less user-friendly and difficult to update, as it represents systems through ordinary or partial differential equations. The latter approach is more user-friendly and allows a clear representation of physical phenomena: the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, making it suitable for multiphysics systems. Among the graphical modeling methods, the bond graph is receiving increasing attention, being domain-independent and relying on the energy exchange between the components of the system. At present, few studies have investigated the modeling of AEM systems. A mathematical model and a bond graph model were used in previous studies to model electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both the bond graph and the mathematical approach. The polarization curves at different operating conditions obtained by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) was behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both HER and OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarization curve at various temperatures illustrated that voltage increases with temperature, owing to the technology of the membrane. Polarization curves can be tested virtually through simulation, reducing the cost and time involved in experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling
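The voltage balance behind such polarization curves can be sketched directly: reversible potential plus HER and OER activation overpotentials plus the membrane's ohmic drop. All kinetic parameters below are illustrative assumptions, not the fitted values of either model in the paper.

```python
# Sketch of an electrolyzer polarization curve: V = E_rev + eta_OER +
# eta_HER + ohmic loss, with Butler-Volmer kinetics written via asinh.
import numpy as np

F, R, T = 96485.0, 8.314, 333.15      # Faraday and gas constants; 60 C
E_rev = 1.23                          # approximate reversible voltage (V)
j = np.linspace(0.01, 2.0, 50)        # current density (A/cm^2)

def eta_act(j, j0, alpha):
    """Activation overpotential from Butler-Volmer kinetics."""
    return (R * T / (alpha * F)) * np.arcsinh(j / (2 * j0))

R_mem = 0.15                          # assumed area-specific resistance (ohm*cm^2)
V = (E_rev
     + eta_act(j, j0=1e-3, alpha=0.5)   # OER (anode), sluggish kinetics
     + eta_act(j, j0=1e-1, alpha=0.5)   # HER (cathode), faster kinetics
     + j * R_mem)                       # ohmic loss across the AEM

for ji, vi in zip(j[::10], V[::10]):
    print(f"j = {ji:4.2f} A/cm^2 -> V = {vi:.3f} V")
```

The asinh form keeps the activation term well behaved at low current density; at high current the ohmic term dominates the slope, mirroring the loss breakdown the abstract reports.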
Procedia PDF Downloads 93
69 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer's disease, Parkinson's disease, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping the magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via the injection of prior belief. The end result of Process II depends strongly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
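A miniature of the kind of 3D convolutional U-Net described above is sketched below in PyTorch, mapping a raw MRI volume patch to a voxelwise iron estimate. The depth, channel counts, and patch size are assumptions for the demo, not the paper's architecture.

```python
# Tiny 3D U-Net sketch: one encoder level, one decoder level with a skip
# connection, mapping a single-channel volume to a voxelwise estimate.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool3d(2)
        self.up = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)          # 16 skip + 16 upsampled channels
        self.head = nn.Conv3d(16, 1, 1)    # voxelwise iron estimate

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

net = TinyUNet3D()
vol = torch.randn(1, 1, 32, 32, 32)        # mock MRI patch
print(net(vol).shape)                      # torch.Size([1, 1, 32, 32, 32])
```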
Procedia PDF Downloads 13868 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients
Authors: Ainura Tursunalieva, Irene Hudson
Abstract:
Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety of illnesses of varying severity. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient. ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III (APACHE III) and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and then render the assessment outcomes of the individual risk factors into a single numerical value. A higher score corresponds to a more severe patient condition. Furthermore, the Mortality Probability Model II (MPM II) uses logistic regression based on independent risk factors to predict a patient’s probability of mortality. An important overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient’s vital signs. This is a prominent oversight, as it is likely there is an interplay among vital signs. The co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue, as it becomes difficult to use variable selection. We propose an innovative scoring system which takes into account the dependence structure among a patient’s vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed. The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient’s probability of mortality. The new copula-based approach will accommodate not only a patient’s trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) 37 ICU patients’ agitation-sedation profiles collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate the discriminative ability (or area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and establish visualization of the copulas and high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs.Keywords: copula, intensive care unit scoring system, ROC curves, vital sign dependence
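To illustrate the dependence-parameter idea, the sketch below fits a Gaussian copula, one plausible family among those the authors could choose, to two simulated skewed vital signs using rank-based pseudo-observations, then evaluates a joint tail probability of the kind a score adjustment could penalise. Both the data and the copula family are assumptions for illustration only.

```python
# Minimal sketch, assuming simulated data; a Gaussian copula stands in for
# whichever copula family is ultimately selected in the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 250
# Simulated skewed, dependent vital signs (stand-ins for real ICU measurements)
heart_rate = 70 + stats.gamma.rvs(2, scale=8, size=n, random_state=rng)
systolic_bp = 110 + 0.8 * (heart_rate - 85) + rng.normal(0, 12, n)

def to_uniform(x):
    # Probability integral transform via ranks: empirical pseudo-observations
    return stats.rankdata(x) / (len(x) + 1)

u, v = to_uniform(heart_rate), to_uniform(systolic_bp)

# Map pseudo-observations to normal scores; the correlation of the scores
# estimates the Gaussian copula dependence parameter rho
z_u, z_v = stats.norm.ppf(u), stats.norm.ppf(v)
rho = np.corrcoef(z_u, z_v)[0, 1]
print(f"estimated Gaussian copula parameter rho = {rho:.3f}")

# Joint dependence probability under the fitted copula: P(both vital signs in
# their upper deciles); by central symmetry this equals the lower-tail CDF
cov = [[1.0, rho], [rho, 1.0]]
q = stats.norm.ppf(0.9)
tail = stats.multivariate_normal(mean=[0, 0], cov=cov).cdf([-q, -q])
print(f"P(HR and SBP both above their 90th percentiles) = {tail:.3f}")
```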
Procedia PDF Downloads 15367 Made on Land, Ends Up in the Water "I-Clare" Intelligent Remediation System for Removal of Harmful Contaminants in Water using Modified Reticulated Vitreous Carbon Foam
Authors: Sabina Żołędowska, Tadeusz Ossowski, Robert Bogdanowicz, Jacek Ryl, Paweł Rostkowski, Michał Kruczkowski, Michał Sobaszek, Zofia Cebula, Grzegorz Skowierzak, Paweł Jakóbczyk, Lilit Hovhannisyan, Paweł Ślepski, Iwona Kaczmarczyk, Mattia Pierpaoli, Bartłomiej Dec, Dawid Nidzworski
Abstract:
The circular economy of water presents a pressing environmental challenge in our society. Water contains various harmful substances, such as drugs, antibiotics, hormones, and dioxins, which can pose silent threats. Water pollution has severe consequences for aquatic ecosystems: it disrupts the balance of ecosystems by harming aquatic plants, animals, and microorganisms. Water pollution also poses significant risks to human health. Exposure to toxic chemicals through contaminated water can have long-term health effects, such as cancer, developmental disorders, and hormonal imbalances. However, effective remediation systems can be implemented to remove these contaminants using electrocatalytic processes, which offer an environmentally friendly alternative to other treatment methods; one of them is the innovative iCLARE system. The project's primary focus revolves around four main topics: reactor design and construction, selection of a specific type of reticulated vitreous carbon (RVC) foam, analytical studies of harmful contaminant parameters, and AI implementation. This high-performance electrochemical reactor will be built on a novel type of electrode material. The proposed approach utilizes reticulated vitreous carbon (RVC) foams with deposited modified metal oxides (MMO) and diamond thin films. The resulting setup is characterized by a highly developed surface area and satisfactory mechanical and electrochemical properties, designed for high electrocatalytic process efficiency. The consortium validated the electrode modification methods that form the basis of the iCLARE product and established the procedures for the detection of chemicals. The validated modification methods are the deposition of the metal oxides WO3 and V2O5, and the deposition of boron-doped diamond/nanowall structures by a CVD process. The chosen electrodes (porous Ferroterm electrodes) were stress-tested for various conditions that might occur inside the iCLARE machine: corrosion, the long-term structure of the electrode surface during electrochemical processes, and energy efficiency, using cyclic polarization, electrochemical impedance spectroscopy (before and after electrolysis), and dynamic electrochemical impedance spectroscopy (DEIS). This tool allows real-time monitoring of the changes at the electrode/electrolyte interphase. On the other hand, the toxicity of the iCLARE chemicals and the products of electrolysis is evaluated before and after the treatment using MARA examination (IBMM) and HPLC-MS-MS (NILU), giving information about the harmfulness of the electrode material used and the efficiency of the iCLARE system in the disposal of pollutants. Implementation of the data into a system that uses artificial intelligence, and exploration of practical applications, is in progress (SensDx).Keywords: wastewater treatment, RVC, electrocatalysis, paracetamol
Procedia PDF Downloads 8966 Integrated Mathematical Modeling and Advance Visualization of Magnetic Nanoparticle for Drug Delivery, Drug Release and Effects to Cancer Cell Treatment
Authors: Norma Binti Alias, Che Rahim Che The, Norfarizan Mohd Said, Sakinah Abdul Hanan, Akhtar Ali
Abstract:
This paper discusses the transport of magnetic drug targeting through blood within vessels, tissues and cells. Three integrated mathematical models are discussed and analyzed for the concentration of drug and blood flow through magnetic nanoparticles. Cell therapy has brought advances in the field of nanotechnology to fight against tumors, and the systematic therapeutic effect on single cells can reduce the growth of cancer tissue. This nanoscale system can be measured and modeled by identifying relevant parameters and applying fundamental principles of mathematical modeling and simulation. The mathematical modeling of single cell growth depends on three types of cell densities: proliferative, quiescent and necrotic cells. The aim of this paper is to enhance the simulation of these three types of models. The first model represents the transport of drugs by coupled 3D parabolic partial differential equations (PDEs) in a cylindrical coordinate system. This model is integrated with non-Newtonian flow equations, with blood flow as the medium of the transportation system, and with the magnetic force acting on the magnetic nanoparticles. The interaction between the magnetic force and the drug with magnetic properties produces induced currents, and the applied magnetic field yields forces that tend to slow the movement of blood and bring the drug to the cancer cells. Nanoscale devices allow the drug to leave the blood vessels, spread out through the tissue and access the cancer cells. The second model describes the transport of drug nanoparticles from the vascular system to a single cell. The treatment of the vascular system requires the identification of parameters such as magnetic nanoparticle targeted delivery, blood flow, momentum transport, density and viscosity of the drug and blood medium, intensity of the magnetic fields and the radius of the capillary. Based on two discretization techniques, the finite difference method (FDM) and the finite element method (FEM), the set of integrated models is transformed into a series of grid points to obtain a large system of equations. The third model is a single cell density model involving three sets of first-order PDEs describing how the proliferating, quiescent and necrotic cell densities change over time and space in Cartesian coordinates under different rates of nutrient consumption. The model shows that proliferative and quiescent cell growth depends on certain parameter changes, while the necrotic cells emerge as the tumor core. Several numerical schemes for solving the system of equations are compared and analyzed. Simulation and computation of the discretized model are supported by the Matlab and C programming languages on a single processing unit. Numerical results and analyses of the algorithms are presented through informative tables, multiple graphs and multidimensional visualization. In conclusion, the integration of the three types of mathematical models and the comparison of numerical performance indicate the superior tools and analyses for solving the complete magnetic drug delivery system, which gives significant insight into the growth of the targeted cancer cells.Keywords: mathematical modeling, visualization, PDE models, magnetic nanoparticle drug delivery model, drug release model, single cell effects, avascular tumor growth, numerical analysis
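As an illustration of the finite difference discretization mentioned above, the sketch below solves a much-reduced 1D advection-diffusion-capture equation for drug concentration in a vessel segment with a localized magnetic capture term. The geometry, parameter values, and the capture model are illustrative assumptions, not the paper's full 3D cylindrical system.

```python
# Minimal sketch, assuming a 1D reduction of the drug-transport PDE solved with
# an explicit finite difference (FDM) scheme; all parameter values are illustrative.
import numpy as np

L, T = 0.02, 2.0               # vessel segment length (m) and simulated time (s)
nx, nt = 200, 20000
dx, dt = L / nx, T / nt
D = 1e-7                       # effective drug diffusivity in blood (m^2/s)
u = 5e-4                       # blood velocity, already slowed by the field (m/s)
k_mag = 0.5                    # capture rate under the magnet (1/s)

x = np.linspace(0.0, L, nx)
c = np.zeros(nx)
c[:10] = 1.0                   # injected drug bolus at the inlet
m0 = c.sum() * dx              # initial drug mass (per unit cross-section)
magnet = np.exp(-((x - L / 2) ** 2) / (2 * 0.001 ** 2))  # field centred mid-segment

# Explicit-scheme stability: diffusion number <= 0.5, Courant number <= 1
assert D * dt / dx**2 <= 0.5 and u * dt / dx <= 1.0

captured = 0.0
for _ in range(nt):
    diff = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    adv = -u * (c - np.roll(c, 1)) / dx        # first-order upwind convection
    cap = k_mag * magnet * c                   # local capture by the magnetic field
    captured += dt * cap.sum() * dx
    c = c + dt * (diff + adv - cap)
    c[0], c[-1] = 0.0, c[-2]                   # washed-out inlet, outflow boundary

print(f"fraction of injected drug captured near the magnet: {captured / m0:.2f}")
```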
Procedia PDF Downloads 42865 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program
Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner
Abstract:
Introductory Statement: The onset of the COVID-19 pandemic created an unprecedented need for the rapid development, dispersal, and application of infection testing. However, despite the impressive mobilization of resources, individuals were severely limited in their access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial in understanding patients’ illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Succinct Description of Basic Methodologies: 18 patients in a COVID-19 Remote Patient Monitoring Program (Precision Recovery within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the COVID-19 pandemic in New York City. Patients were asked about their experiences navigating COVID-19 diagnoses, the health care system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo12. References for the domain “testing” were then extracted and analyzed for themes and statistical patterns. Clear Indication of Major Findings of the Study: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery. One participant shared that because she never tested positive, she was shielded from her anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Concluding Statement: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients’ illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect this had on the perception of illness for both self and others. COVID-19 testing is now widely accessible; however, those who are unable to demonstrate a positive test result but who are still presumed to have had COVID-19 in the first wave must continue to adapt to and live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to the lack of testing and are reassured regarding their access to healthcare. Affiliations: 1. Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY; 2. Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NY.Keywords: accessibility, COVID-19, recovery, testing
Procedia PDF Downloads 19664 Microstructural Characterization of Bitumen/Montmorillonite/Isocyanate Composites by Atomic Force Microscopy
Authors: Francisco J. Ortega, Claudia Roman, Moisés García-Morales, Francisco J. Navarro
Abstract:
Asphaltic bitumen has been largely used in both industrial and civil engineering, mostly in pavement construction and roofing membrane manufacture. However, bitumen as such is greatly susceptible to temperature variations, and dramatically changes its in-service behavior from a viscoelastic liquid, at medium-high temperatures, to a brittle solid at low temperatures. Bitumen modification prevents these problems and imparts improved performance. Isocyanates like polymeric MDI (a mixture of 4,4′-diphenylmethane di-isocyanate, its 2,4’ and 2,2’ isomers, and higher homologues) have been shown to remarkably enhance bitumen properties at the highest in-service temperatures expected. This comes from the reaction between the –NCO pendant groups of the oligomer and the most polar groups of asphaltenes and resins in bitumen. In addition, oxygen diffusion and/or UV radiation may provoke bitumen hardening and ageing. With the purpose of minimizing these effects, nano-layered silicates (nanoclays) are increasingly being added to bitumen formulations. Montmorillonites, a type of naturally occurring mineral, may produce a nanometer-scale dispersion which improves bitumen thermal, mechanical and barrier properties. In order to increase their lipophilicity, these nanoclays are normally treated so that organic cations substitute the inorganic cations located in their intergallery spacing. In the present work, the combined effect of polymeric MDI and the commercial montmorillonite Cloisite® 20A was evaluated. A selected bitumen with penetration within the range 160/220 was modified with 10 wt.% Cloisite® 20A and 2 wt.% polymeric MDI, and the resulting ternary composites were characterized by linear rheology, X-ray diffraction (XRD) and Atomic Force Microscopy (AFM). The rheological tests evidenced a notable solid-like behavior at the highest temperatures studied when bitumen was loaded with just 10 wt.% Cloisite® 20A and high-shear blended for 20 minutes. However, if polymeric MDI was involved, the sequence of addition exerted a decisive control on the linear rheology of the final ternary composites. Hence, in bitumen/Cloisite® 20A/polymeric MDI formulations, the previous solid-like behavior disappeared. By contrast, an inversion in the order of addition (bitumen/polymeric MDI/Cloisite® 20A) further enhanced the solid-like behavior imparted by the nanoclay. In order to gain a better understanding of the factors that govern the linear rheology of these ternary composites, a morphological and microstructural characterization based on XRD and AFM was conducted. XRD demonstrated the existence of clay stacks intercalated by bitumen molecules to some degree. However, the XRD technique cannot provide detailed information on the extent of nanoclay delamination, unless the entire fraction has effectively been fully delaminated (a situation in which no peak is observed). Furthermore, XRD was unable to provide precise knowledge either about the spatial distribution of the intercalated/exfoliated platelets or about the presence of other structures at larger length scales. In contrast, AFM proved its power at providing conclusive information on the morphology of the composites at the nanometer scale and at revealing the structural modification that yielded the rheological properties observed. It was concluded that high-shear blending brought about a nanoclay-reinforced network. As for the bitumen/Cloisite® 20A/polymeric MDI formulations, the solid-like behavior was destroyed as a result of the agglomeration of the nanoclay platelets promoted by chemical reactions.Keywords: Atomic Force Microscopy, bitumen, composite, isocyanate, montmorillonite
Procedia PDF Downloads 26163 Effect of Cerebellar High Frequency rTMS on the Balance of Multiple Sclerosis Patients with Ataxia
Authors: Shereen Ismail Fawaz, Shin-Ichi Izumi, Nouran Mohamed Salah, Heba G. Saber, Ibrahim Mohamed Roushdi
Abstract:
Background: Multiple sclerosis (MS) is a chronic, inflammatory, mainly demyelinating disease of the central nervous system, more common in young adults. Cerebellar involvement is one of the most disabling lesions in MS and is usually a sign of disease progression. The cerebellum plays a major role in the planning, initiation, and organization of movement via its influence on the motor cortex and corticospinal outputs. It therefore contributes to controlling movement, motor adaptation, and motor learning, in addition to its vast connections with other major pathways controlling balance, such as the cerebellopropriospinal and cerebellovestibular pathways. Hence, stimulating the cerebellum with facilitatory protocols should add to motor control and balance function. Non-invasive brain stimulation techniques, both repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS), have recently emerged as effective neuromodulators to influence motor and nonmotor functions of the brain. Anodal tDCS has been shown to improve motor skill learning and motor performance beyond the training period. Similarly, rTMS, when used at high frequency (>5 Hz), has a facilitatory effect on the motor cortex. Objective: Our aim was to determine the effect of high-frequency rTMS over the cerebellum in improving balance and functional ambulation of multiple sclerosis patients with ataxia. Patients and methods: This was a randomized, single-blinded, placebo-controlled prospective trial on 40 patients. The active group (N=20) received real rTMS sessions, and the control group (N=20) received sham rTMS using a placebo program designed for this treatment. Both groups received 12 sessions over the cerebellum, followed by an intensive exercise training program; sessions were given three times per week for four weeks. The active group protocol used 10 Hz rTMS over the cerebellar vermis, with a work period of 5 s, 25 trains, and an intertrain interval of 25 s, giving a total of 1250 pulses per session. Both groups of patients received an intensive exercise program, which included generalized strengthening exercises, endurance and aerobic training, trunk abdominal exercises, generalized balance training exercises, and task-oriented training such as boxing. The Modified ICARS was used as the primary outcome measure, and static posturography was performed with patients tested with both open and closed eyes. Secondary outcome measures included the Expanded Disability Status Scale (EDSS) and the 8-meter walk test (8MWT). Results: The active group showed significant improvements in all the functional scales (Modified ICARS, EDSS, and 8-meter walk test), in addition to significant differences in static posturography with open eyes, while the control group did not show such differences. Conclusion: Cerebellar high-frequency rTMS could be effective in the functional improvement of balance in MS patients with ataxia.Keywords: brain neuromodulation, high frequency rTMS, cerebellar stimulation, multiple sclerosis, balance rehabilitation
Procedia PDF Downloads 9262 Agenesis of the Corpus Callosum: The Role of Neuropsychological Assessment with Implications to Psychosocial Rehabilitation
Authors: Ron Dick, P. S. D. V. Prasadarao, Glenn Coltman
Abstract:
Agenesis of the corpus callosum (ACC) is a failure to develop the corpus callosum, the large bundle of fibers of the brain that connects the two cerebral hemispheres. It can occur as a partial or complete absence of the corpus callosum. In the general population, its estimated prevalence rate is 1 in 4000, and a wide range of genetic, infectious, vascular, and toxic causes have been attributed to this heterogeneous condition. The diagnosis of ACC is often achieved by neuroimaging procedures. Though persons with ACC can perform normally on intelligence tests, they generally present with a range of neuropsychological and social deficits. The deficit profile is characterized by poor coordination of motor movements, slow reaction time, slow processing speed, and poor memory. Socially, they present with deficits in communication, language processing, theory of mind, and interpersonal relationships. The present paper illustrates the role of neuropsychological assessment, with implications for psychosocial management, in a case of agenesis of the corpus callosum. Method: A 27-year-old, left-handed Caucasian male with a history of ACC was self-referred for a neuropsychological assessment to assist him in his employment options. Parents noted significant difficulties with coordination and balance at the early age of 2-3 years, and he was diagnosed with dyspraxia at the age of 14 years. History also indicated visual impairment, hypotonia, poor muscle coordination, and delayed development of motor milestones. An MRI scan indicated agenesis of the corpus callosum, with ventricular morphology showing widely spaced parallel lateral ventricles and mild dilatation of the posterior horns; it also showed colpocephaly, a disproportionate enlargement of the occipital horns of the lateral ventricles, which might be affecting his motor abilities and contributing to his visual defects. The MRI scan ruled out other structural abnormalities or neonatal brain injury. At the time of assessment, the subject presented with such problems as poor coordination, slowed processing speed, poor organizational skills and time management, and difficulty with social cues and facial expressions. A comprehensive neuropsychological assessment was planned and conducted to identify the current neuropsychological profile and to facilitate the formulation of a psychosocial and occupational rehabilitation programme. Results: General intellectual functioning was within the average range, and his performance on memory-related tasks was adequate. Significant visuospatial and visuoconstructional deficits were evident across tests; constructional difficulties were seen in tasks such as copying a complex figure, building a tower and manipulating blocks. Poor visual scanning ability and visual motor speed were also evident. Socially, the subject reported heightened social anxiety, difficulty in responding to cues in the social environment, and difficulty in developing intimate relationships. Conclusion: Persons with ACC are known to present with specific cognitive deficits and problems in social situations. Findings from the current neuropsychological assessment indicated significant visuospatial difficulties, poor visual scanning, and problems in social interactions. His general intellectual functioning was within the average range. Based on the findings from the comprehensive neuropsychological assessment, a structured psychosocial rehabilitation programme was developed and recommended.Keywords: agenesis, callosum, corpus, neuropsychology, psychosocial, rehabilitation
Procedia PDF Downloads 27661 The Impact of β Nucleating Agents and Carbon-Based Nanomaterials on Water Vapor Permeability of Polypropylene Composite Films
Authors: Glykeria A. Visvini, George N. Mathioudakis, Amaia Soto Beobide, George A. Voyiatzis
Abstract:
Polymer nanocomposites are materials in which a polymer matrix is reinforced with nanoscale inclusions, such as nanoparticles, nanoplates, or nanofibers. These nanoscale inclusions can significantly enhance the mechanical, thermal, electrical, and other properties of the polymer matrix, making them attractive for a wide range of industrial applications. These properties can be tailored by adjusting the type and concentration of the nanoinclusions, which provides a high degree of flexibility in their design and development. An important property that polymeric membranes can exhibit is water vapor permeability (WVP). Enhanced WVP can be achieved by various methods, including the incorporation of micro/nano-fillers into the polymer matrix. In this way, a micro/nano-pore network can be formed, allowing water vapor to permeate through the membrane. At the same time, the membrane can be stretched uni- or bi-axially, creating aligned or cross-linked micropores in the composite, respectively, which can also increase the WVP. Nowadays, in industry, stretched films reinforced with CaCO3 develop microporosity sufficient to give them breathability characteristics. Carbon-based nanomaterials, such as graphene oxide (GO), are expected to effectively improve the WVP of corresponding composite polymer films. The various oxygen-containing functional groups in the GO structure enhance its ability to attract and channel water molecules, exploiting the uniquely large surface area of graphene that allows the rapid transport of water molecules. Polypropylene (PP) is widely used in various industrial applications due to its desirable properties, including good chemical resistance, excellent thermal stability, low cost, and easy processability. The specific properties of PP are highly influenced by its crystalline behavior, which is determined by its processing conditions. The development of the β-crystalline phase in PP, in combination with stretching, is anticipated to improve the microporosity of the polymer matrix, thereby enhancing its WVP. The aim of the present study is to create breathable PP composite membranes using carbon-based nanomaterials, such as graphene oxide (GO), reduced graphene oxide (rGO), and graphene nanoplatelets (GNPs). Unlike traditional methods that rely on the drawing process to enhance the WVP of PP, this study intends to develop a low-cost approach using melt mixing with β-nucleating agents and carbon fillers to create highly breathable PP composite membranes. The study aims to investigate how the concentration of these additives affects the water vapor transport properties of the resulting PP films/membranes. The presence of β-nucleating agents and carbon fillers is expected to enhance β-phase growth in PP, while the transformation between the β- and α-phases is expected to lead to improved microporosity and WVP. Our ambition is to develop highly breathable PP composite films with superior performance at a lower cost compared to the benchmark. Acknowledgment: This research has been co‐financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call «Special Actions "AQUACULTURE"-"INDUSTRIAL MATERIALS"-"OPEN INNOVATION IN CULTURE"» (project code: Τ6YBP-00337)Keywords: carbon based nanomaterials, nanocomposites, nucleating agent, polypropylene, water vapor permeability
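For readers unfamiliar with how WVP is quantified, the sketch below works through the standard gravimetric (cup-method) arithmetic: a water-vapor transmission rate (WVTR) from the slope of the mass-loss curve per unit area, permeance from the vapor-pressure difference, and permeability from the film thickness. All numbers are illustrative assumptions, not measurements from this study.

```python
# Minimal sketch, assuming illustrative cup-test numbers; the relations are the
# standard gravimetric ones: WVTR = slope/area, WVP = (WVTR/dp) * thickness.
import numpy as np

t_hours = np.array([0.0, 24.0, 48.0, 72.0, 96.0])       # sampling times (h)
mass_g = np.array([50.00, 49.62, 49.25, 48.86, 48.49])  # cup + water mass (g)

area = np.pi * 0.03**2      # exposed film area for a 3 cm radius opening (m^2)
thickness = 50e-6           # film thickness, 50 micrometres (m)
dp = 1404.0                 # vapor-pressure difference (Pa), ~50% RH gap at 23 C

# Water-vapor transmission rate from the regression slope of the mass-loss curve
slope_g_per_s = np.polyfit(t_hours * 3600.0, mass_g, 1)[0]
wvtr = -slope_g_per_s / area            # g m^-2 s^-1
permeance = wvtr / dp                   # g m^-2 s^-1 Pa^-1
wvp = permeance * thickness             # g m^-1 s^-1 Pa^-1

print(f"WVTR = {wvtr * 86400:.0f} g/(m^2 day), WVP = {wvp:.3e} g/(m s Pa)")
```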
Procedia PDF Downloads 8760 The Shrinking of the Pink Wave and the Rise of the Right-Wing in Latin America
Authors: B. M. Moda, L. F. Secco
Abstract:
Through free and fair elections and other less democratic processes, Latin America has been gradually turning into a right-wing political region. In order to understand these recent changes, this paper aims to discuss the origin and the traits of the pink wave in the subcontinent, the reasons for its current rollback, and future projections for the left-wing in the region. The methodology used in this paper is descriptive and analytical, combined with secondary sources mainly from the social and political sciences. The canons of the Washington Consensus were implemented by the majority of Latin American governments in the 80s and 90s under social democratic and right-wing parties. The neoliberal agenda caused political, social and economic dissatisfaction, giving rise to a new political configuration for the region. It started in 1998, when Hugo Chávez took office in Venezuela through the Fifth Republic Movement under the socialist flag. From there on, Latin America was swept by the so-called ‘pink wave’, a term adopted to define the rise of self-designated left-wing or center-left parties with a progressive agenda. After Venezuela, countries like Chile, Brazil, Argentina, Uruguay, Bolivia, Ecuador, Nicaragua, Paraguay, El Salvador and Peru joined the pink wave. The success of these governments was due to a post-neoliberal agenda focused on cash transfer programs, increased public spending, and the strengthening of the national market. The discontinuation of the preference for the left-wing started in 2012 with the coup against Fernando Lugo in Paraguay. In 2015, chavismo in Venezuela lost the majority of the legislative seats, and Mauricio Macri, representing the right-wing party Propuesta Republicana, was elected in Argentina. In 2016, an impeachment removed the Brazilian president Dilma Rousseff from office; she was replaced by the center-right vice-president Michel Temer. In the same year, the center-right liberal Pedro Pablo Kuczynski was elected in Peru. In 2017, Sebastián Piñera was elected in Chile through the center-right party Renovación Nacional. The current rollback of the pink wave points towards findings that can be arranged in two fields. Economically, the 2008 financial crisis affected the majority of the Latin American countries, and the end of the raw materials boom, together with the subsequent shrinking of economic performance, opened a flank for popular dissatisfaction with left-wing economic policies. In Venezuela, the 2014 oil crisis reduced State revenues by more than 50%, cutting social spending, creating an inflationary spiral, and consequently eroding popular support. Politically, the death of Hugo Chávez in 2013 weakened the ideal of the ‘socialism of the twenty-first century’, and was followed by the death of Fidel Castro, the last bastion of communism in the subcontinent. In addition, several cases of corruption revealed during the pink wave governments made traditional politics unpopular. These issues challenge the left-wing to develop a future agenda based on innovation of its economic program, to improve its legal and political compliance practices, and to regroup its electoral forces amid the social movements that supported its ascension back in the early 2000s.Keywords: Latin America, political parties, left-wing, right-wing, pink wave
Procedia PDF Downloads 24159 Flood Risk Assessment for Agricultural Production in a Tropical River Delta Considering Climate Change
Authors: Chandranath Chatterjee, Amina Khatun, Bhabagrahi Sahoo
Abstract:
With the changing climate, precipitation events are intensifying in tropical river basins. Since these river basins are significantly influenced by the monsoonal rainfall pattern, critical impacts are observed on the agricultural practices in the downstream river reaches. This study analyses the crop damage and associated flood risk, in terms of net benefit, in the paddy-dominated tropical Indian delta of the Mahanadi River. The Mahanadi River basin lies in the eastern part of the Indian sub-continent and is greatly affected by the southwest monsoon rainfall extending from June to September. This river delta is highly flood-prone and has suffered from recurring high floods, especially after the 2000s. In this study, the lumped conceptual model Nedbør Afstrømnings Model (NAM), from the suite of MIKE models, is used for rainfall-runoff modeling. The NAM model is laterally integrated with the MIKE11 Hydrodynamic (HD) model to route the runoff up to the head of the delta region. To obtain the precipitation-derived future projected discharges at the head of the delta, nine Global Climate Models (GCMs), namely BCC-CSM1.1(m), GFDL-CM3, GFDL-ESM2G, HadGEM2-AO, IPSL-CM5A-LR, IPSL-CM5A-MR, MIROC5, MIROC-ESM-CHEM and NorESM1-M, available in the Coupled Model Intercomparison Project Phase 5 (CMIP5) archive, are considered. These nine GCMs were previously found to best capture the Indian summer monsoon rainfall. Based on the performance of the nine GCMs in reproducing the historical discharge pattern, three GCMs (HadGEM2-AO, IPSL-CM5A-MR and MIROC-ESM-CHEM) are selected, with a higher Taylor Skill Score as the selection criterion. Thereafter, the 10-year return period design flood is estimated using L-moments-based flood frequency analysis for the historical period and three future projected periods (2010-2039, 2040-2069 and 2070-2099) under Representative Concentration Pathways (RCP) 4.5 and 8.5. A non-dimensional hydrograph analysis is performed to obtain the hydrographs for the historical/projected 10-year return period design floods. These hydrographs are forced into the calibrated and validated coupled 1D-2D hydrodynamic model, MIKE FLOOD, to simulate flood inundation in the delta region. Historical and projected flood risk is defined based on the flood inundation simulated by the MIKE FLOOD model and the inundation depth-damage-duration relationship of a normal rice variety cultivated in the river delta. In general, flood risk is expected to increase in all the future projected periods as compared to the historical episode. Further, in comparison to the 2010s (2010-2039), an increased flood risk in the 2040s (2040-2069) is shown by all three selected GCMs. However, the flood risk then declines in the 2070s (2070-2099) as we move towards the end of the century. The methodology adopted herein for flood risk assessment may be implemented in any river basin worldwide, and the results obtained from this study can help in future flood preparedness by implementing suitable flood adaptation strategies.Keywords: flood frequency analysis, flood risk, global climate models (GCMs), paddy cultivation
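To make the L-moments-based flood frequency step concrete, the sketch below estimates sample L-moments from a synthetic annual-maximum discharge series and fits a Generalized Extreme Value (GEV) distribution using Hosking's standard approximations to obtain a 10-year design flood. The synthetic series and the choice of the GEV family are assumptions for illustration; the abstract does not state which distribution the study adopted.

```python
# Minimal sketch, assuming a synthetic annual-maximum series and a GEV fit via
# Hosking's L-moment relations.
import numpy as np
from scipy.special import gamma as G

rng = np.random.default_rng(42)
q = rng.gumbel(loc=9000, scale=2500, size=40)   # stand-in annual peak flows (m^3/s)

def sample_l_moments(x):
    # Unbiased probability-weighted moments b0, b1, b2 (Hosking, 1990)
    x = np.sort(x)
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2                       # mean, L-scale, L-skewness

l1, l2, t3 = sample_l_moments(q)

# GEV parameters from L-moments (Hosking's approximation for the shape k;
# k near zero approaches the Gumbel limit)
c = 2.0 / (3.0 + t3) - np.log(2.0) / np.log(3.0)
k = 7.8590 * c + 2.9554 * c**2
alpha = l2 * k / ((1.0 - 2.0**(-k)) * G(1.0 + k))
xi = l1 - alpha * (1.0 - G(1.0 + k)) / k

# 10-year return period design flood: quantile at non-exceedance probability 0.9
p = 1.0 - 1.0 / 10.0
q10 = xi + alpha / k * (1.0 - (-np.log(p))**k)
print(f"fitted GEV shape k = {k:.3f}; 10-year design flood ~ {q10:.0f} m^3/s")
```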
Procedia PDF Downloads 7558 Point-of-Decision Design (PODD) to Support Healthy Behaviors in the College Campuses
Authors: Michelle Eichinger, Upali Nanda
Abstract:
Behavior choices during college years can establish the pattern of lifelong healthy living. Nearly one-third of American college students are either overweight (25 < BMI < 30) or obese (BMI > 30). In addition, overweight/obesity contributes to depression, which is a rising epidemic among college students, affecting academic performance and college drop-out rates. Overweight and obesity result from an imbalance of energy consumption (diet) and energy expenditure (physical activity). Overweight/obesity is a significant contributor to heart disease, diabetes, stroke, physical disabilities and some cancers, which are the leading causes of death and disease in the US. There has been a significant increase in obesity and obesity-related disorders such as type 2 diabetes, hypertension, and dyslipidemia among people in their teens and 20s. Historically, evidence-based interventions for obesity prevention focused on changing health behavior at the individual level and aimed at increasing awareness and educating people about nutrition and physical activity. However, it became evident that the environmental context of where people live, work and learn is interdependent with healthy behavior change. As a result, a comprehensive approach was required that includes altering the social and built environment to support healthy living. The college campus provides opportunities to support lifestyle behavior and form a health-promoting culture based on key points of decision, such as stairs/elevator, walk/bike/car, and high-caloric fast foods/balanced, nutrient-rich foods. At each point of decision, design can help or hinder the healthier choice. For example, stairwell design and motivational signage support physical activity, and grocery store/market proximity influences healthy eating. There is a need to collate the vast information in the planning and public health domains on a range of successful point-of-decision prompts and translate it into architectural guidelines that help define the edge condition for critical point-of-decision prompts. This research study aims to address healthy behaviors through the built environment with the question: how can we make the healthy choice an easy choice through the design of critical point-of-decision prompts? Our hypothesis is that well-designed point-of-decision prompts in the built environment of college campuses can promote healthier choices by students, which can directly impact mental and physical health related to obesity. This presentation will introduce a combined health and architectural framework aimed at influencing healthy behaviors through design, applied to college campuses. The premise behind developing our concept, point-of-decision design (PODD), is that healthy decision-making can be built into, or afforded by, our physical environments. Using effective design intervention strategies at these 'points of decision' on college campuses to make the healthy decision the default decision can be instrumental in positively impacting health at the population level. With our model, we aim to advance health research by utilizing point-of-decision design to impact student health via core sectors of influence within college settings, such as campus facilities and transportation. We will demonstrate how these domains influence patterns/trends in healthy eating and active living behaviors among students.Keywords: architecture and health promotion, college campus, design strategies, health in built environment
Procedia PDF Downloads 22457 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques
Authors: Chandu Rathnayake, Isuri Anuradha
Abstract:
Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods heavily rely on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we developed a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, our application delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, particularly important given the current circumstances where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. Firstly, the Decision Tree algorithm is utilized for efficient symptom-based classification. It analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Secondly, we employ the Random Forest algorithm, which enhances predictive power by aggregating multiple decision trees. This ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the pre-trained ResNet50 model. CNNs are well-suited for image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model. By combining the outputs of the decision-tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.Keywords: CNN, random forest, decision tree, machine learning, deep learning
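A minimal sketch of the two branches described above, and one simple way to fuse them, is given below. The symptom features, class count, random stand-in data, and the late-fusion averaging are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch, assuming torchvision's ImageNet-pretrained ResNet50 and
# scikit-learn; random arrays/tensors stand in for real symptoms and X-rays.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_sym = rng.integers(0, 2, size=(200, 12))    # 12 yes/no symptom indicators
y = rng.integers(0, 3, size=200)              # 3 illustrative disease classes

# Symptom branch: a random forest (an ensemble of decision trees) over symptoms
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_sym, y)

# Image branch: ResNet50 with its final layer replaced for the 3 classes
# (loading DEFAULT weights downloads ImageNet parameters on first use)
cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
cnn.fc = nn.Linear(cnn.fc.in_features, 3)
cnn.eval()
xray = torch.randn(4, 3, 224, 224)            # stand-in X-ray batch
with torch.no_grad():
    img_probs = torch.softmax(cnn(xray), dim=1)

# Late fusion: average the class probabilities of the two branches
sym_probs = torch.tensor(rf.predict_proba(X_sym[:4]), dtype=torch.float32)
fused = 0.5 * (img_probs + sym_probs)
print("fused class probabilities:\n", fused.numpy().round(3))
```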
Procedia PDF Downloads 7456 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis
Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski
Abstract:
The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, like climate change, intelligent transport, and immigration studies. These developments call for better methods to deliver raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets that have strict requirements for effective handling of time series. The same approach and methodologies can also be applied in managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of the most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes in the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, a cloud-optimized file structure, and a customized, flexible content access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolutions, spatial subdivision, and value encoding. All the resolution levels are created using an appropriate generalization method, selected depending on the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches to be introduced. Analysis processes based on interactive visual exploration can be effectively carried out, as the level of resolution closest to the visual scale can always be used. In the same way, statistical analysis can be carried out on resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application. The cloud service-based approach, applied in the GeoCubes Finland repository, enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of approach for online analysis. The use of cloud-optimized file structures in data storage enables the fast extraction of subareas. The access API allows for the use of vector-formatted administrative areas and user-defined polygons as definitions of subareas for data retrieval. Administrative areas of the country in four levels are readily available from the GeoCubes platform. In addition to direct delivery of raster data, the service also supports the so-called virtual file format, in which only a small text file is first downloaded. The text file contains links to the raster content on the service platform. The actual raster data is downloaded on demand, from the spatial area and resolution level required at each stage of the application. Through the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository facilitates expert users in introducing new kinds of interactive online analysis operations.Keywords: cloud service, geodata cube, multiresolution, raster geodata
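The abstract's two key access patterns, fast subarea extraction from cloud-optimized files and multiresolution reads, can be illustrated with a generic cloud-optimized GeoTIFF. The file name, bounding box, and decimation factor below are hypothetical, and the snippet uses the rasterio library rather than the GeoCubes API itself.

```python
# Minimal sketch, assuming a local cloud-optimised GeoTIFF ("elevation_cog.tif")
# with internal overviews; all names and coordinates are hypothetical.
import rasterio
from rasterio.windows import from_bounds
from rasterio.enums import Resampling

with rasterio.open("elevation_cog.tif") as src:
    # Subarea extraction: a windowed read fetches only the internal tiles that
    # intersect the user-defined bounding box
    window = from_bounds(380000, 6670000, 400000, 6690000, transform=src.transform)
    subarea = src.read(1, window=window)

    # Multiresolution access: a decimated read is served from the file's
    # internal overviews, mimicking a coarser resolution level of the cube
    coarse = src.read(
        1,
        out_shape=(src.height // 8, src.width // 8),
        resampling=Resampling.average,
    )

print("subarea:", subarea.shape, "coarse overview:", coarse.shape)
```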
Procedia PDF Downloads 13955 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of cases with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than the uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computation costs of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The history of some randomly sampled photon bundles is recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed with the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help in further reduction of computation cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the problem environment can be fully represented in the ANN model. Better results can be achieved in this unexplored domain.Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks
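The advantage of low-discrepancy sequences over pseudo-random sampling can be demonstrated on a simple Planck-like spectral integral. The integrand, truncation limit, and sample size below are illustrative assumptions unrelated to the paper's slab problem.

```python
# Minimal sketch, assuming a truncated Planck-like 1D integral as a stand-in for
# the spectral sampling; truncation at b = 15 adds a small (~1e-3) bias.
import numpy as np
from scipy.stats import qmc

def f(x):
    return x**3 / (np.exp(x) - 1.0 + 1e-12)   # integrand, behaves like x^2 near 0

a, b = 0.0, 15.0
exact = np.pi**4 / 15.0                        # value of the integral over [0, inf)

n = 2**12
# Pseudo-random Monte Carlo estimate
u_mc = np.random.default_rng(1).random(n)
est_mc = (b - a) * f(a + (b - a) * u_mc).mean()
print(f"MC:     {est_mc:.5f} (error {abs(est_mc - exact):.2e})")

# Quasi-Monte Carlo estimates with low-discrepancy sequences
for name, sampler in [("Sobol ", qmc.Sobol(d=1, scramble=True, seed=1)),
                      ("Halton", qmc.Halton(d=1, scramble=True, seed=1))]:
    u = sampler.random(n).ravel()              # n = 2^12 suits the Sobol sequence
    est = (b - a) * f(a + (b - a) * u).mean()
    print(f"{name}: {est:.5f} (error {abs(est - exact):.2e})")
```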
Procedia PDF Downloads 225