Search results for: handwritten signature verification
20 Development of Knowledge Discovery Based Interactive Decision Support System on Web Platform for Maternal and Child Health System Strengthening
Authors: Partha Saha, Uttam Kumar Banerjee
Abstract:
Maternal and Child Healthcare (MCH) has always been regarded as one of the most important health issues globally. Reducing maternal and child mortality rates and increasing healthcare service coverage were declared among the targets of the Millennium Development Goals until 2015 and thereafter became an important component of the Sustainable Development Goals. Over the last decade, worldwide MCH indicators have improved but have not matched the expected levels. Progress in both maternal and child mortality rates has been monitored by several researchers. These studies state that less than 26% of low- and middle-income countries (LMICs) were on track to achieve the targets prescribed by MDG4. As of 2011, the average worldwide annual rates of reduction of the under-five mortality rate and the maternal mortality rate were 2.2% and 1.9%, respectively, whereas rates of at least 4.4% and 5.5% per year would be required to achieve the targets. In spite of proven healthcare interventions for both mothers and children, these could not be scaled up to the required volume due to fragmented health systems, especially in developing and under-developed countries. In this research, a knowledge discovery based interactive Decision Support System (DSS) has been developed on a web platform to assist healthcare policy makers in developing evidence-based policies. Achieving desirable results in MCH requires efficient resource planning, and in most LMICs resources are a major constraint. Knowledge generated through this system would help healthcare managers develop strategic resource plans for combating issues such as high inequity and low coverage in MCH. The system supports healthcare managers in four tasks: a) comprehending region-wise conditions of variables related to MCH, b) identifying relationships among variables, c) segmenting regions based on variable status, and d) finding segment-wise key influential variables which have a major impact on healthcare indicators. The system development process was divided into three phases: i) identifying contemporary issues related to MCH services and policy making; ii) development of the system; and iii) verification and validation of the system. More than 90 variables under three categories, namely a) educational, social, and economic parameters; b) MCH interventions; and c) health system building blocks, have been included in this web-based DSS, and five separate modules have been developed within the system. The first module is designed for analysing the current healthcare scenario. The second module helps healthcare managers understand correlations among variables. The third module reveals frequently occurring incidents along with different MCH interventions. The fourth module segments regions based on the previously mentioned three categories, and in the fifth module, segment-wise key influential interventions are identified. India has been considered as the case study area in this research, and data from 601 districts of India have been used for inspecting the effectiveness of the developed modules. The system has been built by implementing different statistical and data mining techniques on a web platform. Policy makers are able to generate different scenarios from the system before drawing any inference, aided by its interactive capability.
Keywords: maternal and child healthcare, decision support systems, data mining techniques, low and middle income countries
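As an illustration of the segmentation step (fourth module), a minimal sketch of how districts might be clustered on MCH-related variables and the segments inspected for influential indicators is given below; the column names, the choice of k-means, and the input file are assumptions for illustration, not the system's actual implementation.

```python
# Illustrative sketch only: cluster districts on MCH-related variables with k-means
# and rank variables by how strongly they separate the resulting segments.
# Column names and the input file are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

districts = pd.read_csv("district_mch_indicators.csv")          # hypothetical input
features = ["antenatal_care_coverage", "skilled_birth_attendance",
            "female_literacy", "health_workers_per_10k"]         # assumed variables

X = StandardScaler().fit_transform(districts[features])
districts["segment"] = KMeans(n_clusters=4, random_state=0, n_init=10).fit_predict(X)

# Rank variables by the between-segment variance of their means (a crude proxy
# for "key influential variables" across segments).
influence = districts.groupby("segment")[features].mean().var().sort_values(ascending=False)
print(influence)
```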
Procedia PDF Downloads 259
19 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice
Authors: Diana Reckien
Abstract:
Vulnerability assessments are increasingly used to support policy-making in complex environments, such as urban areas. Usually, vulnerability studies include the construction of aggregate (sub-)indices and the subsequent mapping of the indices across an area of interest. Vulnerability studies have several advantages: they are great communication tools, can inform a wider general debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and no single framework or methodology has proven to serve best in particular environments; indicators vary strongly with the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped can easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting drivers of vulnerability in space. There is therefore an urgent need to further develop the methodology of vulnerability studies towards a common framework, which is one motivation for this paper. We introduce a social vulnerability approach and compare it with bio-physical and sectoral vulnerability studies, which are relatively more developed in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area. The second approach includes variable reduction, mostly Principal Component Analysis (PCA), which reduces the interrelated variables to a smaller number of less correlated components, which are likewise added to form a composite index. We test these two approaches of constructing indices on the area of New York City, as well as two different metrics of the input variables, and compare the outcomes for the 5 boroughs of NY. Our analysis shows that the mapping exercise yields particularly different results in the outer regions and parts of the boroughs, such as Outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in current adaptation policy. We infer from this that current adaptation policy and practice in NY might need to be discussed, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e. the high-density areas of Manhattan, Central Brooklyn, Central Queens and the Southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage in case of hazards in those areas. This is conceivable, e.g., during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas.
In light of the recent planning practice of NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analysis points towards an underrepresentation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in current adaptation practice in New York City.
Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity
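The two index-construction approaches described above can be summarized in a short sketch; the tract-level variables, weights, and input file below are placeholders, and summing the retained components in the PCA variant is one common convention among several.

```python
# Sketch of the two composite-index approaches discussed above (placeholder data).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

tracts = pd.read_csv("nyc_tract_indicators.csv")         # hypothetical input
vars_ = ["pct_poverty", "pct_over_65", "pct_minority", "pct_no_english"]
weights = {"pct_poverty": 0.4, "pct_over_65": 0.2,
           "pct_minority": 0.2, "pct_no_english": 0.2}    # assumed importance weights

Z = StandardScaler().fit_transform(tracts[vars_])
Zdf = pd.DataFrame(Z, columns=vars_)

# (1) Additive approach: weighted sum of standardized variables.
tracts["sv_additive"] = sum(weights[v] * Zdf[v] for v in vars_)

# (2) PCA approach: reduce correlated variables, then add the retained components.
pca = PCA(n_components=0.8)            # keep components explaining ~80% of variance
tracts["sv_pca"] = pca.fit_transform(Z).sum(axis=1)

# Compare how similarly the two indices rank the same areas.
print(tracts[["sv_additive", "sv_pca"]].corr())
```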
Procedia PDF Downloads 395
18 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing
Authors: Arjun Kumar Rath, Titus Dhanasingh
Abstract:
Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing in larger quantities. As components get integrated, devices are tested for their full functionality using advanced software tools, and benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, these tests are custom built for every product and remain unusable for other variants. A majority of the tests go undocumented, are not updated, and become unusable when the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all of these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, AutoTest, KernelCI, etc.) designed for testing embedded system devices, each with several unique good features, but no single tool or framework satisfies all of the testing needs of embedded systems; hence the need for an extensible framework integrating a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods include developing test libraries and support components for every new hardware platform that belongs to the same domain with identical hardware architecture. This approach has drawbacks such as non-reusability, where platform-specific libraries cannot be reused, the need to maintain source infrastructure for individual hardware platforms, and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in test environment set-up, scalability, and maintenance. A desirable strategy is one that maximizes reusability and continuous integration and leverages artifacts across the complete development cycle, during all phases of testing and across a family of products. To overcome the challenges of the conventional method and offer the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, has been designed; it can be deployed in embedded-system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing different hardware, including microprocessor- and microcontroller-based platforms. It offers benefits such as (1) time-to-market: it accelerates board bring-up time with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing, and device drivers); (2) reusability: framework components isolated from platform-specific HW initialization and configuration make the adaptation of test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) continuous integration: pre-integration with Jenkins enables continuous testing and an automated software update feature.
Applying the embedded test framework accelerator throughout the design and development phase enables the development of well-tested systems before functional verification and improves time to market to a large extent.
Keywords: board diagnostics software, embedded system, hardware testing, test frameworks
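To make the reusability idea concrete, the sketch below separates a board-agnostic test case from a platform-specific board description; the class names, the UART loopback test, and the board parameters are hypothetical and are not part of the actual ETF code base.

```python
# Hypothetical sketch: a board-agnostic peripheral test reused across platforms
# by isolating platform-specific details in a small board descriptor.
import serial  # pyserial, assumed available on the test host


class Board:
    """Platform-specific details kept out of the test logic itself."""
    def __init__(self, name, uart_port, baudrate):
        self.name, self.uart_port, self.baudrate = name, uart_port, baudrate


def test_uart_loopback(board, payload=b"ETF-PING"):
    """Generic UART loopback test: whatever is sent must be echoed back."""
    with serial.Serial(board.uart_port, board.baudrate, timeout=2) as port:
        port.write(payload)
        echoed = port.read(len(payload))
    assert echoed == payload, f"{board.name}: UART loopback failed"


if __name__ == "__main__":
    # The same test case runs unchanged on two hypothetical platforms.
    for board in (Board("imx8-devkit", "/dev/ttyUSB0", 115200),
                  Board("stm32-nucleo", "/dev/ttyACM0", 115200)):
        test_uart_loopback(board)
        print(f"{board.name}: UART loopback passed")
```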
Procedia PDF Downloads 147
17 LncRNA-miRNA-mRNA Networks Associated with BCR-ABL T315I Mutation in Chronic Myeloid Leukemia
Authors: Adenike Adesanya, Nonthaphat Wong, Xiang-Yun Lan, Shea Ping Yip, Chien-Ling Huang
Abstract:
Background: The most challenging mutation of the oncokinase BCR-ABL is T315I, commonly known as the “gatekeeper” mutation, which is notorious for its strong resistance to almost all tyrosine kinase inhibitors (TKIs), especially imatinib. Therefore, this study aims to identify T315I-dependent downstream microRNA (miRNA) pathways associated with drug resistance in chronic myeloid leukemia (CML) for prognostic and therapeutic purposes. Methods: T315I-carrying K562 cell clones (K562-T315I) were generated by the CRISPR-Cas9 system. Imatinib-treated K562-T315I cells were subjected to small RNA library preparation and next-generation sequencing. Putative lncRNA-miRNA-mRNA networks were analyzed with (i) DESeq2 to extract differentially expressed miRNAs, using a Padj value of 0.05 as cut-off, (ii) STarMir to obtain potential miRNA response element (MRE) binding sites of selected miRNAs on lncRNA H19, (iii) miRDB, miRTarbase, and TargetScan to predict mRNA targets of selected miRNAs, (iv) IntaRNA to obtain putative interactions between H19 and the predicted mRNAs, (v) Cytoscape to visualize putative networks, and (vi) several pathway analysis platforms – Enrichr, PANTHER and ShinyGO – for pathway enrichment analysis. Moreover, mitochondria isolation and transcript quantification were adopted to determine the new mechanism involved in T315I-mediated resistance to CML treatment. Results: Verification of the CRISPR-mediated mutagenesis with digital droplet PCR detected a mutation abundance of ≥80%. Further validation showed viability of ≥90% by cell viability assay, and an intense phosphorylated CRKL protein band was detected with no observable change in BCR-ABL and c-ABL protein expression by Western blot. As reported by several investigations into hematological malignancies, we determined a 7-fold increase of H19 expression in K562-T315I cells; after imatinib treatment, a 9-fold increment was observed. DESeq2 revealed that 171 miRNAs were differentially expressed in K562-T315I, 112 of these miRNAs were identified to have MRE binding regions on H19, and 26 out of the 112 miRNAs were significantly downregulated. Adopting the seed-sequence analysis of these identified miRNAs, we obtained 167 mRNAs. 6 hub miRNAs (hsa-let-7b-5p, hsa-let-7e-5p, hsa-miR-125a-5p, hsa-miR-129-5p, and hsa-miR-372-3p) and 25 predicted genes were identified after constructing the hub miRNA-target gene network. These targets demonstrated putative interactions with H19 lncRNA and were mostly enriched in pathways related to cell proliferation, senescence, gene silencing, and pluripotency of stem cells. Further experimental findings have also shown the up-regulation of mitochondrial transcripts and lncRNA MALAT1 contributing to the lncRNA-miRNA-mRNA networks induced by the BCR-ABL T315I mutation. Conclusions: Our results indicate that lncRNA-miRNA regulators play a crucial role not only in leukemogenesis but also in drug resistance, considering the significant dysregulation and interactions in the K562-T315I cell model generated by CRISPR-Cas9. In silico analysis has further shown that lncRNAs H19 and MALAT1 bear several complementary miRNA sites. This implies that they could serve as sponges, hence sequestering the activity of the target miRNAs.
Keywords: chronic myeloid leukemia, imatinib resistance, lncRNA-miRNA-mRNA, T315I mutation
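A minimal sketch of the filtering and network-assembly logic described in steps (i)-(v) is shown below; the file and column names are assumptions, and the study's actual differential expression analysis was performed with DESeq2 in R rather than pandas.

```python
# Illustrative sketch (hypothetical file/column names): filter differentially
# expressed miRNAs by adjusted p-value and assemble lncRNA-miRNA-mRNA edges
# for visualization in Cytoscape.
import pandas as pd

deseq = pd.read_csv("deseq2_results.csv")             # columns: miRNA, log2FC, padj
de_mirnas = deseq[deseq["padj"] < 0.05]                # cut-off used in the study

h19_sites = pd.read_csv("starmir_h19_mres.csv")        # miRNAs with MREs on H19
targets = pd.read_csv("predicted_targets.csv")         # columns: miRNA, mRNA

hub = de_mirnas.merge(h19_sites, on="miRNA")           # DE miRNAs that bind H19
edges = pd.concat([
    pd.DataFrame({"source": "H19", "target": hub["miRNA"]}),
    hub.merge(targets, on="miRNA")[["miRNA", "mRNA"]]
       .rename(columns={"miRNA": "source", "mRNA": "target"}),
])
edges.to_csv("network_edges.csv", index=False)         # import into Cytoscape
```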
Procedia PDF Downloads 160
16 Application of the Pattern Method to Form the Stable Neural Structures in the Learning Process as a Way of Solving Modern Problems in Education
Authors: Liudmyla Vesper
Abstract:
The problems of modern education are large-scale and diverse. The aspirations of parents, teachers, and experts converge: everyone is interested in raising a generation of well-rounded, well-educated persons. Both family and society expect the future generation to be self-sufficient, desirable in the labor market, and capable of lifelong learning. Today's children have a powerful potential that is difficult to realize under traditional school approaches. Focusing on STEM education in practice often ends with the simple use of computers and gadgets during class. "Science", "technology", "engineering" and "mathematics" are difficult to combine within school and university curricula, which have not changed much during the last 10 years. Solving the problems of modern education largely depends on teacher-innovators and teacher-practitioners who develop and implement effective educational methods and programs, and who propose innovative pedagogical practices that allow students to master large-scale knowledge and apply it on a practical plane. Effective education entails the creation of stable neural structures during the learning process, which allow knowledge to be preserved and expanded throughout life. The author proposes a method of integrated lessons – cases based on maths patterns – for forming a holistic perception of the world. This method and program are scientifically substantiated and have more than 15 years of practical application experience in school and university classrooms. The first results of the practical application of the author's methodology and curriculum were announced at the International Conference "Teaching and Learning Strategies to Promote Elementary School Success", April 22-23, 2006, Yerevan, Armenia, within the IREX-administered 2004-2006 Multiple Component Education Project. This program is based on the concept of interdisciplinary connections and its implementation in the process of continuous learning. This allows students to retain and expand knowledge throughout life according to a single pattern. The pattern principle stores information on different subjects according to one scheme (pattern), using long-term memory; this is how neural structures are created. The author also suggests that a similar method could be successfully applied to the training of artificial intelligence neural networks; however, this assumption requires further research and verification. The educational method and program proposed by the author meet the modern requirements for education, which involve mastering various areas of knowledge starting from an early age. This approach makes it possible to engage the child's cognitive potential as much as possible and direct it to the preservation and development of individual talents. According to the methodology, at the early stages of learning, students understand the connections between school subjects (the so-called "sciences" and "humanities") and real life, and apply the knowledge gained in practice. This approach allows students to realize their natural creative abilities and talents, which makes it easier to navigate professional choices and find their place in life.
Keywords: science education, maths education, AI, neuroplasticity, innovative education problem, creativity development, modern education problem
Procedia PDF Downloads 63
15 A Review on Cyberchondria Based on Bibliometric Analysis
Authors: Xiaoqing Peng, Aijing Luo, Yang Chen
Abstract:
Background: Cyberchondria, an "emerging risk" of the information era, is a new abnormal pattern characterized by excessive or repeated online searches for health-related information and escalating health anxiety, which endangers people's physical and mental health and poses a huge threat to public health. Objective: To explore and discuss the research status, hotspots and trends of cyberchondria. Methods: Based on a total of 77 articles regarding "cyberchondria" extracted from the Web of Science from its inception until October 2019, the literature trends, countries, institutions and hotspots were analyzed by bibliometric analysis; the concept definition of cyberchondria, instruments, relevant factors, and treatment and intervention are discussed as well. Results: Since "cyberchondria" was put forward for the first time in 2001, the last two decades have witnessed a noticeable increase in the amount of literature; during 2014-2019 in particular, the number of publications quadrupled dramatically to 62, compared with only 15 before 2014, which shows that cyberchondria has become a new theme and hot topic in recent years. The United States was the most active contributor with the largest number of publications (23), followed by England (11) and Australia (11), while the leading institutions were Baylor University (7) and the University of Sydney (7), followed by Florida State University (4) and the University of Manchester (4). The WoS categories "Psychiatry/Psychology" and "Computer/Information Science" were the areas of greatest influence. The concept definition of cyberchondria is not completely unified internationally, but it is generally considered an abnormal behavioral pattern and emotional state and has been invoked to refer to the anxiety-amplifying effects of online health-related searches. The first and most frequently cited scale for measuring the severity of cyberchondria, the Cyberchondria Severity Scale (CSS), was developed in 2014; it conceptualized cyberchondria as a multidimensional construct consisting of compulsion, distress, excessiveness, reassurance, and mistrust of medical professionals, the last of which was later shown not to be necessary for the construct. Since then, Brazilian, German, Turkish, Polish and Chinese versions have been developed, improved and culturally adjusted, and the CSS was optimized to a simplified version (CSS-12) in 2019, all of which are worthy of further verification. The hotspots of cyberchondria research mainly focus on relevant factors such as intolerance of uncertainty, anxiety sensitivity, obsessive-compulsive disorder, internet addiction, abnormal illness behavior, the Whiteley index, and problematic internet use, trying to clarify the role played by "associated factors" and "anxiety-amplifying factors" in the development of cyberchondria, in order to better understand the aetiological links and pathways in the relationships between hypochondriasis, health anxiety and online health-related searches. Although the treatment and intervention of cyberchondria are still at an initial stage of exploration, there have been meaningful attempts to seek effective strategies from different aspects, such as online psychological treatment, network technology management, health information literacy improvement and public health services. Conclusion: Research on cyberchondria is in its infancy and deserves more attention.
A conceptual consensus on cyberchondria, a refined assessment tool, prospective studies conducted in various populations, and targeted treatments would be the main research directions in the near future.
Keywords: cyberchondria, hypochondriasis, health anxiety, online health-related searches
Procedia PDF Downloads 124
14 Raman Spectral Fingerprints of Healthy and Cancerous Human Colorectal Tissues
Authors: Maria Karnachoriti, Ellas Spyratou, Dimitrios Lykidis, Maria Lambropoulou, Yiannis S. Raptis, Ioannis Seimenis, Efstathios P. Efstathopoulos, Athanassios G. Kontos
Abstract:
Colorectal cancer is the third most common cancer diagnosed in Europe, according to the latest incidence data provided by the World Health Organization (WHO), and early diagnosis has proved to be key in reducing cancer-related mortality. In cases where surgical interventions are required for cancer treatment, accurate discrimination between healthy and cancerous tissues is critical for the postoperative care of the patient. The current study focuses on the ex vivo handling of surgically excised colorectal specimens and the acquisition of their spectral fingerprints using Raman spectroscopy. Acquired data were analyzed in an effort to discriminate, at the microscopic scale, between healthy and malignant margins. Raman spectroscopy is a spectroscopic technique with high detection sensitivity and a spatial resolution of a few micrometers. The spectral fingerprint produced during laser-tissue interaction is unique and characterizes the biostructure and its inflammatory or cancerous state. Numerous published studies have demonstrated the potential of the technique as a tool for discriminating between healthy and malignant tissues/cells either ex vivo or in vivo. However, the handling of the excised human specimens and the Raman measurement conditions remain challenging, unavoidably affecting measurement reliability and repeatability, as well as the technique's overall accuracy and sensitivity. Therefore, tissue handling has to be optimized and standardized to ensure preservation of cell integrity and hydration level. Various strategies have been implemented in the past, including the use of balanced salt solutions, small humidifiers or pump-reservoir-pipette systems. In the current study, human colorectal specimens of 10 × 5 mm were collected from 5 patients (to date) who underwent open surgery for colorectal cancer. A novel, non-toxic zinc-based fixative (Z7) was used for tissue preservation; Z7 demonstrates excellent protein preservation and protection against tissue autolysis. Micro-Raman spectra were recorded with a Renishaw inVia spectrometer from successive random 2-micrometer spots upon excitation at 785 nm to decrease the fluorescence background and avoid tissue photodegradation. A temperature-controlled approach was adopted to stabilize the tissue at 2 °C, thus minimizing dehydration effects and consequent focus drift during measurement. A broad spectral range, 500-3200 cm-1, was covered with five consecutive full scans that lasted 20 minutes in total. The averaged spectra were used for least-squares fitting analysis of the Raman modes. Subtle Raman differences were observed between normal and cancerous colorectal tissues, mainly in the intensities of the 1556 cm-1 and 1628 cm-1 Raman modes, which correspond to ν(C=C) vibrations in porphyrins, as well as in the range of 2800-3000 cm-1 due to CH2 stretching of lipids and CH3 stretching of proteins. Raman spectra evaluation was supported by histological findings from twin specimens. This study demonstrates that Raman spectroscopy may constitute a promising tool for real-time verification of clear margins in colorectal cancer open surgery.
Keywords: colorectal cancer, Raman spectroscopy, malignant margins, spectral fingerprints
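The least-squares fitting of individual Raman modes mentioned above can be sketched as follows; the Lorentzian line shape, the synthetic data, and the initial guesses are illustrative assumptions, not the exact fitting procedure used in the study.

```python
# Illustrative least-squares fit of a single Raman mode (Lorentzian line shape
# on a linear background) using synthetic data; all parameters are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amplitude, center, width, slope, offset):
    return amplitude * width**2 / ((x - center)**2 + width**2) + slope * x + offset

# Synthetic spectrum around the 1556 cm-1 mode, for demonstration only.
shift = np.linspace(1500, 1600, 400)
intensity = lorentzian(shift, 120.0, 1556.0, 6.0, 0.02, 10.0)
intensity += np.random.default_rng(0).normal(0, 2.0, shift.size)   # noise

p0 = [100.0, 1555.0, 5.0, 0.0, 0.0]                  # initial parameter guesses
popt, pcov = curve_fit(lorentzian, shift, intensity, p0=p0)
print("fitted center (cm-1):", popt[1], "FWHM (cm-1):", 2 * popt[2])
```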
Procedia PDF Downloads 92
13 Impact of Air Pressure and Outlet Temperature on Physicochemical and Functional Properties of Spray-dried Skim Milk Powder
Authors: Adeline Meriaux, Claire Gaiani, Jennifer Burgain, Frantz Fournier, Lionel Muniglia, Jérémy Petit
Abstract:
The spray-drying process is widely used for the production of dairy powders for the food and pharmaceutical industries. It involves the atomization of a liquid feed into fine droplets, which are subsequently dried through contact with a hot air flow. The resulting powders permit transportation cost reduction and shelf life increase, but can also exhibit various interesting functionalities (flowability, solubility, protein modification or acid gelation), depending on operating conditions and milk composition. Indeed, particle porosity, surface composition, lactose crystallization, protein denaturation, protein association or crust formation may change. Links between spray-drying conditions and the physicochemical and functional properties of powders were investigated by a design of experiments methodology and analyzed by principal component analysis. Quadratic models were developed, and multicriteria optimization was carried out by means of a genetic algorithm. At the time of abstract submission, verification spray-drying trials are ongoing. To perform experiments, milk from a dairy farm was collected, skimmed, frozen and spray-dried at different air pressures (between 1 and 3 bars) and outlet temperatures (between 75 and 95 °C). Dry matter, mineral content and protein content were determined by standard methods. Solubility index, absorption index and hygroscopicity were determined by methods found in the literature. Particle size distributions were obtained by laser diffraction granulometry. The location of the powder color in the CIELAB color space and the water activity were characterized by a colorimeter and an aw-value meter, respectively. Flow properties were characterized with an FT4 powder rheometer; in particular, compressibility and shearing tests were performed. Air pressure and outlet temperature are key factors that directly impact the drying kinetics and powder characteristics during the spray-drying process. It was shown that the air pressure affects the particle size distribution by impacting the size of the droplets exiting the nozzle. Moreover, small particles lead to a more cohesive powder and a less saturated powder color. A higher outlet temperature results in lower-moisture particles, which are less sticky; this can explain a spray-drying yield increase and the higher cohesiveness. It also leads to particles with low water activity because of the intense evaporation rate. However, it induces a high hygroscopicity; thus, powders tend to get wet rapidly if they are not well stored. On the other hand, high temperature provokes a decrease of native serum proteins, which is positively correlated with gelation properties (gel point and firmness). Partial denaturation of serum proteins can improve the functional properties of the powder. The control of air pressure and outlet temperature during the spray-drying process significantly affects the physicochemical and functional properties of the powder. This study permitted a better understanding of the links between the physicochemical and functional properties of the powder and their correlations with air pressure and outlet temperature. Therefore, mathematical models have been developed, and the use of a genetic algorithm will allow the optimization of powder functionalities.
Keywords: dairy powders, spray-drying, powders functionalities, design of experiment
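A minimal sketch of the quadratic (response-surface) modeling step is given below; the coefficients are fitted by ordinary least squares on hypothetical design-of-experiments data, and the factor levels and responses shown are invented values within the reported ranges.

```python
# Hypothetical sketch: fit a quadratic response-surface model
#   y = b0 + b1*P + b2*T + b3*P*T + b4*P^2 + b5*T^2
# linking air pressure (P, bar) and outlet temperature (T, degC) to a powder
# property (e.g. hygroscopicity); the data values are invented for illustration.
import numpy as np

P = np.array([1.0, 1.0, 3.0, 3.0, 2.0, 2.0, 1.0, 3.0, 2.0])
T = np.array([75., 95., 75., 95., 85., 85., 85., 85., 75.])
y = np.array([8.1, 11.4, 7.5, 10.9, 9.0, 9.2, 9.6, 8.8, 8.3])  # invented responses

X = np.column_stack([np.ones_like(P), P, T, P * T, P**2, T**2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("quadratic model coefficients:", np.round(coeffs, 4))

# The fitted model could then feed a multicriteria optimizer (e.g. a genetic
# algorithm) searching for pressure/temperature settings giving target properties.
```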
Procedia PDF Downloads 93
12 Study of Operating Conditions Impact on Physicochemical and Functional Properties of Dairy Powder Produced by Spray-drying
Authors: Adeline Meriaux, Claire Gaiani, Jennifer Burgain, Frantz Fournier, Lionel Muniglia, Jérémy Petit
Abstract:
The spray-drying process is widely used for the production of dairy powders for the food and pharmaceutical industries. It involves the atomization of a liquid feed into fine droplets, which are subsequently dried through contact with a hot air flow. The resulting powders permit transportation cost reduction and shelf life increase, but can also exhibit various interesting functionalities (flowability, solubility, protein modification or acid gelation), depending on operating conditions and milk composition. Indeed, particle porosity, surface composition, lactose crystallization, protein denaturation, protein association or crust formation may change. Links between spray-drying conditions and the physicochemical and functional properties of powders were investigated by a design of experiments methodology and analyzed by principal component analysis. Quadratic models were developed, and multicriteria optimization was carried out by means of a genetic algorithm. At the time of abstract submission, verification spray-drying trials are ongoing. To perform experiments, milk from a dairy farm was collected, skimmed, frozen and spray-dried at different air pressures (between 1 and 3 bars) and outlet temperatures (between 75 and 95 °C). Dry matter, mineral content and protein content were determined by standard methods. Solubility index, absorption index and hygroscopicity were determined by methods found in the literature. Particle size distributions were obtained by laser diffraction granulometry. The location of the powder color in the CIELAB color space and the water activity were characterized by a colorimeter and an aw-value meter, respectively. Flow properties were characterized with an FT4 powder rheometer; in particular, compressibility and shearing tests were performed. Air pressure and outlet temperature are key factors that directly impact the drying kinetics and powder characteristics during the spray-drying process. It was shown that the air pressure affects the particle size distribution by impacting the size of the droplets exiting the nozzle. Moreover, small particles lead to a more cohesive powder and a less saturated powder color. A higher outlet temperature results in lower-moisture particles, which are less sticky; this can explain a spray-drying yield increase and the higher cohesiveness. It also leads to particles with low water activity because of the intense evaporation rate. However, it induces a high hygroscopicity; thus, powders tend to get wet rapidly if they are not well stored. On the other hand, high temperature provokes a decrease of native serum proteins, which is positively correlated with gelation properties (gel point and firmness). Partial denaturation of serum proteins can improve the functional properties of the powder. The control of air pressure and outlet temperature during the spray-drying process significantly affects the physicochemical and functional properties of the powder. This study permitted a better understanding of the links between the physicochemical and functional properties of the powder and their correlations with air pressure and outlet temperature. Therefore, mathematical models have been developed, and the use of a genetic algorithm will allow the optimization of powder functionalities.
Keywords: dairy powders, spray-drying, powders functionalities, design of experiment
Procedia PDF Downloads 65
11 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects
Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm
Abstract:
Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming the well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot based on Euler-Bernoulli beam theory. Nominally (unpowered), the robot lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot, and from this, the spatial distribution of friction forces between ground and robot. (iii) By combining this spatial friction distribution with the shape of the soft robot as a function of time, as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil, assuming the piezoelectric properties of commercially available thin-film piezoelectric actuators. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each of which consisted of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown, both for the full deformation shape and for the motion of the robot.
Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology
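A highly simplified sketch of the discrete-segment shape computation is given below; it omits the full Euler-Bernoulli and torque-balance treatment and only illustrates how per-segment curvatures set by actuator voltages are integrated into a 2D shape, with a crude clamp where the sheet would otherwise penetrate the ground. All numerical values are assumptions.

```python
# Simplified illustration (not the paper's full model): integrate per-segment
# curvatures, set by actuator voltages, into a 2D sheet shape and flatten any
# point that would otherwise penetrate the ground. Values are assumptions.
import numpy as np

n_segments = 50
seg_len = 2e-3                              # 2 mm segments (assumed)
voltages = np.zeros(n_segments)
voltages[10:20] = 1.0                       # one actuator region energized
curvature = 5.0 * voltages                  # curvature per volt (assumed gain, 1/m)

theta = 0.0
x, z = [0.0], [0.0]
for kappa in curvature:
    theta += kappa * seg_len                # bending angle accumulated along sheet
    x.append(x[-1] + seg_len * np.cos(theta))
    z.append(z[-1] + seg_len * np.sin(theta))
x, z = np.array(x), np.array(z)

z = np.maximum(z, 0.0)                      # crude ground-contact clamp; the paper
                                            # instead solves lift-off self-consistently
print("max lift-off height (m):", z.max())
```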
Procedia PDF Downloads 181
10 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)
Authors: Salvatore Luongo, Carlo Luongo
Abstract:
This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system called TCAS, which is based on classic transponder technology; this system dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, so this reduction is the main objective of the TSAA application. The major difference between the original conflict detection application and the TSAA application is that conflict detection is focused on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases the flight crew's traffic situation awareness by providing alerts of traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Significant effort has been spent in the design process and the code generation in order to maximize efficiency and performance in terms of computational load and memory consumption reduction. The TSAA architecture is divided into two high-level systems: the "threats database" and the "conflict detector". The first receives traffic data from the ADS-B device and stores the history of each target's data. The conflict detector module estimates ownship and target trajectories in order to detect possible future losses of separation between ownship and each target. Finally, the alerts are verified by additional conflict verification logic in order to prevent possible undesirable behaviors of the alert flag. In order to reduce the computational load, a pre-check evaluation module is used. This pre-check is only a computational optimization, so the performance of the conflict detector system is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both target and ownship. This provides greater accuracy and avoids step-by-step propagation, which requires a larger computational load. Furthermore, the pre-check permits excluding targets that are certainly not threats, using an analytical and efficient geometrical approach, in order to decrease the computational load of the subsequent modules. This software improvement is not suggested by FAA documents, and so it is the main innovation of this work. The efficiency and efficacy of this enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard.
The computational load reduction also allows the installation of the TSAA application on devices hosting multiple applications and/or having low capacity in terms of available memory and computational capabilities.
Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification
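The geometric pre-check idea can be illustrated with a closest-point-of-approach (CPA) test in the horizontal plane; the state vectors, protection radius, and look-ahead horizon below are arbitrary examples, not the certified TSAA logic.

```python
# Illustrative pre-check (not the certified TSAA logic): analytically compute the
# horizontal closest point of approach (CPA) between ownship and a target and
# discard targets that cannot violate an assumed protection volume.
import numpy as np

def cpa_miss_distance(own_pos, own_vel, tgt_pos, tgt_vel, horizon_s):
    """Relative-motion CPA within a look-ahead horizon (2D, flat-earth)."""
    dp = np.asarray(tgt_pos, float) - np.asarray(own_pos, float)   # relative position
    dv = np.asarray(tgt_vel, float) - np.asarray(own_vel, float)   # relative velocity
    speed2 = dv @ dv
    t_cpa = 0.0 if speed2 == 0 else np.clip(-(dp @ dv) / speed2, 0.0, horizon_s)
    return np.linalg.norm(dp + dv * t_cpa), t_cpa

# Example values (meters, meters/second) chosen arbitrarily.
miss, t_cpa = cpa_miss_distance((0, 0), (60, 0), (5000, 2000), (-50, -10), 120)
PROTECTION_RADIUS_M = 1500.0                       # assumed threshold
if miss > PROTECTION_RADIUS_M:
    print(f"target excluded by pre-check (CPA {miss:.0f} m at t={t_cpa:.0f} s)")
else:
    print("target passed to the full step-by-step conflict detector")
```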
Procedia PDF Downloads 286
9 Study of Objectivity, Reliability and Validity of Pedagogical Diagnostic Parameters Introduced in the Framework of a Specific Research
Authors: Emiliya Tsankova, Genoveva Zlateva, Violeta Kostadinova
Abstract:
The challenges modern education faces undoubtedly require reforms and innovations aimed at the reconceptualization of existing educational strategies and the introduction of new concepts, techniques and technologies related to recasting the aims of education and remodeling its content and methodology, which would guarantee the alignment of our education with basic European values. Aim: The aim of the current research is the development of a didactic technology for assessing the applicability and efficacy of game techniques in pedagogic practice, calibrated to specific content and the age specificity of learners, as well as for evaluating the efficacy of such approaches in facilitating the acquisition of biological knowledge at a higher theoretical level. Results: In this research, we examine the objectivity, reliability and validity of two newly introduced diagnostic parameters for assessing the durability of the acquired knowledge. A pedagogic experiment has been carried out to verify the hypothesis that the introduction of game techniques in biological education leads to an increase in the quantity, quality and durability of the knowledge acquired by students. To monitor the effect of the game-based pedagogical technique on the durability of the acquired knowledge, a test-based examination on the same content was administered, after a six-month period, to students from a control group (CG) and students from an experimental group (EG). The analysis is based on: 1. A study of the statistical significance of the differences between the tests for the CG and the EG applied after a six-month period, which, however, is not indicative of the presence or absence of a marked effect of the applied pedagogic technique in cases when the entry levels of the two groups are different. 2. For a more reliable comparison, independent of the entry level of each group, another parameter, an "indicator of efficacy of game techniques for the durability of knowledge", which has been used for assessing the achievement results and the durability attained with this methodology of education. The monitoring of the studied parameters in their dynamic unfolding across different age groups of learners unquestionably reveals a positive effect of the introduction of game techniques in education with respect to the durability and permanence of acquired knowledge. Methods: In the current research, the following battery of research and diagnostic methods and techniques has been employed: theoretical analysis and synthesis; an actual pedagogical experiment; a questionnaire; didactic testing; and mathematical and statistical methods. The data obtained have been used for the qualitative and quantitative analysis of the results, which reflect the efficacy of the applied methodology. Conclusion: The didactic model of the parameters researched in the framework of this specific study of pedagogic diagnostics is based on a general, interdisciplinary approach. The enhanced durability of the acquired knowledge proves the transition of that knowledge from short-term storage into the long-term memory of pupils and students, which justifies the conclusion that didactic play has beneficial effects on learners' cognitive skills. The innovations in teaching enhance motivation, creativity and independent cognitive activity in the process of acquiring the material taught.
The innovative methods allow for untraditional means of assessing the level of knowledge acquisition. This makes possible the timely discovery of knowledge gaps and the introduction of compensatory techniques, which in turn leads to deeper and more durable acquisition of knowledge.
Keywords: objectivity, reliability and validity of pedagogical diagnostic parameters introduced in the framework of a specific research
Procedia PDF Downloads 394
8 TeleEmergency Medicine: Transforming Acute Care through Virtual Technology
Authors: Ashley L. Freeman, Jessica D. Watkins
Abstract:
TeleEmergency Medicine (TeleEM) is an innovative approach leveraging virtual technology to deliver specialized emergency medical care across diverse healthcare settings, including internal acute care and critical access hospitals, remote patient monitoring, and nurse triage escalation, in addition to external emergency departments, skilled nursing facilities, and community health centers. TeleEM represents a significant advancement in the delivery of emergency medical care, providing healthcare professionals with the capability to deliver expertise that closely mirrors in-person emergency medicine, beyond geographical boundaries. Through qualitative research, this extension of timely, high-quality care has proven to address the critical needs of patients in remote and underserved areas. TeleEM's service design allows for the expansion of existing services and the establishment of new ones in diverse geographic locations. This ensures that healthcare institutions can readily scale and adapt services to evolving community requirements by leveraging on-demand (non-scheduled) telemedicine visits through the deployment of multiple video solutions. In terms of financial management, TeleEM currently employs billing suppression and subscription models to enhance accessibility for a wide range of healthcare facilities. Plans are in motion to transition to a billing system routing charges through a third-party vendor, further enhancing financial management flexibility. To address state licensure concerns, a patient location verification process has been integrated under the guidance of legal counsel and compliance authorities. The TeleEM workflow is designed to terminate if the patient is not physically located within licensed regions at the time of the virtual connection, alleviating legal uncertainties. A distinctive and pivotal feature of TeleEM is the introduction of the TeleEmergency Medicine Care Team Assistant (TeleCTA) role. TeleCTAs collaborate closely with TeleEM physicians, leading to enhanced service activation, streamlined coordination, and workflow and data efficiencies. In the last year, more than 800 TeleEM sessions have been conducted, of which 680 were initiated by internal acute care and critical access hospitals, as evidenced by quantitative research. Without this service, many of these cases would have necessitated patient transfers. Barriers to success were examined through thorough medical record review and data analysis, which identified inaccuracies in documentation leading to activation delays, limitations in billing capabilities, and data distortion, as well as the intricacies of managing varying workflows and device setups. TeleEM represents a transformative advancement in emergency medical care that nurtures collaboration and innovation. Not only has it advanced the delivery of emergency medical care via virtual technology, through focus group participation with key stakeholders, rigorous attention to legal and financial considerations, and the implementation of robust documentation tools and the TeleCTA role, but it has also set the stage for overcoming geographic limitations. TeleEM assumes a notable position in the field of telemedicine by enhancing patient outcomes and expanding access to emergency medical care while mitigating licensure risks and ensuring compliant billing.
Keywords: emergency medicine, TeleEM, rural healthcare, telemedicine
Procedia PDF Downloads 84
7 Internet of Assets: A Blockchain-Inspired Academic Program
Authors: Benjamin Arazi
Abstract:
Blockchain is the technology behind cryptocurrencies like Bitcoin. It revolutionizes the meaning of trust in the sense of offering total reliability without relying on any central entity that controls or supervises the system. The Wall Street Journal states: "Blockchain Marks the Next Step in the Internet's Evolution". Blockchain was listed as #1 in LinkedIn's The Learning Blog list of "most in-demand hard skills needed in 2020". As stated there: "Blockchain's novel way to store, validate, authorize, and move data across the internet has evolved to securely store and send any digital asset". GSMA, a leading telco organization of mobile communications operators, declared that "Blockchain has the potential to be for value what the Internet has been for information". Motivated by these seminal observations, this paper presents the foundations of a blockchain-based "Internet of Assets" academic program that joins under one roof leading application areas characterized by the transfer of assets over communication lines. Two such areas, which are pillars of our economy, are Fintech (financial technology) and mobile communications services; the next application in line is healthcare. These challenges are met based on the available extensive professional literature. Blockchain-based asset communication rests on extending the principle of Bitcoin, starting with the basic question: if digital money that travels across the universe can 'prove its own validity', can this principle be applied to digital content? A groundbreaking positive answer here led to the concept of the "smart contract" and consequently to DLT - Distributed Ledger Technology, where the word 'distributed' refers to the absence of reliable central entities or trusted third parties. The terms blockchain and DLT are frequently used interchangeably in various application areas. The World Bank Group compiled comprehensive reports analyzing the contribution of DLT/blockchain to Fintech. The European Central Bank and Bank of Japan are engaged in Project Stella, "Balancing confidentiality and auditability in a distributed ledger environment". 130 DLT/blockchain-focused Fintech startups are now operating in Switzerland. Blockchain's impact on mobile communications services is treated in detail by leading organizations. The TM Forum is a global industry association in the telecom industry, with over 850 member companies, mainly mobile operators, that generate US$2 trillion in revenue and serve five billion customers across 180 countries. From their perspective: "Blockchain is considered one of the digital economy's most disruptive technologies". Samples of blockchain contributions to Fintech (taken from a World Bank document): decentralization and disintermediation; greater transparency and easier auditability; automation and programmability; immutability and verifiability; gains in speed and efficiency; cost reductions; enhanced cyber security resilience. Samples of blockchain contributions to the telco industry: establishing identity verification; records of transactions for easy cost settlement; automatic triggering of roaming contracts, which enables near-instantaneous charging and a reduction in roaming fraud; decentralized roaming agreements; settling accounts for costs incurred in accordance with agreed tariffs. This clearly demonstrates an academic education structure where the fundamental technologies are studied in classes together with these two application areas. Advanced courses treating specific implementations then follow separately.
All are under the roof of "Internet of Assets".
Keywords: blockchain, education, financial technology, mobile telecommunications services
Procedia PDF Downloads 180
6 Implementation of Green Deal Policies and Targets in Energy System Optimization Models: The TEMOA-Europe Case
Authors: Daniele Lerede, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
The European Green Deal is the first internationally agreed set of measures to counter climate change and environmental degradation. Besides the main target of reducing emissions by at least 55% by 2030, it sets the target of accompanying European countries through an energy transition that will make the European Union a modern, resource-efficient, and competitive net-zero emissions economy by 2050, decoupling growth from the use of resources and ensuring a fair adaptation of all social categories to the transformation process. While the general framework for realizing the purposes of the Green Deal dates back to 2019, strategies and policies keep being developed to cope with recent circumstances and achievements. However, general long-term measures like the Circular Economy Action Plan, the proposals to shift from fossil natural gas to renewable and low-carbon gases, in particular biomethane and hydrogen, and to end the sale of gasoline and diesel cars by 2035, will all have significant effects on the evolution of energy supply and demand across the next decades. The interactions between energy supply and demand over long time frames are usually assessed via energy system models to derive useful insights for policymaking and to address technological choices and research and development. TEMOA-Europe is a newly developed energy system optimization model instance based on the minimization of the total cost of the system under analysis, adopting a technologically integrated, detailed, and explicit formulation and considering the evolution of the system in partial equilibrium in competitive markets with perfect foresight. TEMOA-Europe is developed on the TEMOA platform, an open-source modeling framework implemented entirely in Python, thereby allowing third-party verification even of large and complex models. TEMOA-Europe is based on a single-region representation of the European Union and EFTA countries on a time scale between 2005 and 2100, relying on a set of assumptions for socio-economic developments based on projections by the International Energy Outlook and on a large technological dataset covering 7 sectors: the upstream and power sectors for the production of all energy commodities, and the end-use sectors, including industry, transport, residential, commercial and agriculture. TEMOA-Europe also includes an updated hydrogen module considering its production, storage, transportation, and utilization. Besides, it can rely on a wide set of innovative technologies, ranging from nuclear fusion and electricity plants equipped with CCS in the power sector to electrolysis-based steel production processes in the industrial sector – with a techno-economic characterization based on public literature – to produce insightful energy scenarios and especially to cope with the very long analyzed time scale. The aim of this work is to examine in detail the scheme of measures and policies for the realization of the purposes of the Green Deal and to transform them into a set of constraints and new socio-economic development pathways. Based on them, TEMOA-Europe will be used to produce and comparatively analyze scenarios to assess the consequences of Green Deal-related measures on the future evolution of the energy mix over the whole energy system in an economic optimization environment.
Keywords: European Green Deal, energy system optimization modeling, scenario analysis, TEMOA-Europe
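A toy version of the cost-minimization-with-emissions-cap formulation is sketched below using scipy's linear programming routine; the two technologies, costs, emission factors, and the cap are invented numbers and do not represent TEMOA-Europe's actual database or structure.

```python
# Toy illustration (not TEMOA-Europe itself): choose generation from two
# technologies to meet demand at minimum total cost under an emissions cap.
from scipy.optimize import linprog

demand_twh = 100.0
cap_mt_co2 = 20.0                    # invented Green-Deal-style emissions cap (MtCO2)

# Technologies: [gas, wind]; costs in MEUR/TWh, emissions in MtCO2/TWh (invented).
cost = [60.0, 70.0]
emis = [0.35, 0.0]

# minimize cost @ x  s.t.  emis @ x <= cap,  gas + wind == demand,  x >= 0
res = linprog(
    c=cost,
    A_ub=[emis], b_ub=[cap_mt_co2],
    A_eq=[[1.0, 1.0]], b_eq=[demand_twh],
    bounds=[(0, None), (0, None)],
)
print("gas/wind generation (TWh):", res.x, "total cost (MEUR):", res.fun)
```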
Procedia PDF Downloads 106
5 Burkholderia Cepacia ST 767 Causing a Three Years Nosocomial Outbreak in a Hemodialysis Unit
Authors: Gousilin Leandra Rocha Da Silva, Stéfani T. A. Dantas, Bruna F. Rossi, Erika R. Bonsaglia, Ivana G. Castilho, Terue Sadatsune, Ary Fernandes Júnior, Vera L. M. Rall
Abstract:
Kidney failure causes decreased diuresis and accumulation of nitrogenous substances in the body. To increase patient survival, hemodialysis is used as a partial substitute for renal function. However, contamination of the water used in this treatment, causing bacteremia in patients, is a worldwide concern. The Burkholderia cepacia complex (Bcc), a group of bacteria with more than 20 species, is frequently isolated from hemodialysis water samples and comprises opportunistic bacteria affecting immunosuppressed patients; its wide variety of virulence factors, in addition to innate resistance to several antimicrobial agents, contributes to its persistence in the hospital environment and to pathogenesis in the host. The objective of the present work was to molecularly and phenotypically characterize Bcc isolates collected from the water and dialysate of a hemodialysis unit and from the blood of patients at a public hospital in Botucatu, São Paulo, Brazil, between 2019 and 2021. We used 33 Bcc isolates previously obtained from blood cultures from patients with bacteremia undergoing hemodialysis treatment (2019-2021) and 24 isolates obtained from water and dialysate samples in the hemodialysis unit (same period). The recA gene was sequenced to identify the specific species within the Bcc group. All isolates were tested for the presence of genes that encode virulence factors, such as cblA, esmR, zmpA and zmpB. Considering the epidemiology of the outbreak, the Bcc isolates were molecularly characterized by Multilocus Sequence Typing (MLST) and by pulsed-field gel electrophoresis (PFGE). The verification and quantification of biofilm in a polystyrene microplate were performed by submitting the isolates to different incubation temperatures (20 °C, the average water temperature, and 35 °C, the optimal temperature for growth of the group). The antibiogram was performed with disc diffusion tests on agar, using discs impregnated with cefepime (30 µg), ceftazidime (30 µg), ciprofloxacin (5 µg), gentamicin (10 µg), imipenem (10 µg), amikacin (30 µg), sulfamethoxazole/trimethoprim (23.75/1.25 µg) and ampicillin/sulbactam (10/10 µg). The presence of ZmpB was identified in all isolates and ZmpA in 96.5% of the isolates, while none of them presented the cblA and esmR genes. The antibiogram of the 33 human isolates indicated that all were resistant to gentamicin, colistin, ampicillin/sulbactam and imipenem; 16 (48.5%) isolates were resistant to amikacin, and lower rates of resistance were observed for meropenem, ceftazidime, cefepime, ciprofloxacin and piperacillin/tazobactam (6.1%). All isolates were sensitive to sulfamethoxazole/trimethoprim, levofloxacin and tigecycline. As for the water isolates, resistance was observed only to gentamicin (34.8%) and imipenem (17.4%). According to the PFGE results, all isolates obtained from humans and water belonged to the same pulsotype (1), which was identified by recA sequencing as B. cepacia, belonging to sequence type ST-767. The observation of a single pulsotype over three years shows the persistence of this isolate in the pipeline, contaminating patients undergoing hemodialysis, despite the routine disinfection of the water with peracetic acid.
This persistence is probably due to biofilm production, which protects the bacteria from disinfectants. Making the scenario even more critical, several isolates proved to be multidrug-resistant (resistant to at least three groups of antimicrobials), which makes patient care even more difficult. Keywords: hemodialysis, Burkholderia cepacia, PFGE, MLST, multidrug resistance
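The resistance percentages reported above are simple proportions over the isolate panel (e.g., 16 of 33 human isolates, or 48.5%, resistant to amikacin). As a minimal illustration, the sketch below computes such rates from a hypothetical isolate-by-antibiotic susceptibility table; the records shown are invented and do not reproduce the study data.

# Illustrative sketch only: antibiogram resistance rates from a hypothetical
# isolate-by-antibiotic table ("R" = resistant, "S" = susceptible). The three
# records below are invented, not the 33 human isolates of the study.
from collections import Counter

isolates = [
    {"id": "H01", "gentamicin": "R", "amikacin": "R", "ceftazidime": "S"},
    {"id": "H02", "gentamicin": "R", "amikacin": "S", "ceftazidime": "S"},
    {"id": "H03", "gentamicin": "R", "amikacin": "R", "ceftazidime": "R"},
]

def resistance_rates(records, antibiotics):
    # Count resistant calls per antibiotic, then convert to percentages.
    counts = Counter()
    for rec in records:
        for ab in antibiotics:
            if rec.get(ab) == "R":
                counts[ab] += 1
    return {ab: round(100.0 * counts[ab] / len(records), 1) for ab in antibiotics}

print(resistance_rates(isolates, ["gentamicin", "amikacin", "ceftazidime"]))
# {'gentamicin': 100.0, 'amikacin': 66.7, 'ceftazidime': 33.3}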
Procedia PDF Downloads 101
4 Radioprotective Effects of Superparamagnetic Iron Oxide Nanoparticles Used as a Magnetic Resonance Imaging Contrast Agent for Magnetic Resonance Imaging-Guided Radiotherapy
Authors: Michael R. Shurin, Galina Shurin, Vladimir A. Kirichenko
Abstract:
Background. Visibility of hepatic malignancies is poor on non-contrast imaging during daily target verification prior to radiation therapy on MRI-guided linear accelerators (MR-Linac). Ferumoxytol® (Feraheme, AMAG Pharmaceuticals, Waltham, MA) is a superparamagnetic iron oxide nanoparticle (SPION) agent that is increasingly utilized off-label as a hepatic MRI contrast agent. This agent has the advantage of providing a functional assessment of the liver based upon its uptake by hepatic Kupffer cells in proportion to vascular perfusion, resulting in strong T1, T2, and T2* relaxation effects and enhanced contrast of malignant tumors, which lack Kupffer cells. The latter characteristic has recently been utilized for MRI-guided radiotherapy planning with precision targeting of liver malignancies. However, the potential radiotoxicity of SPIONs has never been addressed with respect to their safe use as an MRI contrast agent during liver radiotherapy on an MR-Linac. This study defines the radiomodulating properties of SPIONs in vitro on human monocyte and macrophage cell lines exposed to 60Co gamma-rays within the clinical radiotherapy dose range. Methods. Human monocyte and macrophage cell lines in culture were loaded with a clinically relevant concentration of Ferumoxytol (30µg/ml) for 2 and 24 h and irradiated with 3 Gy, 5 Gy, and 10 Gy. Cells were washed and cultured for an additional 24 and 48 h prior to assessing their phenotypic activation by flow cytometry and their function, including viability (Annexin V/PI assay), proliferation (MTT assay), and cytokine expression (Luminex assay). Results. Our results revealed that SPIONs affected both human monocytes and macrophages in vitro. Specifically, iron oxide nanoparticles decreased radiation-induced apoptosis and prevented radiation-induced inhibition of human monocyte proliferative activity. Furthermore, Ferumoxytol protected monocytes from radiation-induced modulation of phenotype. For instance, while irradiation decreased polarization of monocytes toward the CD11b+CD14+ and CD11bnegCD14neg phenotypes, Ferumoxytol prevented these effects. In macrophages, Ferumoxytol counteracted the ability of radiation to up-regulate cell polarization toward the CD11b+CD14+ phenotype and prevented radiation-induced down-regulation of HLA-DR and CD86 expression. Finally, Ferumoxytol uptake by human monocytes down-regulated expression of the pro-inflammatory chemokines MIP-1α (macrophage inflammatory protein 1α), MIP-1β (CCL4), and RANTES (CCL5). In macrophages, Ferumoxytol reversed the expression of IL-1RA, IL-8, IP-10 (CXCL10), and TNF-α, and up-regulated expression of MCP-1 (CCL2) and MIP-1α in irradiated cells. Conclusion. The SPION agent Ferumoxytol increases the resistance of human monocytes to radiation-induced cell death in vitro and supports an anti-inflammatory phenotype of human macrophages under radiation. The effect is radiation dose-dependent and depends on the duration of Feraheme uptake. This study also provides strong evidence that SPIONs reverse the effect of radiation on the expression of pro-inflammatory cytokines involved in the initiation and development of radiation-induced liver damage. Correlative translational work at our institution will directly assess the cytoprotective effects of Ferumoxytol on human Kupffer cells in vitro, together with ex vivo analysis of explanted liver specimens in a subset of patients receiving Feraheme-enhanced MRI-guided radiotherapy to primary liver tumors as a bridge to liver transplant. Keywords: superparamagnetic iron oxide nanoparticles, radioprotection, magnetic resonance imaging, liver
Procedia PDF Downloads 73
3 Investigation of Delamination Process in Adhesively Bonded Hardwood Elements under Changing Environmental Conditions
Authors: M. M. Hassani, S. Ammann, F. K. Wittel, P. Niemz, H. J. Herrmann
Abstract:
Application of engineered wood, especially in the form of glued-laminated timber, has increased significantly. Recent progress in plywood made of high-strength, high-stiffness hardwoods, such as European beech, gives designers more freedom through increased dimensional stability and load-bearing capacity. However, the strong hygric dependence of virtually all mechanical properties renders many innovative ideas futile. The tendency of hardwood toward higher moisture sorption and swelling coefficients leads to significant residual stresses in glued-laminated configurations, cross-laminated patterns in particular. These stress fields cause the initiation and evolution of cracks in the bond-lines, resulting in interfacial de-bonding, loss of structural integrity, and reduced load-carrying capacity. Consequently, delamination of glued-laminated timber made of hardwood elements can be considered the dominant failure mechanism in such composite elements. In addition, long-term creep and mechano-sorption under changing environmental conditions lead to loss of stiffness and can amplify delamination growth over the lifetime of a structure, even after decades. In this study, we investigate the delamination process of adhesively bonded hardwood (European beech) elements subjected to changing climatic conditions. To gain further insight into the long-term performance of adhesively bonded elements during the design phase of new products, the development and verification of an authentic moisture-dependent constitutive model for various species is of great significance. Since a comprehensive moisture-dependent rheological model comprising all potentially emerging deformation mechanisms has been missing up to now, a 3D orthotropic elasto-plastic, visco-elastic, mechano-sorptive material model for wood was developed, with all material constants defined as functions of moisture content. Apart from the solid wood adherends, the adhesive layer also plays a crucial role in the generation and distribution of the interfacial stresses. The adhesive can be treated as a continuum layer of finite elements, represented as a homogeneous and isotropic material. To obtain a realistic assessment of the mechanical performance of the adhesive layer and a detailed look at the interfacial stress distributions, a generic constitutive model including all potentially activated deformation modes, namely elastic, plastic, and visco-elastic creep, was developed. We focused our studies on the three most common adhesive systems for structural timber engineering: one-component polyurethane adhesive (PUR), melamine-urea-formaldehyde (MUF), and phenol-resorcinol-formaldehyde (PRF). The corresponding numerical integration approaches, based on additive decomposition of the total strain, were implemented within the ABAQUS FEM environment by means of the user subroutine UMAT. To predict the true stress state, we perform a history-dependent sequential moisture-stress analysis using the developed material models for both the wood substrate and the adhesive layer. Prediction of the delamination process is founded on the fracture-mechanical properties of the adhesive bond-line, measured at different moisture contents, and on the application of cohesive interface elements. Finally, we compare the numerical predictions with experimental observations of de-bonding in glued-laminated samples under changing environmental conditions. Keywords: engineered wood, adhesive, material model, FEM analysis, fracture mechanics, delamination
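As a minimal illustration of the additive strain decomposition underlying the constitutive models described above, the sketch below performs a one-dimensional stress update in which the elastic strain is recovered by subtracting the plastic, visco-elastic, mechano-sorptive, and moisture-induced contributions from the total strain. The linear moisture dependence of the stiffness and all numerical values are assumptions for illustration only; the actual model is 3D orthotropic and implemented as an ABAQUS UMAT.

# Illustrative sketch only: a 1D stress update based on additive decomposition of
# the total strain. The moisture dependence of the modulus and all numbers are
# hypothetical; the real model is 3D orthotropic (ABAQUS user subroutine UMAT).
def youngs_modulus(moisture):
    """Assumed linear softening of stiffness with moisture content (fraction)."""
    E_dry = 14000.0  # MPa, illustrative value for beech parallel to grain
    return E_dry * (1.0 - 2.0 * moisture)

def stress_update(eps_total, eps_plastic, eps_visco, eps_mechanosorptive,
                  moisture, alpha_swell=0.3, moisture_ref=0.12):
    """total strain = elastic + plastic + visco-elastic + mechano-sorptive
    + moisture-induced (swelling/shrinkage) strain; the stress follows from
    the elastic part and the moisture-dependent modulus."""
    eps_moisture = alpha_swell * (moisture - moisture_ref)
    eps_elastic = (eps_total - eps_plastic - eps_visco
                   - eps_mechanosorptive - eps_moisture)
    return youngs_modulus(moisture) * eps_elastic

# Example: a wetting step (moisture above the reference value) adds swelling
# strain; if the total strain is restrained, a large compressive stress builds
# up, which is the kind of residual stress that drives delamination.
print(stress_update(0.002, 0.0003, 0.0002, 0.0001, moisture=0.16))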
Procedia PDF Downloads 437
2 Rapid Situation Assessment of Family Planning in Pakistan: Exploring Barriers and Realizing Opportunities
Authors: Waqas Abrar
Abstract:
Background: Pakistan is confronted with a formidable challenge in increasing the uptake of modern contraceptive methods. Through its flagship Maternal and Child Survival Program (MCSP) in Pakistan, USAID is determined to support the provincial Departments of Health and Population Welfare in increasing the contraceptive prevalence rates (CPR) of Sindh, Punjab, and Balochistan to achieve the FP2020 goals. To inform program design and planning, a Rapid Situation Assessment (RSA) of family planning was carried out in the Rawalpindi and Lahore districts of Punjab and the Karachi district of Sindh. Methodology: The methodology consisted of a comprehensive desk review of the available literature and a qualitative approach comprising in-depth interviews (IDIs) and focus group discussions (FGDs). FGDs were conducted with community women, men, and mothers-in-law, whereas IDIs were conducted with health facility in-charges/chiefs, healthcare providers, and community health workers. Results: Some of the oft-quoted reasons captured during the desk review included poor quality of care at public sector facilities, affordability and accessibility constraints in rural communities, and providers' technical incompetence. Moreover, providers had inadequate knowledge of contraceptive methods and lacked counseling techniques, leading to dissatisfied clients and, hence, discontinuation of contraceptive methods. These dissatisfied clients spread myths and misconceptions about contraceptives in their communities, which seriously damages community-level family planning efforts. Private providers were found to be reluctant to insert intrauterine contraceptive devices (IUCDs) due to inadequate knowledge of post-insertion issues and side effects. The FGDs and IDIs unveiled multi-faceted reasons for poor contraceptive uptake. It was found that low education and socio-economic levels lead to low contraceptive uptake, and mostly uneducated women rely on condoms provided by Lady Health Workers (LHWs). Providers had little or no knowledge about postpartum family planning or lactational amenorrhea. At the community level, family planning counseling sessions organized by LHWs and male mobilizers do not sensitize community men to the permissibility of contraception in Islam. Many women attributed their physical ailments to the use of contraceptives. Lack of in-service training, job aids, and Information, Education and Communication (IEC) materials at facilities seriously compromises the quality of care in effective family planning service delivery. This is further compounded by frequent stock-outs of contraceptives at public healthcare facilities, poor data quality, false reporting, and the lack of data verification systems and follow-up. Conclusions: Key conclusions from this assessment included, first, capacity building of healthcare providers on long-acting reversible contraceptives (LARCs), which provide women with contraception over a longer period. Secondly, capacity building of healthcare providers on postpartum family planning is an enormous challenge that can best be addressed through institutionalization. Thirdly, providers should be equipped with counseling skills and techniques, including communication of the pros and cons of all contraceptive methods. Fourthly, printed materials such as job aids and Information, Education and Communication (IEC) materials should be disseminated among healthcare providers and clients.
These conclusions helped MCSP make informed decisions regarding the setting of the project's broad objectives and were duly approved by USAID. Keywords: capacity building, contraceptive prevalence rate, family planning, institutionalization, Pakistan, postpartum care, postpartum family planning services
Procedia PDF Downloads 155
1 Characterization of Aluminosilicates and Verification of Their Impact on Quality of Ceramic Proppants Intended for Shale Gas Output
Authors: Joanna Szymanska, Paulina Wawulska-Marek, Jaroslaw Mizera
Abstract:
Nowadays, the rapid growth of global energy consumption and the uncontrolled depletion of natural resources have become a serious problem. Shale rocks constitute the largest potential global basins of hydrocarbons, which are trapped in the closed pores of the shale matrix. Regardless of the shales' origin, extraction conditions are extremely unfavourable due to high reservoir pressure, great depth, increased clay mineral content, and the limited permeability (in the nanodarcy range) of the rocks. Given such geomechanical barriers, effective extraction of natural gas from shales with plastic zones demands effective stimulation operations. Currently, hydraulic fracturing is the most developed technique; it is based on the injection of pressurized fluid into a wellbore to initiate fracture propagation. However, the rapid pressure drop that follows once the fluid is drawn back to the surface induces fracture closure and a reduction in conductivity. To minimize this risk, proppants should be applied. Proppants are solid granules transported with the fracturing fluid to lodge inside the rock; they prop open the closing fracture so that gas migration to the borehole remains effective. Quartz sands are commonly applied as proppants only in shallow deposits (e.g., in the USA), whereas ceramic proppants are designed to meet rigorous downhole conditions and intensify output. Ceramic granules are distinguished by higher mechanical strength, stability in strongly acidic environments, spherical shape, and homogeneity. The quality of ceramic proppants is conditioned by the selection of raw materials. The aim of this study was to obtain proppants from aluminosilicates (the kaolinite subgroup) and a mix of minerals with a high alumina content. These loamy minerals exhibit a tubular and platy morphology that improves mechanical properties and reduces their specific weight. Moreover, they are distinguished by a well-developed surface area, high porosity, fine particle size, superb dispersion, and nontoxic properties – all crucial for the consolidation of particles into spherical, crush-resistant granules in the mechanical granulation process. The aluminosilicates were mixed with water and a natural organic binder to promote the formation of liquid bridges and pores between particles. Afterward, the green proppants were sintered at high temperatures. Evaluation of the minerals' utility was based on their particle size distribution (laser diffraction) and thermal stability (thermogravimetry). Scanning electron microscopy was used for morphology and shape identification, combined with specific surface area measurement (BET). Chemical composition was verified by energy dispersive spectroscopy and X-ray fluorescence. Moreover, bulk density and specific weight were measured. Such comprehensive characterization of the loamy materials confirmed their favourable impact on proppant granulation. The sintered granules were analyzed by SEM to verify the surface topography and phase transitions after sintering. Pore distribution was identified by X-ray tomography. This method also enabled simulation of proppant settlement in a fracture, while measurement of bulk density was essential to predict the amount of material needed to fill a well. The roundness coefficient was also evaluated, and the impact on the mining environment was assessed via turbidity and acid solubility tests to indicate the risk of material decay in a well. The obtained results confirmed the positive influence of the loamy minerals on ceramic proppant properties with respect to the strict industry norms.
This research opens prospects for producing higher-quality proppants at reduced cost. Keywords: aluminosilicates, ceramic proppants, mechanical granulation, shale gas
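The roundness coefficient mentioned above can be approximated in image analysis by a simple 2D circularity measure on the projected particle outline; the sketch below shows this proxy. It is an illustrative assumption rather than the procedure used in the study (proppant practice commonly relies on Krumbein/Sloss chart comparison), and the area and perimeter values are hypothetical.

# Illustrative sketch only: 2D circularity (4*pi*A / P^2) as a simple proxy for
# particle roundness from a segmented image. The values below are hypothetical,
# and this is not the roundness evaluation procedure used in the study.
import math

def circularity(area, perimeter):
    """Equals 1.0 for a perfect circle and decreases for irregular outlines."""
    return 4.0 * math.pi * area / perimeter ** 2

# A nearly spherical sintered granule vs. a more angular grain
# (areas in px^2, perimeters in px, from a hypothetical segmented SEM image).
print(round(circularity(area=7850.0, perimeter=320.0), 2))  # ~0.96
print(round(circularity(area=5200.0, perimeter=340.0), 2))  # ~0.57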
Procedia PDF Downloads 163