Search results for: expected return
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3733

73 Diabetic Screening in Rural Lesotho, Southern Africa

Authors: Marie-Helena Docherty, Sion Edryd Williams

Abstract:

The prevalence of diabetes mellitus is increasing worldwide. In Sub-Saharan Africa, type 2 diabetes represents over 90% of all types of diabetes, with the number of diabetic patients expected to rise. This represents a huge economic burden in an area already contending with high rates of other significant diseases, including the highest worldwide prevalence of HIV. Diabetic complications considerably impact morbidity and mortality. The epidemiological data for the region report high rates of retinopathy (7-63%), neuropathy (27-66%) and microalbuminuria (10-83%). It is therefore imperative that diabetic screening programmes are established. It is recognised that in many parts of the developing world the implementation and management of such programmes are limited by a lack of available resources. The International Diabetes Federation produced guidelines in 2012 that take these limitations into account, suggesting that all diabetic patients should have access to basic screening. These guidelines are consistent with the national diabetic guidelines produced by the Lesotho Medical Council. However, diabetic care in Lesotho is delivered at the local level, with variable levels of quality. A cross-sectional study was performed in the outpatient department of Maluti Hospital in Mapoteng, Lesotho, a busy rural hospital in the Berea district. Demographic data on gender, age and modality of treatment were collected over a six-week period. Information regarding three basic screening parameters was obtained: eye screening (defined as a documented ophthalmology review within the last 12 months), foot screening (defined as a documented foot health assessment by any health care professional within the last 12 months) and secondary prevention (defined as a documented blood pressure and lipid profile reading within the last 12 months). These parameters were selected on the basis of the absolute minimum level of resources in Maluti Hospital. Renal screening was excluded, as the hospital does not have access to reliable renal profile checks or urinalysis. There is, however, a fully functioning on-site ophthalmology department run by a senior ophthalmologist with the ability to provide retinal photography, retinal surgery and photocoagulation therapy. Data were collected on 183 type 2 diabetics: 112 patients were male and 71 were female, and the average age was 43 years. Four patients were diet controlled, 140 patients were on oral hypoglycaemic agents (metformin and/or glibenclamide), and 39 patients were on a combination of insulin and oral hypoglycaemics. In the preceding 12 months, 5 patients had undergone eye screening (3%), 24 patients had undergone foot screening (13%), and 31 patients had lipid profile testing (17%). All patients had a documented blood pressure reading (100%). Our results show that screening is performed poorly against the basic indicators suggested by the IDF and the Lesotho Medical Council. On the basis of these results, a screening programme was developed using the mnemonic SaFE: secondary prevention, foot and eye care. This is simple, memorable and transferable between healthcare professionals. In the future, the expectation would be to expand upon this current programme to include renal screening, and to further develop screening pertaining to secondary prevention.

Keywords: Africa, complications, rural, screening

Procedia PDF Downloads 264
72 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships

Authors: Vijaya Dixit, Aasheesh Dixit

Abstract:

The shipbuilding industry operates in an Engineer Procure Construct (EPC) context. The product mix of a shipyard comprises various types of ships such as bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture these learnings of a shipyard and incorporate the learning curve effect in project scheduling and materials procurement to improve project performance. Extant literature provides support for the existence of such learnings in an organization. In shipbuilding, there are sequences of similar activities which are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity, but also a decrease in the uncertainty of activity duration. Sister ships have similar material requirements. The same supplier base supplies materials for all the sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships. On the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed. Thus, the material requirement schedule of each subsequent ship differs from that of its predecessor. As more and more ships get constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with project scheduling of long-duration projects for manufacturing of multiple sister ships. It incorporates the learning curve effect on progressively compressing material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs while satisfying budget constraints of various stages of the project. The activity durations and lead times of items are not crisp and are available in the form of probability distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm. Its output provides ordering dates of items and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insights about when and to what degree it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when and to what degree to practice distinct procurement for individual ships.
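The abstract does not specify the exact learning-curve formulation it uses; a minimal sketch, assuming a standard Wright-type learning curve with hypothetical first-unit duration and learning rate, of how successive sister-ship activity durations (and hence material requirement dates) compress:

```python
import math

def wright_duration(first_unit_days: float, unit_index: int, learning_rate: float = 0.9) -> float:
    """Mean duration of the n-th repetition under a Wright learning curve.

    learning_rate = 0.9 means each doubling of repetitions cuts the mean
    duration to 90% of its previous value (an assumed, illustrative value).
    """
    b = math.log(learning_rate, 2)          # learning exponent (negative)
    return first_unit_days * unit_index ** b

# Compressed schedule for four sister ships: each ship's outfitting activity
# starts when the previous one finishes, so later material needs move earlier.
start = 0.0
for ship in range(1, 5):
    duration = wright_duration(first_unit_days=120, unit_index=ship)
    print(f"ship {ship}: start day {start:6.1f}, duration {duration:6.1f}")
    start += duration
```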

Keywords: learning curve, materials management, shipbuilding, sister ships

Procedia PDF Downloads 475
71 Human Identification and Detection of Suspicious Incidents Based on Outfit Colors: Image Processing Approach in CCTV Videos

Authors: Thilini M. Yatanwala

Abstract:

CCTV (Closed-Circuit Television) surveillance systems have been used in public places for decades, and a large variety of data is being produced every moment. However, most CCTV data are stored in isolation and are not integrated across cameras. As a result, identification of the behavior of suspicious people along with their location has become strenuous. This research was conducted to acquire more accurate, reliable and timely information from CCTV video records. The implemented system can identify human objects in public places based on outfit colors. Inter-process communication technologies were used to implement the CCTV camera network to track people on the premises. The research was conducted in three stages: in the first stage, human objects were filtered from other movable objects present in public places; in the second stage, people were uniquely identified based on their outfit colors; and in the third stage, an individual was continuously tracked in the CCTV network. A face detection algorithm was implemented using a cascade classifier based on a trained model to detect human objects. A Haar-feature-based two-dimensional convolution operator was introduced to identify features of the human face such as the eye region, the nose region and the bridge of the nose, based on the darkness and lightness of the facial area. In the second stage, the outfit colors of human objects were analyzed by dividing the body area into upper-left, upper-right, lower-left and lower-right regions. The mean color, mode color and standard deviation of each area were extracted as crucial factors to uniquely identify a human object using a histogram-based approach. Color-based measurements were written to XML files, and separate directories were maintained to store the XML files related to each camera according to time stamp. As the third stage of the approach, inter-process communication techniques were used to implement an acknowledgement-based CCTV camera network to continuously track individuals across a network of cameras. Real-time analysis of the XML files generated at each camera can determine the path of an individual and monitor the full activity sequence. Higher efficiency was achieved by sending and receiving acknowledgments only among adjacent cameras. Suspicious incidents, such as a person staying in a sensitive area for a longer period or a person disappearing from camera coverage, can be detected with this approach. The system was tested on 150 people with an accuracy level of 82%. However, the approach was unable to produce the expected results in the presence of groups of people wearing similar types of outfits. This approach can be applied to any existing camera network without changing the physical arrangement of the CCTV cameras. The study of human identification and suspicious incident detection using outfit color analysis can achieve a higher level of accuracy, and the project will be continued by integrating motion and gait feature analysis techniques to derive more information from CCTV videos.
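As an illustration of the first two stages, a minimal sketch using OpenCV's bundled Haar cascade for face detection and simple per-quadrant color statistics over an assumed outfit region below each detected face; the input file name, region proportions and layout are hypothetical, not the authors' implementation:

```python
import cv2
import numpy as np

# Hypothetical input frame; the frontal-face cascade file ships with OpenCV.
frame = cv2.imread("frame.jpg")
if frame is None:
    raise SystemExit("no input frame found")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Crude outfit region below the detected face (assumed proportions).
    body = frame[y + h : y + 4 * h, max(x - w, 0) : x + 2 * w]
    if body.size == 0:
        continue
    half_h, half_w = body.shape[0] // 2, body.shape[1] // 2
    quadrants = {
        "upper_left":  body[:half_h, :half_w],
        "upper_right": body[:half_h, half_w:],
        "lower_left":  body[half_h:, :half_w],
        "lower_right": body[half_h:, half_w:],
    }
    for name, region in quadrants.items():
        pixels = region.reshape(-1, 3)
        # Mean and standard deviation per BGR channel for this quadrant.
        print(name, "mean:", pixels.mean(axis=0).round(1), "std:", pixels.std(axis=0).round(1))
```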

Keywords: CCTV surveillance, human detection and identification, image processing, inter-process communication, security, suspicious detection

Procedia PDF Downloads 153
70 Heat Transfer Phenomena Identification of a Non-Active Floor in a Stack-Ventilated Building in Summertime: Empirical Study

Authors: Miguel Chen Austin, Denis Bruneau, Alain Sempey, Laurent Mora, Alain Sommier

Abstract:

An experimental study in a Plus Energy House (PEH) prototype was conducted in August 2016. It aimed to highlight the energy charge and discharge of a concrete-slab floor subjected to day-night-cycle heat exchanges in the southwestern part of France, and to identify the heat transfer phenomena that take place in both processes: charge and discharge. The main features of this PEH, significant to this study, are the following: (i) a non-active slab covering the major part of the entire floor surface of the house, which includes a 68 mm thick concrete layer as its upper layer; (ii) solar window shades located on the north and south facades, along with a large eave facing south; (iii) large double-glazed windows covering the majority of the south facade; (iv) a natural ventilation system (NVS) composed of ten automated openings with different dimensions: four located on the south facade, four on the north facade and two on the shed roof (north-oriented). To highlight the energy charge and discharge processes of the non-active slab, heat flux and temperature measurement techniques were implemented, along with airspeed measurements. Ten “measurement-poles” (MP) were distributed all over the concrete-floor surface. Each MP represented a zone of measurement, where air and surface temperatures, and convection and radiation heat fluxes, were intended to be measured. The airspeed was measured only at two points over the slab surface, near the south facade. To identify the heat transfer phenomena that take place in the charge and discharge processes, relevant dimensionless parameters were used, along with statistical analysis; heat transfer phenomena were identified based on this analysis. Experimental data, after processing, showed that two periods could be identified at a glance: charge (heat gain, positive values) and discharge (heat losses, negative values). During the charge period, on the floor surface, radiation heat exchanges were significantly higher compared with convection. On the other hand, convection heat exchanges were significantly higher than radiation in the discharge period. Spatially, both convection and radiation heat exchanges were higher near the natural ventilation openings and smaller far from them, as expected. Experimental correlations were determined using a linear regression model, showing the relation between the Nusselt number and relevant parameters: the Peclet, Rayleigh, and Richardson numbers. This led to the determination of the convective heat transfer coefficient and its comparison with the convective heat transfer coefficient resulting from measurements. Results show that forced and natural convection coexist during the discharge period; more accurate correlations with the Peclet number than with the Rayleigh number were found. This may suggest that forced convection is stronger than natural convection. Yet, the airspeed levels encountered suggest that natural convection should take place rather than forced convection. Despite this, the Richardson number values encountered indicate otherwise. During the charge period, air-velocity levels might indicate that no air motion occurs, which might lead to heat transfer by diffusion instead of convection.
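For illustration, a minimal sketch of how the dimensionless groups named above can be computed and a Nusselt-number correlation fitted by linear regression in log space; the measurement values, fluid properties and characteristic length below are made-up placeholders, not the study's data:

```python
import numpy as np

# Illustrative (invented) measurements: air speed U [m/s], surface-air
# temperature difference dT [K], and measured convective coefficient h [W/m2K].
U  = np.array([0.05, 0.10, 0.20, 0.35, 0.50])
dT = np.array([1.5, 2.0, 2.5, 3.0, 3.5])
h  = np.array([2.1, 2.9, 3.8, 4.9, 5.8])

# Assumed air properties and slab characteristic length.
L, k, nu, alpha, g, beta = 4.0, 0.026, 1.6e-5, 2.2e-5, 9.81, 1 / 300.0

Nu = h * L / k                              # Nusselt number
Pe = U * L / alpha                          # Peclet number (forced convection)
Ra = g * beta * dT * L**3 / (nu * alpha)    # Rayleigh number (natural convection)
Ri = (g * beta * dT * L) / U**2             # Richardson number (natural vs. forced)

# Power-law correlation Nu = C * Pe^n, fitted as a linear regression in log space.
n, logC = np.polyfit(np.log(Pe), np.log(Nu), 1)
print(f"Nu = {np.exp(logC):.3f} * Pe^{n:.3f}")
print("Richardson numbers:", Ri.round(2))
```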

Keywords: heat flux measurement, natural ventilation, non-active concrete slab, plus energy house

Procedia PDF Downloads 394
69 A Textile-Based Scaffold for Skin Replacements

Authors: Tim Bolle, Franziska Kreimendahl, Thomas Gries, Stefan Jockenhoevel

Abstract:

The therapeutic treatment of extensive, deep wounds is limited. Autologous split-skin grafts are used as the so-called ‘gold standard’. The most common deficits are the defects at the donor site, the risk of scarring, as well as the limited availability and quality of the autologous grafts. The aim of this project is a tissue-engineered dermal-epidermal skin replacement to overcome the limitations of the gold standard. A key requirement for the development of such a three-dimensional implant is the formation of a functional capillary-like network inside the implant to ensure a sufficient nutrient and gas supply. Tailored three-dimensional warp-knitted spacer fabrics are used to reinforce the mechanically weak fibrin gel-based scaffold and, further, to create a directed in vitro pre-vascularization along the parallel-oriented pile yarns within a co-culture. In this study, various three-dimensional warp-knitted spacer fabrics were developed in a factorial design to analyze the influence of machine parameters such as the stitch density and the pattern of the fabric on the scaffold performance, and further to determine suitable parameters for successful fibrin gel incorporation and a physiological performance of the scaffold. The fabrics were manufactured on a Karl Mayer double-bar raschel machine DR 16 EEC/EAC. A fine machine gauge of E30 was used to ensure a high pile yarn density for sufficient nutrient, gas and waste exchange. In order to ensure a high mechanical stability of the graft, the fabrics were made of biocompatible PVDF yarns. Key parameters such as the pore size, porosity and stress/strain behavior were investigated under standardized, controlled climate conditions. The influence of the input parameters on the mechanical and morphological properties, as well as on the ability to incorporate fibrin gel into the spacer fabric, was analyzed. Subsequently, the pile yarns of the spacer fabrics were colonized with Human Umbilical Vein Endothelial Cells (HUVEC) to analyze the ability of the fabric to further function as a guiding structure for a directed vascularization. The cells were stained with DAPI and investigated using fluorescence microscopy. The analysis revealed that the stitch density and the binding pattern have a strong influence on both the mechanical and morphological properties of the fabric. As expected, the incorporation of the fibrin gel was significantly improved with higher pore sizes and porosities, whereas the mechanical strength decreased. Furthermore, the colonization trials revealed a high cell distribution and density on the pile yarns of the spacer fabrics. For a tailored reinforcing structure, the minimum porosity and pore size that still ensure a complete incorporation of the reinforcing structure into the fibrin gel matrix need to be evaluated. That will enable a mechanically stable dermal graft with a dense vascular network for a sufficient nutrient and oxygen supply to the cells. The results are promising for subsequent research in the field of reinforcing mechanically weak biological scaffolds and developing functional three-dimensional scaffolds with an oriented pre-vascularization.

Keywords: fibrin-gel, skin replacement, spacer fabric, pre-vascularization

Procedia PDF Downloads 231
68 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences from post-test scores on test Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%. Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resultant skewness and kurtosis worsened compared to the raw score parameters. Form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
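A minimal sketch of the linear-equating step described above, using the standard mean-slope formula y = mu_old + (sigma_old/sigma_new)(x - mu_new) on hypothetical raw-score samples; this is illustrative only and not the study's data or code:

```python
import numpy as np

def linear_equate(scores_new, new_form_sample, old_form_sample):
    """Linearly equate raw scores from a new form onto an old form's scale.

    Standard linear-equating transformation based on sample means and
    standard deviations; a sketch, not the study's implementation.
    """
    mu_new, sd_new = np.mean(new_form_sample), np.std(new_form_sample, ddof=1)
    mu_old, sd_old = np.mean(old_form_sample), np.std(old_form_sample, ddof=1)
    slope = sd_old / sd_new
    return mu_old + slope * (np.asarray(scores_new) - mu_new)

# Hypothetical samples of raw scores on Form R (new) and Form Q (reference).
form_r = np.random.default_rng(0).normal(22, 5, 70)
form_q = np.random.default_rng(1).normal(25, 6, 64)
print(linear_equate([18, 22, 30], form_r, form_q).round(1))
```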

Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating

Procedia PDF Downloads 172
67 The Impact of Spirituality on the Voluntary Simplicity Lifestyle Tendency: An Explanatory Study on Turkish Consumers

Authors: Esna B. Buğday, Niray Tunçel

Abstract:

Spirituality has a motivational influence on consumers' psychological states, lifestyles, and behavioral intentions. Spirituality refers to the feeling that there is a divine power greater than ourselves and a connection among oneself, others, nature, and the sacred. In addition, spirituality concerns the human soul and spirit as opposed to the material and physical world, and consists of three dimensions: self-discovery, relationships, and belief in a higher power. Of these, self-discovery refers to exploring the meaning and purpose of life. Relationships refer to the awareness of the connection between human beings and nature, as well as respect for them. Higher power represents the transcendent aspect of spirituality, meaning belief in a holy power that created all the systems in the universe. Furthermore, a voluntary simplicity lifestyle is (1) to adopt a simple lifestyle by minimizing the attachment to and the consumption of material things and possessions, (2) to have an ecological awareness respecting all living creatures, and (3) to express the desire for exploring and developing the inner life. Voluntary simplicity is a multi-dimensional construct that consists of a desire for a voluntarily simple life (e.g., avoiding excessive consumption), cautious attitudes in shopping (e.g., not buying unnecessary products), acceptance of self-sufficiency (e.g., being a self-sufficient individual), and rejection of highly developed functions of products (e.g., a preference for products with simple functions). One of the main reasons for living simply is to sustain a spiritual life, as voluntary simplicity provides the space for achieving psychological and spiritual growth and for cultivating self-reliance, since voluntary simplifiers free themselves from overwhelming externals and take control of their daily lives. From this point of view, it is expected that people with a strong sense of spirituality will be likely to adopt a simple lifestyle. In this respect, the study aims to examine the impact of spirituality on consumers' voluntary simple lifestyle tendencies. As consumers' consumption attitudes and behaviors depend on their lifestyles, exploring the factors that lead them to embrace voluntary simplicity helps predict their purchase behavior. To this end, this study presents empirical research based on a data set collected from 478 Turkish consumers through an online survey. First, exploratory factor analysis is applied to the data to reveal the dimensions of the spirituality and voluntary simplicity scales. Second, confirmatory factor analysis is conducted to assess the measurement model. Last, the hypotheses are analyzed using partial least squares structural equation modeling (PLS-SEM). The results confirm that spirituality's self-discovery and relationships dimensions positively impact both the cautious attitudes in shopping and the acceptance of self-sufficiency dimensions of voluntary simplicity. In contrast, belief in a higher power does not significantly influence consumers' voluntary simplicity tendencies. Even though there has been theoretical support drawing a positive relationship between spirituality and voluntary simplicity, to the best of the authors' knowledge, this has not been empirically tested in the literature before. Hence, this study contributes to the current knowledge by analyzing the direct influence of spirituality on consumers' voluntary simplicity tendencies. Additionally, analyzing this impact on the consumers of an emerging market is another contribution to the literature.

Keywords: spirituality, voluntary simplicity, self-sufficiency, conscious shopping, Turkish consumers

Procedia PDF Downloads 131
66 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach

Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert

Abstract:

Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition by proposing a switch from a centralized district heating system towards a distributed heat-pump-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is related to the fact that existing networks are limited with regard to their installed capacities. Additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, has to do with the fact that the indoor comfort conditions can become difficult to handle when the operation of the heat pumps is limited by a risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims at assessing the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted. In this way, each energy technology is modeled in its customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used to model the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to represent the actual heating technology. With the models set in place, different scenarios based on forecasted electricity market prices were developed, both for present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and the comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of levelized cost of heat. This indicator enables a techno-economic comparison among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios related to the current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase up to 40%. Within the future scenarios, if electricity prices increase, as is most likely expected within the next decade, the penetration of distributed heat pumps may be limited to 15%. In terms of levelized cost of heat, a residential heat pump technology becomes competitive only within a scenario of decreasing electricity prices. In this case, a district heating system is characterized by an average cost of heat generation 7% higher compared to a distributed heat pump option.
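The abstract does not give its cost inputs; a minimal sketch of the levelized cost of heat calculation it refers to, with purely hypothetical capital cost, operating cost, heat output and discount rate:

```python
def levelized_cost_of_heat(capex, annual_opex, annual_heat_mwh, years, discount_rate):
    """Levelized cost of heat [currency/MWh]: discounted lifetime costs divided
    by discounted lifetime heat output. Illustrative formula only; the inputs
    below are assumptions, not the study's actual cost data."""
    disc_costs = capex + sum(annual_opex / (1 + discount_rate) ** t for t in range(1, years + 1))
    disc_heat = sum(annual_heat_mwh / (1 + discount_rate) ** t for t in range(1, years + 1))
    return disc_costs / disc_heat

# Hypothetical comparison of a residential heat pump vs. a district heating connection.
print("heat pump    :", round(levelized_cost_of_heat(9000, 450, 12, 20, 0.05), 1), "per MWh")
print("district heat:", round(levelized_cost_of_heat(3000, 900, 12, 20, 0.05), 1), "per MWh")
```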

Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems

Procedia PDF Downloads 126
65 Fair Federated Learning in Wireless Communications

Authors: Shayan Mohajer Hamidi

Abstract:

Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
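The abstract does not give the aggregation details; a minimal sketch, under an assumed weighting exponent and noise scale, of what a fairness-aware weighted aggregation step with Gaussian differential-privacy noise could look like:

```python
import numpy as np

def fair_private_aggregate(client_updates, client_sizes, noise_std=0.01, alpha=0.5):
    """Aggregate client gradient updates with fairness-aware weights and
    Gaussian noise for differential privacy.

    alpha < 1 flattens the usual size-proportional FedAvg weights so that
    small-data clients gain relative influence; noise_std stands in for a
    calibrated DP noise scale. Both values are illustrative assumptions.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes ** alpha
    weights /= weights.sum()
    stacked = np.stack(client_updates)                  # shape: (clients, params)
    aggregate = (weights[:, None] * stacked).sum(axis=0)
    return aggregate + np.random.normal(0.0, noise_std, size=aggregate.shape)

# Three hypothetical clients with imbalanced data (1000, 100, 20 samples).
updates = [np.random.randn(5) for _ in range(3)]
print(fair_private_aggregate(updates, [1000, 100, 20]).round(3))
```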

Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization

Procedia PDF Downloads 50
64 Affordable and Environmental Friendly Small Commuter Aircraft Improving European Mobility

Authors: Diego Giuseppe Romano, Gianvito Apuleo, Jiri Duda

Abstract:

Mobility is one of the most important societal needs for amusement, business activities and health. Thus, transport needs are continuously increasing, with the consequent increase in traffic congestion and pollution. Aeronautic efforts aim at smarter use of infrastructure and at introducing greener concepts. A possible solution to address the abovementioned topics is the development of a Small Air Transport (SAT) system, able to guarantee operability from today's underused airfields in an affordable and green way, while also helping to reduce travel time. In the framework of Horizon 2020, the EU (European Union) has funded the Clean Sky 2 SAT TA (Transverse Activity) initiative to address market innovations able to reduce SAT operational cost and environmental impact while ensuring good levels of operational safety. Nowadays, most of the key technologies to improve passenger comfort and to reduce community noise, DOC (Direct Operating Costs) and pilot workload for SAT have reached an intermediate level of maturity, TRL (Technology Readiness Level) 3/4. Thus, the key technologies must be developed, validated and integrated on dedicated ground and flying aircraft demonstrators to reach higher TRL levels (5/6). In particular, SAT TA focuses on the integration at aircraft level of the following technologies [1]: 1) low-cost composite wing box and engine nacelle using OoA (Out of Autoclave) technology, LRI (Liquid Resin Infusion) and advanced automation processes; 2) innovative high-lift devices, allowing aircraft operations from short airfields (< 800 m); 3) affordable small aircraft manufacturing of metallic fuselage using FSW (Friction Stir Welding) and LMD (Laser Metal Deposition); 4) affordable fly-by-wire architecture for small aircraft (CS23 certification rules); 5) more electric systems replacing pneumatic and hydraulic systems (high voltage EPGDS -Electrical Power Generation and Distribution System-, hybrid de-ice system, landing gear and brakes); 6) advanced avionics for small aircraft, reducing pilot workload; 7) advanced cabin comfort with new interior materials and more comfortable seats; 8) a new generation of turboprop engine with reduced fuel consumption, emissions, noise and maintenance costs for 19-seat aircraft; 9) an alternative diesel engine for 9-seat commuter aircraft. To address the abovementioned market innovations, two different platforms have been designed: the Reference and the Green aircraft. The Reference aircraft is a virtual aircraft designed considering 2014 technologies with an existing engine assuring the requested take-off power; the Green aircraft is designed by integrating the technologies addressed in Clean Sky 2. Preliminary integration of the proposed technologies shows an encouraging reduction of emissions and operational costs of small aircraft: about 20% CO2 reduction, about 24% NOx reduction, about 10 dB(A) noise reduction at the measurement point and about 25% DOC reduction. A detailed description of the performed studies, analyses and validations for each technology, as well as the expected benefits at aircraft level, is reported in the present paper.

Keywords: affordable, European, green, mobility, technologies development, travel time reduction

Procedia PDF Downloads 79
63 Impact of Water Interventions under WASH Program in the South-west Coastal Region of Bangladesh

Authors: S. M. Ashikur Elahee, Md. Zahidur Rahman, Md. Shofiqur Rahman

Abstract:

This study evaluated the impact of different water interventions under the WASH program on households' access to safe drinking water. Following a survey method, the study was carried out in two Upazilas of the south-west coastal region of Bangladesh, namely Koyra in Khulna district and Shymnagar in Satkhira district. Being an explanatory study, a total of 200 households selected by applying a random sampling technique were interviewed using a structured interview schedule. The predicted probability suggests that around 62 percent of households lack year-round access to safe drinking water, while only 25 percent of households have access at the SPHERE standard (913 liters per person per year). Moreover, the majority (78 percent) of households do not have access by both indicators simultaneously. The distance from the household residence to the water source varies from 0 to 25 kilometers, with an average distance of 2.03 kilometers. The study also reveals that an increase in monthly income of around BDT 1,000 leads to an additional 11 liters (coefficient 0.01 at p < 0.1) of safe drinking water consumption per person per year. As expected, lining-up time has a significant negative relationship with the dependent variables, i.e., for higher lining-up time, the probability of access for both the SPHERE standard and year-round access variables becomes lower. According to the ordinary least squares (OLS) regression results, water consumption decreases by 93 liters per person per year if one member is added to the household. Regarding water consumption intensity, the ordered logistic regression (OLR) model shows that a one-minute increase in lining-up time for water collection tends to reduce water consumption intensity. Correspondingly, as per the OLS regression results, for a one-minute increase in lining-up time, water consumption decreases by around 8 liters. Considering access to a Deep Tube Well (DTW) as the reference dummy, in the OLR, households under Pond Sand Filter (PSF), Shallow Tube Well (STW), Reverse Osmosis (RO) and Rainwater Harvester System (RWHS) are respectively 37 percent, 29 percent, 61 percent and 27 percent less likely to ensure year-round access to water. In terms of health impact, different types of water-borne diseases like diarrhea, cholera, and typhoid are common among the coastal community, caused by microbial impurities, i.e., bacteria and protozoa. High turbidity and TDS in pond water, caused by reduced water depth and the presence of suspended particles and inorganic salts, stimulate the growth of bacteria, protozoa, and algae, causing health hazards. Meanwhile, excessive growth of algae in pond water caused by excessive nitrate in drinking water adversely affects child health. To ensure access at the SPHERE standard, the number of water interventions needs to be increased at a reasonable distance, preferably within half a kilometer of the dwelling place, ensuring that community people are involved in the installation process, since collectively owned water interventions are found to be more effective than privately owned ones. In addition, a demand-responsive approach to the supply of piped water should be adopted to allow consumer demand to guide investment in domestic water supply in the future.
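As a minimal illustration of the kind of OLS specification described above, a sketch with simulated data (not the study's data; variable names and coefficients are invented), where income, lining-up time and household size predict per-person yearly consumption via statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
income = rng.normal(10000, 3000, n)       # monthly income, BDT (hypothetical)
lineup = rng.exponential(15, n)           # lining-up time, minutes (hypothetical)
members = rng.integers(2, 9, n)           # household size (hypothetical)

# Simulated yearly consumption per person, liters; the coefficients are made up.
consumption = 700 + 0.01 * income - 8 * lineup - 90 * members + rng.normal(0, 80, n)

X = sm.add_constant(np.column_stack([income, lineup, members]))
model = sm.OLS(consumption, X).fit()
print(model.params.round(3))   # intercept, income, lining-up time, household size
```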

Keywords: access, impact, safe drinking water, Sphere standard, water interventions

Procedia PDF Downloads 194
62 Stent Surface Functionalisation via Plasma Treatment to Promote Fast Endothelialisation

Authors: Irene Carmagnola, Valeria Chiono, Sandra Pacharra, Jochen Salber, Sean McMahon, Chris Lovell, Pooja Basnett, Barbara Lukasiewicz, Ipsita Roy, Xiang Zhang, Gianluca Ciardelli

Abstract:

Thrombosis and restenosis after a stenting procedure can be prevented by promoting fast stent wall endothelialisation. It is well known that surface functionalisation with antifouling molecules combined with extracellular matrix proteins is a promising strategy to design biomimetic surfaces able to promote fast endothelialization. In particular, REDV has gained much attention for its ability to enhance rapid endothelialization due to its specific affinity with endothelial cells (ECs). In this work, a two-step plasma treatment was performed to polymerize a thin layer of acrylic acid, used to subsequently graft PEGylated-REDV and polyethylene glycol (PEG) at different molar ratios, with the aim of selectively promoting endothelial cell adhesion while avoiding platelet activation. PEGylated-REDV was provided by Biomatik and is formed by 6 PEG monomer repetitions (Chempep Inc.), with an NH2 terminal group. PEG polymers were purchased from Chempep Inc. with two different chain lengths: m-PEG6-NH2 (295.4 Da) with 6 monomer repetitions and m-PEG12-NH2 (559.7 Da) with 12 monomer repetitions. Plasma activation was obtained by operating at 50 W power, 5 min of treatment and an Ar flow rate of 20 sccm. Pure acrylic acid (99%, AAc) vapors were diluted in Ar (flow = 20 sccm) and polymerized by a pulsed plasma discharge, applying a discharge RF power of 200 W and a duty cycle of 10% (on time = 10 ms, off time = 90 ms) for 10 min. After plasma treatment, samples were dipped into a 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide (EDC)/N-hydroxysuccinimide (NHS) solution (ratio 4:1, pH 5.5) for 1 h at 4°C and subsequently dipped in PEGylated-REDV and PEGylated-REDV:PEG solutions at different molar ratios (100 μg/mL in PBS) for 20 h at room temperature. Surface modification was characterized through physico-chemical analyses and in vitro cell tests. The PEGylated-REDV peptide and PEG were successfully bound to the carboxylic groups that are formed on the polymer surface after the plasma reaction. FTIR-ATR spectroscopy, X-ray Photoelectron Spectroscopy (XPS) and contact angle measurements gave a clear indication of the presence of the grafted molecules. The use of PEG as a spacer allowed for an increase in the wettability of the surface, and the effect was more evident with increasing amounts of PEG. Endothelial cells adhered and spread well on the surfaces functionalized with the REDV sequence. In conclusion, a selective coating able to promote a new endothelial cell layer on polymeric stent surfaces was developed. In particular, a thin AAc film was polymerised on the polymeric surface in order to expose –COOH groups, and PEGylated-REDV and PEG were successfully grafted onto the polymeric substrates. The REDV peptide was shown to encourage cell adhesion, with a consequent, expected improvement of the hemocompatibility of these polymeric surfaces in vivo. Acknowledgements: This work was funded by the European Commission 7th Framework Programme under grant agreement number 604251-ReBioStent (Reinforced Bioresorbable Biomaterials for Therapeutic Drug Eluting Stents). The authors thank all the ReBioStent partners for their support in this work.

Keywords: endothelialisation, plasma treatment, stent, surface functionalisation

Procedia PDF Downloads 283
61 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods in statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed with causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, this study utilized a big data predictive analytics platform™ based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows in a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training records are plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example, predictors associated with sociodemographics, habits, and activities. Some are intentionally included to get predictive analytics insights on variable selection, such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset including 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods utilizing a subset of observations, such as bootstrap resampling with an appropriate sample size.
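A minimal sketch of the co-occurrence idea on a toy basket data set, ranking item pairs by a "surprise" score (observed co-occurrence vs. the frequency expected if the items occurred independently); the item names and data are invented for illustration and do not come from the study:

```python
from collections import Counter
from itertools import combinations

# Hypothetical transactions (e.g., risk factors observed per person).
baskets = [
    {"high_bp", "obesity", "smoking"},
    {"high_bp", "obesity"},
    {"obesity", "low_activity"},
    {"high_bp", "obesity", "low_activity"},
    {"smoking", "low_activity"},
]

n = len(baskets)
item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(frozenset(p) for b in baskets for p in combinations(sorted(b), 2))

# Rank item pairs by surprise = observed co-occurrence / expected under independence.
ranked = []
for pair, observed in pair_counts.items():
    a, b = tuple(pair)
    expected = item_counts[a] * item_counts[b] / n
    ranked.append((observed / expected, a, b, observed))

for surprise, a, b, observed in sorted(ranked, reverse=True):
    print(f"{a} & {b}: observed={observed}, surprise={surprise:.2f}")
```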

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 247
60 Functions and Challenges of New County-Based Regional Plan in Taiwan

Authors: Yu-Hsin Tsai

Abstract:

A new, mandated county regional plan system has been implemented nationwide in Taiwan since 2010, with its role situated between the policy-led cross-county regional plan and the blueprint-led city plan. This new regional plan contains both urban and rural areas in a single plan, which provides a more complete planning territory, i.e., the city region within the county’s jurisdiction, and can be executed and managed effectively by the county government. However, the full picture of its functions and characteristics still seems not totally clear compared with other levels of plans; neither are the planning goals and issues that can be most appropriately dealt with at this spatial scale. In addition, the extent to which sustainability ideals and measures to cope with climate change are included is unclear. Based on the above issues, this study aims to clarify the roles of the county regional plan, to analyze the extent to which the measures address sustainability, climate change, and the forecasted declining population, and to identify the success factors and issues faced in the planning process. The methodology applied includes literature review, plan quality evaluation, and interviews with officials of the central and local governments and with the urban planners involved, for all 23 counties in Taiwan. The preliminary research results show, first, that growth-management-related policies have been widely implemented and are expected to be effective, including incorporating resource capacity to determine the maximum population for the city region as a whole, developing an overall vision of the urban growth boundary for the whole city region, prioritizing infill development, and preferring the use of architectural land within urbanized areas over rural areas to cope with urban growth. Secondly, planning-oriented zoning is adopted in urban areas, while demand-oriented planning permission is applied in the rural areas with designated plans. Then, public participation has evolved to the next level, overseeing all of the government’s planning and review processes, due to decreasing trust in the government and the development of public forums on the internet, etc. Next, fertile agricultural land is preserved to maintain the food self-sufficiency goal out of national security concerns. More adaptation-based methods than mitigation-based methods have been applied to cope with global climate change. Finally, better land use and transportation planning, in terms of avoiding the development of rail transit stations and corridors in rural areas, is promoted. Even though many promising, prompt measures have been adopted, challenges remain. First, overall urban density, which likely affects the success of the UGB or the use of rural agricultural land, has not been incorporated, possibly due to implementation difficulties. Second, land-use-related measures for mitigating climate change seem less clear and hence less employed. Smart decline has not drawn enough attention as a way to cope with the predicted population decrease in the next decade. Then, some reluctance from county governments to implement the county regional plan can be vaguely observed, possibly because limits have been set on further development of agricultural land and sensitive areas. Finally, resolving the issue of existing illegal factories on agricultural land remains the most challenging dilemma.

Keywords: city region plan, sustainability, global climate change, growth management

Procedia PDF Downloads 323
59 Continued Usage of Wearable Fitness Technology: An Extended UTAUT2 Model Perspective

Authors: Rasha Elsawy

Abstract:

Aside from the rapid growth of global information technology and the Internet, another key trend is the swift proliferation of wearable technologies, an emerging revolution with a bright future. Beyond this, individual continuance intention toward IT is an important area that has drawn academics' and practitioners' attention. The literature review shows that continuance usage is an important concern that needs to be addressed for any technology to be advantageous and for consumers to succeed. However, consumers noticeably abandon their wearable devices soon after purchase, losing all subsequent benefits that can only be achieved through continued usage. Purpose: This thesis aims to develop an integrated model designed to explain and predict consumers' behavioural intention (BI) and continued use (CU) of wearable fitness technology (WFT), in order to identify the determinants of the CU of technology. Because of this, the question arises as to whether there are differences between technology adoption and post-adoption (CU) factors. Design/methodology/approach: The study employs the unified theory of acceptance and use of technology 2 (UTAUT2), which has the best explanatory power, as an underpinning framework, extending it with further factors, along with user-specific personal characteristics as moderators. All items will be adapted from previous literature and slightly modified according to the WFT/SW context. A longitudinal investigation will be carried out to examine the research model, wherein a survey will include the constructs involved in the conceptual model. A quantitative approach based on a questionnaire survey will collect data from existing wearable technology users. Data will be analysed using the structural equation modelling (SEM) method with IBM SPSS Statistics and AMOS 28.0. Findings: The research findings will provide unique perspectives on user behaviour, intention, and actual continuance usage when accepting WFT. Originality/value: Unlike previous works, the current thesis comprehensively explores the factors that affect consumers' decisions to continue using wearable technology. This decision is influenced by technological/utilitarian, affective, emotional, psychological, and social factors, along with the role of the proposed moderators. A novel research framework is proposed by extending the UTAUT2 model with additional contextual variables classified into Performance Expectancy, Effort Expectancy, Social Influence (societal pressure regarding body image), Facilitating Conditions, Hedonic Motivation (split into two concepts: perceived enjoyment and perceived device annoyance), Price Value, and Habit-forming techniques, and by adding technology upgradability as a determinant of consumers' behavioural intention and continuance usage of Information Technology (IT). Further, personality traits theory is used to propose relevant user-specific personal characteristics (openness to technological innovativeness, conscientiousness in health, extraversion, neuroticism, and agreeableness) as moderators of the research model. Thus, the present thesis aims to obtain a more convincing explanation, which is expected to provide theoretical foundations for future research on emerging IT (such as wearable fitness devices) from a behavioural perspective.

Keywords: wearable technology, wearable fitness devices/smartwatches, continuance use, behavioural intention, upgradability, longitudinal study

Procedia PDF Downloads 83
58 Photobleaching Kinetics and Epithelial Distribution of Hexylaminolevulinate-Induced PpIX in Rat Bladder Cancer

Authors: Sami El Khatib, Agnès Leroux, Jean-Louis Merlin, François Guillemin, Marie-Ange D’Hallewin

Abstract:

Photodynamic therapy (PDT) is a treatment modality based on the cytotoxic effect occurring in the target tissues by interaction of a photosensitizer with light in the presence of oxygen. One of the major advances in PDT can be attributed to the use of topical aminolevulinic acid (ALA) to induce Protoporphyrin IX (PpIX) for the treatment of early stage cancers as well as for diagnosis. ALA is a precursor of the heme synthesis pathway. Locally delivered to the target tissue, ALA overcomes the negative feedback exerted by heme and promotes the transient formation of PpIX in situ to reach critical effective levels in cells and tissue. Whereas the early steps of the heme pathway occur in the cytosol, PpIX synthesis takes place in the mitochondrial membranes, and PpIX fluorescence is expected to accumulate in close vicinity of the initial building site and to progressively diffuse to the neighboring cytoplasmic compartment or other lipophilic organelles. PpIX is known to be highly reactive and is degraded when irradiated with light. PpIX photobleaching is believed to be governed by a singlet-oxygen-mediated mechanism in the presence of oxidized amino acids and proteins. PpIX photobleaching and subsequent spectral phototransformation have been described widely in tumor cells incubated in vitro with ALA solution, or ex vivo in human and porcine mucosa superfused with hexylaminolevulinate (hALA). PpIX photobleaching was also studied in vivo, using animal models such as normal or tumor mouse skin and the orthotopic rat bladder model. Hexyl aminolevulinate, a more potent lipophilic derivative of ALA, was proposed as an adjunct to standard cystoscopy in the fluorescence diagnosis of bladder cancer and other malignancies. We have previously reported the effectiveness of hALA-mediated PDT of rat bladder cancer. Although normal and tumor bladder epithelium exhibit similar fluorescence intensities after intravesical instillation of two hALA concentrations (8 and 16 mM), the therapeutic response at 8 mM and 20 J/cm2 was completely different from the one observed at 16 mM irradiated with the same light dose. Whereas after an 8 mM instillation the tumor is destroyed, leaving the underlying submucosa and muscle intact, 16 mM sensitization and subsequent illumination result in the complete destruction of the underlying bladder wall but leave the tumor undamaged. The objective of the current study is to try to unravel the underlying mechanism of this apparent contradiction. PpIX extraction showed identical amounts of photosensitizer in tumor-bearing bladders at both concentrations. Photobleaching experiments revealed mono-exponential decay curves in both situations, but with a two times faster decay constant in the case of 16 mM bladders. Fluorescence microscopy shows an identical fluorescence pattern, with bright spots, for normal bladders at both concentrations and for tumor bladders at 8 mM. Tumor bladders at 16 mM exhibit a more diffuse cytoplasmic fluorescence distribution. The different response to PDT with regard to the initial pro-drug concentration can thus be attributed to the different cellular localization.
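For illustration, a minimal sketch of fitting a mono-exponential photobleaching decay curve, of the kind mentioned above, with scipy; the dose and fluorescence values are invented placeholders, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(dose, f0, k):
    """Mono-exponential photobleaching: fluorescence vs. delivered light dose."""
    return f0 * np.exp(-k * dose)

# Hypothetical fluorescence readings (a.u.) at increasing light doses (J/cm2).
dose = np.array([0, 2, 5, 10, 15, 20], dtype=float)
fluo = np.array([1.00, 0.82, 0.61, 0.37, 0.23, 0.14])

(f0, k), _ = curve_fit(mono_exponential, dose, fluo, p0=(1.0, 0.1))
print(f"initial fluorescence ~ {f0:.2f} a.u., decay constant ~ {k:.3f} cm2/J")
```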

Keywords: bladder cancer, hexyl-aminolevulinate, photobleaching, confocal fluorescence microscopy

Procedia PDF Downloads 379
57 Testing a Dose-Response Model of Intergenerational Transmission of Family Violence

Authors: Katherine Maurer

Abstract:

Background and purpose: Violence that occurs within families is a global social problem. Children who are victims of or witnesses to family violence are at risk for many negative effects, both proximally and distally. One of the most disconcerting long-term effects occurs when child victims become adult perpetrators: the intergenerational transmission of family violence (ITFV). Early identification of the children most at risk for ITFV is needed to inform interventions to prevent future family violence perpetration and victimization. Only about 25-30% of child family violence victims become perpetrators of adult family violence (either child abuse, partner abuse, or both). Prior research has primarily used dichotomous measures of exposure (yes; no) to predict ITFV, given the low incidence rate in community samples. It is often assumed that exposure to greater amounts of violence predicts greater risk of ITFV. However, no previous longitudinal study with a community sample has tested a dose-response model of exposure to physical child abuse and parental physical intimate partner violence (IPV) using count data on the frequency and severity of violence to predict adult ITFV. The current study used advanced statistical methods to test whether increased childhood exposure predicts greater risk of ITFV. Methods: The study utilized three panels of prospective data from a cohort of 15-year-olds (N=338) from the Project on Human Development in Chicago Neighborhoods longitudinal study. The data comprised a stratified probability sample of seven ethnic/racial categories and three socio-economic status levels. Structural equation modeling was employed to test a hurdle regression model of dose-response to predict ITFV. A version of the Conflict Tactics Scale was used to measure physical violence victimization, witnessing of parental IPV, and young adult IPV perpetration and victimization. Results: Consistent with previous findings, past-12-month incidence rates of the severity and frequency of interpersonal violence were highly skewed. While rates of parental and young adult IPV were about 40%, an unusually high rate of physical child abuse (57%) was reported. The vast majority of reported acts of violence, whether minor or severe, fell in the 1-3 range for the past 12 months. Reported frequencies of more than five times in the past year were rare, with less than 10% reporting more than six acts of minor or severe physical violence. As expected, minor acts of violence were much more common than severe acts. Overall, regression analyses were not significant for the dose-response model of ITFV. Conclusions and implications: The results of the dose-response model were not significant due to a lack of power in the final sample (N=338). Nonetheless, the value of the approach was confirmed for future research, given the bi-modal nature of the distributions, which suggests that in the context of both physical child abuse and physical IPV there are at least two classes when frequency of acts is considered. Taking frequency into account in predictive models may help to better understand the relationship of exposure to ITFV outcomes. Further testing using hurdle regression models is suggested.
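As a sketch of the analytic idea only, a hurdle-style dose-response model can be approximated in two parts: a logistic regression for whether any adult IPV perpetration occurs, and a count regression on the positive counts. The study itself embedded the hurdle components within SEM, and the file and variable names below are hypothetical placeholders.

```python
# Sketch of a two-part (hurdle-style) dose-response model: part 1 models whether any
# adult IPV perpetration occurred; part 2 models the frequency among perpetrators.
# Column names and the data file are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("phdcn_cohort15.csv")

# Part 1: the hurdle (any perpetration vs none), predicted by childhood exposure counts.
logit = smf.logit("any_ipv_perpetration ~ child_abuse_freq + witnessed_ipv_freq",
                  data=df).fit()

# Part 2: frequency among those reporting at least one act, approximated here with a
# Poisson GLM on the positive subset (a truncated count model in a fuller treatment).
positives = df[df["ipv_perpetration_count"] > 0]
poisson = smf.glm("ipv_perpetration_count ~ child_abuse_freq + witnessed_ipv_freq",
                  data=positives, family=sm.families.Poisson()).fit()

print(logit.summary())
print(poisson.summary())
```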

Keywords: intergenerational transmission of family violence, physical child abuse, intimate partner violence, structural equation modeling

Procedia PDF Downloads 218
56 Enhanced Dielectric and Ferroelectric Properties in Holmium Substituted Stoichiometric and Non-Stoichiometric SBT Ferroelectric Ceramics

Authors: Sugandha Gupta, Arun Kumar Jha

Abstract:

A large number of ferroelectric materials have been intensely investigated for applications in non-volatile ferroelectric random access memories (FeRAMs), piezoelectric transducers, actuators, pyroelectric sensors, high dielectric constant capacitors, etc. Bismuth-layered ferroelectric materials such as strontium bismuth tantalate (SBT) have attracted a lot of attention due to low leakage current, high remnant polarization, and high fatigue endurance up to 10^12 switching cycles. However, pure SBT suffers from major limitations such as high dielectric loss, low remnant polarization values, high processing temperature, bismuth volatilization, etc. Significant efforts have been made to improve the dielectric and ferroelectric properties of this compound. It has been reported that electrical properties vary with the Sr/Bi content ratio in the SrBi2Ta2O9 composition, i.e., non-stoichiometric compositions with Sr-deficient/Bi-excess content have higher remnant polarization values than stoichiometric SBT compositions. With the objective of improving the structural, dielectric, ferroelectric, and piezoelectric properties of the SBT compound, the rare earth holmium (Ho3+) was chosen as a donor cation for substitution onto the Bi2O2 layer. Moreover, hardly any report on holmium substitution in the stoichiometric SrBi2Ta2O9 and non-stoichiometric Sr0.8Bi2.2Ta2O9 compositions was available in the literature. The holmium-substituted SrBi2-xHoxTa2O9 (x = 0.00-2.0) and Sr0.8Bi2.2Ta2O9 (x = 0.0 and 0.01) compositions were synthesized by the solid state reaction method. The synthesized specimens were characterized for their structural and electrical properties. X-ray diffractograms reveal single-phase layered perovskite structure formation for holmium content in stoichiometric SBT samples up to x ≤ 0.1. The granular morphology of the samples was investigated using a scanning electron microscope (Hitachi, S-3700 N). The dielectric measurements were carried out using a precision LCR meter (Agilent 4284A) operating at an oscillation amplitude of 1 V. The variation of dielectric constant with temperature shows that the Curie temperature (Tc) decreases with increasing holmium content. The specimen with x = 2.0, i.e., the bismuth-free specimen, has a very low dielectric constant and does not show any appreciable variation with temperature. The dielectric loss reduces significantly with holmium substitution. The polarization-electric field (P-E) hysteresis loops were recorded using a P-E loop tracer based on a Sawyer-Tower circuit. It is observed that the ferroelectric properties improve with Ho substitution. The holmium-substituted specimen exhibits an enhanced value of remnant polarization (Pr = 9.22 μC/cm²) as compared to the holmium-free specimen (Pr = 2.55 μC/cm²). The piezoelectric coefficient (d33) was measured using a piezometer system (Piezo Test PM300). It is observed that holmium substitution enhances the piezoelectric coefficient. Further, the optimized holmium content (x = 0.01) in the stoichiometric SrBi2-xHoxTa2O9 composition was substituted into the non-stoichiometric Sr0.8Bi2.2Ta2O9 composition to obtain further enhanced structural and electrical characteristics. It is expected that a new class of ferroelectric materials, i.e., rare earth layered structured ferroelectrics (RLSF) derived from bismuth layered structured ferroelectrics (BLSF), will emerge, which can be used to replace static (SRAM) and dynamic (DRAM) random access memories with ferroelectric random access memories (FeRAMs).

Keywords: dielectrics, ferroelectrics, piezoelectrics, strontium bismuth tantalate

Procedia PDF Downloads 180
55 Trajectories of PTSD from 2-3 Years to 5-6 Years among Asian Americans after the World Trade Center Attack

Authors: Winnie Kung, Xinhua Liu, Debbie Huang, Patricia Kim, Keon Kim, Xiaoran Wang, Lawrence Yang

Abstract:

A considerable number of Asian Americans were exposed to the World Trade Center attack due to the proximity of the site to Chinatown and the sizeable number of South Asians working in the collapsed and damaged buildings nearby. Few studies have focused on Asians in examining the disaster's mental health impact, and even fewer longitudinal studies have been reported beyond the first couple of years after the event. Based on the World Trade Center Health Registry, this study examined the trajectory of PTSD among individuals directly exposed to the attack from 2-3 to 5-6 years after the attack, comparing Asians against the non-Hispanic White group. Participants included 2,431 Asians and 31,455 Whites. Trajectories were delineated into resilient, chronic, delayed-onset, and remitted groups using a PTSD Checklist cut-off score of 44 at the two waves. Logistic regression analyses were conducted to compare the poorer trajectories against the resilient group as a reference, using baseline sociodemographics, exposure to the disaster, lower respiratory symptoms, and previous depression/anxiety disorder diagnosis as predictors, with recruitment source as the control variable. Asians had significantly lower socioeconomic status in terms of income, education, and employment status compared to Whites. Over three-quarters of participants from both races were resilient, though slightly fewer Asians than Whites (76.5% vs 79.8%). Asians had a higher proportion with chronic PTSD (8.6% vs 7.4%) and remission (5.9% vs 3.4%) than Whites. A considerable proportion of participants had delayed onset in both races (9.1% Asians vs 9.4% Whites). The distribution of trajectories differed significantly by race (p<0.0001), with Asians faring poorer. For Asians, in the chronic vs resilient comparison, significant protective factors included age >65, annual household income >$50,000, and never married vs married/cohabiting; risk factors were direct disaster exposure, job loss due to 9/11, loss of someone, tangible loss, lower respiratory symptoms, and previous mental disorder diagnoses. Similar protective and risk factors were noted for the delayed-onset group, except that education was protective and being an immigrant was a risk factor. Between the two comparisons, the chronic group was more vulnerable than the delayed-onset group, as expected. It should also be noted that in both comparisons, Asians' current employment status had no significant impact on their PTSD trajectory. Comparing Asians against Whites, the directions of the relationships between the predictors and the PTSD trajectories were mostly the same, although more factors were significant for Whites than for Asians. A few factors showed significant racial differences: lower respiratory symptoms carried higher risk for Whites than Asians, pre-9/11 mental disorder diagnosis carried higher risk for Asians than Whites, and immigrant status was a risk factor for the remitted vs resilient comparison for Whites but not for Asians. Over 17% of Asians still suffered from PTSD 5-6 years after the WTC attack, signifying its persistent impact, which incurred substantial human, social, and economic costs. The more disadvantaged socioeconomic status of Asians rendered them more vulnerable in their mental health trajectories relative to Whites. Together with their well-documented low tendency to seek mental health help, outreach efforts to this population are needed to ensure follow-up treatment and prevention.
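A minimal sketch of the trajectory comparison described above, assuming a multinomial logistic regression with the resilient group as the reference; the registry extract, file name, and column names are hypothetical placeholders.

```python
# Sketch: multinomial logistic regression of PTSD trajectory (resilient as reference)
# on baseline predictors, run separately for Asian and White participants.
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wtc_registry_subset.csv")
df["traj_code"] = df["trajectory"].map(
    {"resilient": 0, "chronic": 1, "delayed_onset": 2, "remitted": 3})

formula = ("traj_code ~ age_group + household_income + education + marital_status + "
           "direct_exposure + job_loss + lost_someone + tangible_loss + "
           "lower_respiratory_symptoms + prior_depression_anxiety + recruitment_source")

for race, group in df.groupby("race"):          # e.g. 'Asian', 'non-Hispanic White'
    fit = smf.mnlogit(formula, data=group).fit(disp=False)
    print(f"--- {race} ---")
    print(fit.summary())
```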

Keywords: PTSD, Asian Americans, World Trade Center Attack, racial differences

Procedia PDF Downloads 234
54 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario

Authors: Dipankar Saha, J. P. Singh, C. B. Pandey

Abstract:

The Indian Thar desert, the seventh largest in the world, is the country's main hot sand desert; it occupies nearly 385,000 km2, about 9% of the area of the country, and harbours a flora of 682 species (63 introduced species) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, relatively higher than that of the Sahara desert, which is very significant for conservationists to consider. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in a climate change scenario; the digitization process aims to improve access and to preserve delicate specimens, in doing so creating large sets of images as part of the existing repository, an arid plant information facility, for long-term future usage. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens. As part of this activity, laminar characterization (leaves being among the most important characters in assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families, namely Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae, and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems to expand and speed up digitization. The digitization workflows are built on a modular system with the potential to be scaled up; they are being developed with a geo-referencing tool and additional quality control elements, finally placing specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics and to present the institute's ongoing effort to develop a database of its existing botanical collection. This effort is expected to form part of various global initiatives toward an effective biodiversity information facility. This will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations of the world.
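One of the taxonomic diversity indices mentioned above can be computed from digitized specimen counts, as in this sketch of the Shannon index and Pielou evenness; the counts per family are illustrative, not repository data.

```python
# Sketch: compute a Shannon diversity index (H') and Pielou evenness from digitized
# specimen counts per family. Counts below are illustrative, not repository data.
import math

specimens_per_family = {
    "Acanthaceae": 120, "Aizoaceae": 45, "Amaranthaceae": 210, "Asclepiadaceae": 80,
    "Anacardiaceae": 15, "Apocynaceae": 60, "Asteraceae": 330, "Aristolochiaceae": 8,
    "Burseraceae": 22, "Bignoniaceae": 35,
}

total = sum(specimens_per_family.values())
proportions = [n / total for n in specimens_per_family.values()]

shannon = -sum(p * math.log(p) for p in proportions)
evenness = shannon / math.log(len(specimens_per_family))

print(f"Shannon H' = {shannon:.3f}, Pielou evenness J = {evenness:.3f}")
```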

Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface

Procedia PDF Downloads 203
53 Impact of Ocean Acidification on Gene Expression Dynamics during Development of the Sea Urchin Species Heliocidaris erythrogramma

Authors: Hannah R. Devens, Phillip L. Davidson, Dione Deaker, Kathryn E. Smith, Gregory A. Wray, Maria Byrne

Abstract:

Marine invertebrate species with calcifying larvae are especially vulnerable to ocean acidification (OA) caused by rising atmospheric CO₂ levels. Acidic conditions can delay development, suppress metabolism, and decrease the availability of carbonate ions in the ocean environment for skeletogenesis. These stresses often result in increased larval mortality, which may lead to significant ecological consequences including alterations to larval settlement, population distribution, and genetic connectivity. Importantly, many of these physiological and developmental effects are caused by genetic and molecular level changes. Although many studies have examined the effect of near-future oceanic pH levels on gene expression in marine invertebrates, little is known about the impact of OA on gene expression in a developmental context. Here, we performed mRNA sequencing to investigate the impact of environmental acidity on gene expression across three developmental stages in the sea urchin Heliocidaris erythrogramma. We collected RNA from gastrula, early larva, and 1-day post-metamorphic juvenile sea urchins cultured at present-day and predicted future oceanic pH levels (pH 8.1 and 7.7, respectively). We assembled an annotated reference transcriptome encompassing development from egg to ten days post-metamorphosis by combining these data with datasets from two previous developmental transcriptomic studies of H. erythrogramma. Differential gene expression and time course analyses between pH conditions revealed significant alterations to developmental transcription that are potentially associated with pH stress. Consistent with previous investigations, genes involved in biomineralization and ion transport were significantly upregulated under acidic conditions. Differences in gene expression between the two pH conditions became more pronounced post-metamorphosis, suggesting a development-dependent effect of OA on gene expression. Furthermore, many differences in gene expression later in development appeared to result from broad downregulation at pH 7.7: of 539 genes differentially expressed at the juvenile stage, 519 were expressed at lower levels in the acidic condition. Time course comparisons between pH 8.1 and 7.7 samples also demonstrated that over 500 genes were expressed at lower levels in pH 7.7 samples throughout development. Of the genes exhibiting stage-dependent expression level changes, over 15% diverged from the expected temporal pattern of expression in the acidic condition. Through these analyses, we identify novel candidate genes involved in development, metabolism, and transcriptional regulation that are possibly affected by pH stress. Our results demonstrate that pH stress significantly alters gene expression dynamics throughout development. The large number of genes differentially expressed between pH conditions in juveniles relative to earlier stages may be attributed to the effects of acidity on transcriptional regulation, as a greater proportion of mRNA at this later stage has been newly transcribed rather than maternally loaded. Also, the overall downregulation of many genes in the acidic condition suggests that OA-induced developmental delay manifests as suppressed mRNA expression, possibly from lower transcription rates or increased mRNA degradation in the acidic environment. Further studies will be necessary to determine in greater detail the extent of OA effects on early developing marine invertebrates.
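A minimal sketch of the differential expression comparison between pH conditions follows; the actual analysis would typically use dedicated count-based RNA-seq tools (e.g., DESeq2 or edgeR), and the file, sample, and threshold choices here are assumptions.

```python
# Sketch: naive per-gene differential expression between pH 8.1 and pH 7.7 juveniles
# using log2 fold change plus Welch's t-test with FDR correction. File and column
# names are hypothetical placeholders, not the study's pipeline.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

counts = pd.read_csv("juvenile_normalized_counts.csv", index_col=0)  # genes x samples
ph81 = counts[["ph81_rep1", "ph81_rep2", "ph81_rep3"]]
ph77 = counts[["ph77_rep1", "ph77_rep2", "ph77_rep3"]]

log2fc = np.log2(ph77.mean(axis=1) + 1) - np.log2(ph81.mean(axis=1) + 1)
pvals = ttest_ind(ph77, ph81, axis=1, equal_var=False).pvalue
padj = multipletests(pvals, method="fdr_bh")[1]

results = pd.DataFrame({"log2FC_7.7_vs_8.1": log2fc, "padj": padj})
de = results[results["padj"] < 0.05]
print(f"{len(de)} differentially expressed genes; "
      f"{(de['log2FC_7.7_vs_8.1'] < 0).sum()} expressed at lower levels at pH 7.7")
```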

Keywords: development, gene expression, ocean acidification, RNA-sequencing, sea urchins

Procedia PDF Downloads 130
52 Differential Survival Rates of Pseudomonas aeruginosa Strains on the Wings of Pantala flavescens

Authors: Banu Pradheepa Kamarajan, Muthusamy Ananthasubramanian

Abstract:

Biofilm-forming Pseudomonads occupy the top third position in causing hospital-acquired infections. P. aeruginosa is notorious for its tendency to develop drug resistance. Major classes of drugs such as β-lactams, aminoglycosides, quinolones, and polymyxins are found to be ineffective against multi-drug-resistant Pseudomonas. To combat the infections, rather than administration of a single antibiotic, the use of combinations (tobramycin and essential oils from plants and/or silver nanoparticles, chitosan, nitric oxide, cis-2-decenoic acid) in a single formulation has been suggested to control P. aeruginosa biofilms. Conventional techniques to prevent hospital-acquired implant infections, such as coatings with antibiotics, controlled release of antibiotics from the implant material, contact-killing surfaces, coating the implants with functional DNase I, and coating with glycoside hydrolase, are currently followed. Coatings with bioactive components, besides having a limited shelf-life, require a cold chain and are likely to fail when bacteria develop resistance. Recently identified nano-scale physical architectures on insect wings are expected to have bactericidal properties. Nanopillars are bactericidal to Staphylococcus aureus, Bacillus subtilis, K. pneumoniae, and a few species of Pseudomonas. Our study aims to investigate the survival rate of a biofilm-forming Pseudomonas aeruginosa strain relative to a non-biofilm-forming strain on the nanopillar architecture of the dragonfly (Pantala flavescens) wing. Dragonflies were collected near household areas, and insect identification was carried out by the Department of Entomology, Tamilnadu Agricultural University, Coimbatore, India. Two strains of P. aeruginosa, PAO1 (a potent biofilm former) and MTCC 1688 (a non-/weak biofilm former), were tested against glass coverslips (control) and dragonfly wings (test) for 48 h. The wings/glass coverslips were incubated with bacterial suspension in a 48-well plate. The plates were incubated at 37 °C under static conditions. Bacterial attachment on the nanopillar architecture of the wing surface was visualized using FESEM. The survival rate of P. aeruginosa was tested using the colony counting technique and flow cytometry at 0.5 h, 1 h, 2 h, 7 h, 24 h, and 48 h post-incubation. Cell death was analyzed using propidium iodide staining and DNA quantification. The results indicated that the survival rate of non-biofilm-forming P. aeruginosa was 0.2%, whilst that of the biofilm former was 45% on the dragonfly wings at the end of 48 h. The reduction in the survival rate of biofilm-forming and non-biofilm-forming P. aeruginosa was 20% and 40%, respectively, on the wings compared to the glass coverslip. In addition, Fourier transform infrared (FTIR) spectroscopy was used to study modification of the surface chemical composition of the wing during bacterial attachment and post-sonication. The conserved characteristic peaks of chitin pre- and post-sonication indicated that chemical moieties are not involved in the bactericidal property of the nanopillars. The nanopillar architecture of the dragonfly wing efficiently deters the survival of non-biofilm-forming P. aeruginosa, but not the biofilm-forming strain. The study highlights the ability of biofilm formers to survive on the wing architecture. Understanding this survival strategy will help in designing architectures that combat the colonization of biofilm-forming pathogens.
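The survival rates reported above follow from simple colony-count ratios; a minimal sketch with illustrative CFU values (not study data) is shown below.

```python
# Sketch: percent survival from colony counts (CFU/mL) on the wing surface vs. the
# glass coverslip control after 48 h. All counts are illustrative, not study data.
def percent_survival(cfu_final, cfu_initial):
    return 100.0 * cfu_final / cfu_initial

initial = 1.0e6                       # CFU/mL inoculum
wing_pao1, glass_pao1 = 4.5e5, 6.5e5  # biofilm former: wing vs glass
wing_mtcc, glass_mtcc = 2.0e3, 4.0e5  # non-biofilm former: wing vs glass

for label, wing, glass in [("PAO1", wing_pao1, glass_pao1),
                           ("MTCC 1688", wing_mtcc, glass_mtcc)]:
    s_wing = percent_survival(wing, initial)
    s_glass = percent_survival(glass, initial)
    print(f"{label}: wing {s_wing:.1f}% vs glass {s_glass:.1f}% "
          f"(reduction {s_glass - s_wing:.1f} percentage points)")
```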

Keywords: biofilm, nanopillars, Pseudomonas aeruginosa, survival rate

Procedia PDF Downloads 153
51 Degradation of Diclofenac in Water Using FeO-Based Catalytic Ozonation in a Modified Flotation Cell

Authors: Miguel A. Figueroa, José A. Lara-Ramos, Miguel A. Mueses

Abstract:

Pharmaceutical residues are a class of emerging contaminants of anthropogenic origin that are present in a myriad of waters with which human beings interact daily and are starting to affect the ecosystem directly. Conventional wastewater treatment systems are not capable of degrading these pharmaceutical effluents because their designs cannot handle the intermediate products and biological effects occurring during treatment. That is why it is necessary to hybridize conventional wastewater systems with non-conventional processes. In the specific case of an ozonation process, its efficiency highly depends on a good dispersion of ozone, long interaction times between the gas and liquid phases, and the size of the ozone bubbles formed throughout the reaction system. In order to improve these parameters, the use of a modified flotation cell has recently been proposed as a reactive system; flotation cells are used at an industrial level to facilitate the suspension of particles and spread gas bubbles through the reactor volume at a high rate. The objective of the present work is the development of a mathematical model that can closely predict the kinetic rates of the reactions taking place in the flotation cell at an experimental scale by identifying proper reaction mechanisms that take into account the modified chemical and hydrodynamic factors in the FeO-catalyzed ozonation of diclofenac aqueous solutions in a flotation cell. The methodology comprises three steps: an experimental phase, where a modified flotation cell reactor is used to analyze the effects of ozone concentration and catalyst loading on the degradation of diclofenac aqueous solutions, with performance evaluated through an index of utilized ozone, which relates the amount of ozone supplied to the system per milligram of degraded pollutant; a theoretical phase, where the reaction mechanisms taking place during the experiments are identified and proposed, detailing the multiple direct and indirect reactions the system goes through; and, finally, a kinetic model that mathematically represents the reaction mechanisms with adjustable parameters that can be fitted to the experimental results and give the model a proper physical meaning. The expected result is a robust reaction rate law that can simulate the improved results of diclofenac mineralization in water using the modified flotation cell reactor. By means of this methodology, the following results were obtained: a robust reaction pathway mechanism showcasing the intermediates, free radicals, and products of the reaction; optimal values of reaction rate constants that yielded simulated Hatta numbers lower than 3 for the system modeled; degradation percentages of 100%; and a TOC (total organic carbon) removal percentage of 69.9%, requiring an optimal FeO catalyst loading of only 0.3 g/L. These results showed that a flotation cell can be used as a reactor in ozonation, catalytic ozonation, and photocatalytic ozonation processes, since it produces high reaction rate constants and reduces the mass transfer limitations (Ha > 3) associated with conventional systems by producing microbubbles and maintaining a good catalyst distribution.
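The index of utilized ozone and an apparent pseudo-first-order fit of diclofenac decay can be sketched as follows; all concentrations, volumes, and ozone doses are illustrative assumptions, not the experimental dataset.

```python
# Sketch: (i) index of utilized ozone = mg O3 supplied per mg of diclofenac degraded,
# and (ii) an apparent pseudo-first-order fit of diclofenac decay in the flotation cell.
# All numbers are illustrative placeholders, not the experimental data.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k_app):
    return c0 * np.exp(-k_app * t)

t_min = np.array([0, 10, 20, 30, 45, 60])
dcf_mg_l = np.array([30.0, 18.5, 11.2, 6.9, 3.1, 1.4])   # diclofenac concentration

(c0, k_app), _ = curve_fit(first_order, t_min, dcf_mg_l, p0=(30.0, 0.05))

volume_l = 2.0                        # assumed working volume of the flotation cell
ozone_supplied_mg = 450.0             # assumed total O3 fed over the run
degraded_mg = (dcf_mg_l[0] - dcf_mg_l[-1]) * volume_l
utilization_index = ozone_supplied_mg / degraded_mg

print(f"apparent k = {k_app:.3f} 1/min")
print(f"ozone utilization index = {utilization_index:.1f} mg O3 per mg diclofenac degraded")
```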

Keywords: advanced oxidation technologies, iron oxide, emergent contaminants, AOTS intensification

Procedia PDF Downloads 87
50 Impact of Primary Care Telemedicine Consultations On Health Care Resource Utilisation: A Systematic Review

Authors: Anastasia Constantinou, Stephen Morris

Abstract:

Background: The adoption of synchronous and asynchronous telemedicine modalities for primary care consultations has increased exponentially since the COVID-19 pandemic. However, there is limited understanding of how virtual consultations influence healthcare resource utilization and other quality measures, including safety, timeliness, efficiency, patient and provider satisfaction, cost-effectiveness, and environmental impact. Aim: To quantify the rate of follow-up visits, emergency department visits, hospitalizations, and requests for investigations and prescriptions, and to comment on the effect on different quality measures associated with the different telemedicine modalities used for primary care services and for primary care referrals to secondary care. Design and setting: Systematic review in primary care. Methods: A systematic search was carried out across three databases (Medline, PubMed and Scopus) between August and November 2023, using terms related to telemedicine, general practice, electronic referrals, follow-up, use and efficiency, supported by citation searching. This was followed by screening according to pre-defined criteria, data extraction, and critical appraisal. Narrative synthesis and meta-analysis of quantitative data were used to summarize findings. Results: The search identified 2230 studies; 50 studies are included in this review. Asynchronous modalities were prevalent in both primary care services (68%) and referrals from primary care to secondary care (83%), and most of the study participants were female (63.3%), with a mean age of 48.2 years. The average follow-up rate for virtual consultations in primary care was 28.4% (eVisits 36.8%, secure messages 18.7%, videoconference 23.5%), with no significant difference between them or compared with face-to-face consultations. There was an average annual reduction in primary care visits of 0.09/patient, an increase in telephone visits of 0.20/patient, an increase in ED encounters of 0.011/patient, an increase in hospitalizations of 0.02/patient, and an increase in out-of-hours visits of 0.019/patient. Laboratory testing was requested on average for 10.9% of telemedicine patients, imaging or procedures for 5.6%, and prescriptions for 58.7% of patients. For referrals to secondary care, on average 36.7% of virtual referrals required a follow-up visit, with the average follow-up rate for electronic referrals being higher than for videoconferencing (39.2% vs 23%, p=0.167). Technical failures were reported on average for 1.4% of virtual consultations in primary care. Using carbon footprint estimates, we calculate that the use of telemedicine in primary care services can potentially provide a net decrease in carbon footprint of 0.592 kg CO2/patient/year. When follow-up rates are taken into account, we estimate that virtual consultations reduce the carbon footprint of primary care services by 2.3 times and of secondary care referrals by 2.2 times. No major concerns regarding quality of care or patient satisfaction were identified. Five of the seven studies that addressed cost-effectiveness reported increased savings. Conclusions: Telemedicine provides quality, cost-effective, and environmentally sustainable care for patients in primary care, with inconclusive evidence regarding the rates of subsequent healthcare utilization. The evidence is limited by heterogeneous, small-scale studies and a lack of prospective comparative studies.
Further research to identify the most appropriate telemedicine modality for different patient populations, clinical presentations, and types of service provision (e.g. follow-up of patients rather than initial diagnosis), as well as further education for patients and providers alike on how to make best use of this service, is expected to improve outcomes and influence practice.
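A minimal sketch of the kind of quantitative pooling described above, assuming a random-effects (DerSimonian-Laird) pooled proportion of follow-up after virtual consultations; the study event counts and sample sizes are illustrative placeholders, not the extracted data.

```python
# Sketch: random-effects (DerSimonian-Laird) pooling of follow-up proportions after
# virtual primary care consultations. Events and sample sizes are illustrative only.
import numpy as np

events = np.array([120, 45, 260, 80, 33])       # patients needing follow-up, per study
n = np.array([400, 190, 700, 350, 110])         # patients per study

p = events / n
var = p * (1 - p) / n                           # within-study variance of a proportion
w = 1 / var

# DerSimonian-Laird between-study variance (tau^2)
p_fixed = np.sum(w * p) / np.sum(w)
q = np.sum(w * (p - p_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(p) - 1)) / c)

w_re = 1 / (var + tau2)
p_pooled = np.sum(w_re * p) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))

print(f"pooled follow-up rate = {p_pooled:.1%} "
      f"(95% CI {p_pooled - 1.96 * se:.1%} to {p_pooled + 1.96 * se:.1%})")
```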

Keywords: telemedicine, healthcare utilisation, digital interventions, environmental impact, sustainable healthcare

Procedia PDF Downloads 35
49 Application of the Pattern Method to Form the Stable Neural Structures in the Learning Process as a Way of Solving Modern Problems in Education

Authors: Liudmyla Vesper

Abstract:

The problems of modern education are large-scale and diverse. The aspirations of parents, teachers, and experts converge: everyone is interested in raising a generation of well-rounded, well-educated persons. Both the family and society expect the future generation to be self-sufficient, desirable in the labor market, and capable of lifelong learning. Today's children have a powerful potential that is difficult to realize under traditional school approaches. Focusing on STEM education in practice often ends with the simple use of computers and gadgets during class. "Science", "technology", "engineering" and "mathematics" are difficult to combine within school and university curricula, which have not changed much during the last 10 years. Solving the problems of modern education largely depends on teachers: innovators and practitioners who develop and implement effective educational methods and programs, and who propose innovative pedagogical practices that allow students to master large-scale knowledge and apply it in practice. Effective education involves the creation of stable neural structures during the learning process, which allow knowledge to be preserved and increased throughout life. The author proposes a method of integrated lessons (cases) based on maths patterns for forming a holistic perception of the world. This method and program are scientifically substantiated and have more than 15 years of practical application experience in school and university classrooms. The first results of the practical application of the author's methodology and curriculum were announced at the International Conference "Teaching and Learning Strategies to Promote Elementary School Success", April 22-23, 2006, Yerevan, Armenia, under the IREX-administered 2004-2006 Multiple Component Education Project. The program is based on the concept of interdisciplinary connections and its implementation in the process of continuous learning. This allows students to preserve and increase knowledge throughout life according to a single pattern. The pattern principle stores information on different subjects according to one scheme (pattern), using long-term memory; this is how stable neural structures are created. The author also suggests that a similar method could be successfully applied to the training of artificial intelligence neural networks; however, this assumption requires further research and verification. The educational method and program proposed by the author meet modern requirements for education, which involve mastering various areas of knowledge starting from an early age. This approach makes it possible to engage the child's cognitive potential as much as possible and direct it to the preservation and development of individual talents. According to the methodology, at the early stages of learning students understand the connections between school subjects (the so-called "sciences" and "humanities") and real life, and apply the knowledge gained in practice. This approach allows students to realize their natural creative abilities and talents, which makes it easier to navigate professional choices and find their place in life.

Keywords: science education, maths education, AI, neuroplasticity, innovative education problem, creativity development, modern education problem

Procedia PDF Downloads 29
48 Skin-to-Skin Contact Simulation: Improving Health Outcomes for Medically Fragile Newborns in the Neonatal Intensive Care Unit

Authors: Gabriella Zarlenga, Martha L. Hall

Abstract:

Introduction: Premature infants are at risk for neurodevelopmental deficits and hospital readmissions, which can increase the financial burden on the health care system and families. Kangaroo care (skin-to-skin contact) is a practice that can improve preterm infant health outcomes. Preterm infants can acquire adequate body temperature, heartbeat, and breathing regulation by lying directly on the mother's abdomen and between her breasts. Due to some infants' conditions, kangaroo care is not always a feasible intervention. The purpose of this proof-of-concept research project is to create a device that simulates skin-to-skin contact for preterm infants not eligible for kangaroo care, with the aim of promoting the baby's health outcomes, reducing the incidence of serious neonatal and early childhood illnesses, and/or improving cognitive, social, and emotional aspects of development. Methods: The study design is a proof of concept based on a three-phase approach: (1) an observational study and data analysis of the standard of care for two groups of preterm infants, (2) design and concept development of a novel device for preterm infants not currently eligible for standard kangaroo care, and (3) prototyping, laboratory testing, and evaluation of the novel device against current assessment parameters of kangaroo care. A single-center study will be conducted in an area hospital offering Level III neonatal intensive care. Eligible participants include newborns born prematurely (28-30 weeks gestational age) admitted to the NICU. The study design includes two groups: a control group receiving standard kangaroo care and an experimental group not eligible for kangaroo care. Based on behavioral analysis of observational video data collected in the NICU, the device will be created to simulate the mother's body using electrical components in a thermoplastic polymer housing covered in silicone. It will be designed with a microprocessor that controls the simulated respiration, heartbeat, and body temperature of the 'simulated caregiver' by using a pneumatic lung, vibration sensors (heartbeat), pressure sensors (weight/position), and resistive film to measure temperature. A slight contour of the simulator surface may be integrated to help position the infant correctly. Control and monitoring of the skin-to-skin contact simulator would be performed locally via an integrated touchscreen. The unit would have built-in Wi-Fi connectivity as well as an optional Bluetooth connection through which the respiration and heart rate could be synced with a parent or caregiver. A camera would be integrated, allowing a video stream of the infant in the simulator to be sent to a monitoring location. Findings: Expected outcomes are stabilization of respiratory and cardiac rates, thermoregulation of infants not eligible for skin-to-skin contact with their mothers, and real-time Bluetooth syncing with the mother's vital signs to mimic the experience in the womb. Results of this study will benefit clinical practice by creating a new standard of care for premature neonates in the NICU who are deprived of skin-to-skin contact due to various health restrictions.
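A schematic sketch of the control loop the abstract describes, with entirely hypothetical sensor and actuator interfaces; it is not firmware from the actual device.

```python
# Schematic sketch (hypothetical interfaces): the microprocessor drives breathing and
# heartbeat actuators at caregiver-synced rates and holds skin temperature with a simple
# thermostat. All functions are placeholders, not firmware from the actual device.
import time

TARGET_SKIN_TEMP_C = 36.5
breaths_per_min = 16     # may be overwritten by the Bluetooth-synced caregiver rate
beats_per_min = 72

def read_skin_temperature():      # placeholder for the resistive-film temperature sensor
    return 36.2

def set_heater(on):               # placeholder heater driver (no-op in this sketch)
    pass

def pulse_pneumatic_lung():       # placeholder respiration actuator
    print("inflate/deflate pneumatic lung")

def pulse_heartbeat_vibration():  # placeholder heartbeat actuator
    print("heartbeat vibration pulse")

def control_cycle(duration_s=5):
    start = time.time()
    next_breath = next_beat = start
    while time.time() - start < duration_s:
        now = time.time()
        if now >= next_breath:
            pulse_pneumatic_lung()
            next_breath = now + 60.0 / breaths_per_min
        if now >= next_beat:
            pulse_heartbeat_vibration()
            next_beat = now + 60.0 / beats_per_min
        set_heater(read_skin_temperature() < TARGET_SKIN_TEMP_C)
        time.sleep(0.05)

if __name__ == "__main__":
    control_cycle()
```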

Keywords: kangaroo care, wearable technology, pre-term infants, medical design

Procedia PDF Downloads 137
47 In Vitro Intestine Tissue Model to Study the Impact of Plastic Particles

Authors: Ashleigh Williams

Abstract:

Micro- and nanoplastics' (MNLPs) omnipresence and ecological accumulation are evident in recent environmental impact studies. For example, in 2014 it was estimated that at least 52.3 trillion plastic microparticles are floating at sea, and scientists have even found plastics in remote Arctic ice and snow (5,6). Plastics have even found their way into precipitation, with more than 1000 tons of microplastic raining onto the Western United States in 2020. Even more recent studies evaluating the chemical safety of reusable plastic bottles found that hundreds of chemicals leached into the control liquid in the bottle (ddH2O, pH = 7) over a 24-hour period. A consequence of the increasing abundance of plastic waste in the air, land, and water every year is the bioaccumulation of MNLPs in ecosystems and trophic niches of the animal food chain, which could potentially increase direct and indirect exposure of humans to MNLPs via inhalation, ingestion, and dermal contact. Though the detrimental, toxic effects of MNLPs have been established in marine biota, much less is known about the potentially hazardous health effects of chronic MNLP ingestion in humans. Recent data indicate that long-term exposure to MNLPs could cause inflammatory and dysbiotic effects; however, toxicity appears to be largely dose- as well as size-dependent. In addition, the mechanisms of transcytotic uptake of MNLPs through the intestinal epithelium in humans remain relatively unknown. To this end, the goal of the current study was to investigate the mechanisms of micro- and nanoplastic uptake and transcytosis of polystyrene (PS) in human stem-cell-derived, physiologically relevant in vitro intestinal model systems, and to compare the relative effects of particle size (30 nm, 100 nm, 500 nm, and 1 µm) and concentration (0 µg/mL, 250 µg/mL, 500 µg/mL, 1000 µg/mL) on polystyrene MNLP uptake, transcytosis, and intestinal epithelial model integrity. Observational and quantitative data obtained from confocal microscopy, immunostaining, transepithelial electrical resistance (TEER) measurements, cryosectioning, and ELISA assays of the proinflammatory cytokines Interleukin-6 and Interleukin-8 were used to evaluate the localization and transcytosis of polystyrene MNLPs and their impact on epithelial integrity in human-derived intestinal in vitro model systems. The effect of Microfold (M) cell induction on polystyrene micro- and nanoparticle uptake, transcytosis, and potential inflammation was also assessed and compared to samples grown under standard conditions. Microfold (M) cells link the human intestinal system to the immune system and are the primary cells in the epithelium responsible for sampling and transporting foreign matter of interest from the lumen of the gut to underlying immune cells. Given the capability of Microfold cells to interact both specifically and nonspecifically with abiotic and biotic materials, it was expected that M-cell-induced in vitro samples would show increased binding, localization, and potentially transcytosis of polystyrene MNLPs across the epithelial barrier. The experimental results of this study will not only help in the evaluation of plastic toxicity but also allow for more detailed modeling of gut inflammation and the intestinal immune system.

Keywords: nanoplastics, enteroids, intestinal barrier, tissue engineering, microfold (M) cells

Procedia PDF Downloads 65
46 Variations in Spatial Learning and Memory across Natural Populations of Zebrafish, Danio rerio

Authors: Tamal Roy, Anuradha Bhat

Abstract:

Cognitive abilities aid fishes in foraging, avoiding predators, and locating mates. Factors like predation pressure and habitat complexity govern learning and memory in fishes. This study aims to compare spatial learning and memory across four natural populations of zebrafish. Zebrafish, a small cyprinid, inhabits a diverse range of freshwater habitats, and this makes it amenable to studies investigating the role of the native environment in spatial cognitive abilities. Four populations were collected across India from waterbodies with contrasting ecological conditions. Habitat complexity of the water bodies was evaluated as a combination of channel substrate diversity and diversity of vegetation. Experiments were conducted on the populations under controlled laboratory conditions. A square-shaped spatial testing arena (maze) was constructed for testing the performance of adult zebrafish. The square tank consisted of an inner square-shaped layer whose edges were connected to the diagonal ends of the tank walls, thereby forming four separate chambers. Each of the four chambers had a main door in the centre. Each chamber had three sections separated by two windows. A removable coloured window pane (red, yellow, green or blue) identified each main door. A food reward associated with an artificial plant was always placed inside the left-hand section of the red-door chamber. The position of the food reward and plant within the red-door chamber was fixed. A test fish had to explore the maze by taking turns and locate the food inside the reward section of the red-door chamber. Fishes were sorted from each population stock and kept individually in separate containers for identification. At a time, a test fish was released into the arena and allowed 20 minutes to explore in order to find the food reward. In this way, individual fishes were trained through the maze to locate the food reward for eight consecutive days. The position of the red door, with the plant and the reward, was shuffled every day. Following training, an intermission of four days was given, during which the fishes were not subjected to trials. Post-intermission, the fishes were re-tested on the 13th day following the same protocol for their ability to remember the learnt task. Exploratory tendencies and latency of individuals to explore on the first day of training, performance time across trials, and the number of mistakes made each day were recorded. Additionally, the mechanism used by individuals to solve the maze each day was analyzed across populations. Fishes could be expected to use an algorithm (a sequence of turns) or associative cues to locate the food reward. Individuals of the populations did not differ significantly in latencies and tendencies to explore. No relationship was found between exploration and learning across populations. High habitat-complexity populations had higher rates of learning and stronger memory, while low habitat-complexity populations had lower rates of learning and much reduced abilities to remember. High habitat-complexity populations used associative cues more than the algorithm for learning and remembering, while low habitat-complexity populations used both equally. The study, therefore, helped in understanding the role of natural ecology in explaining variations in spatial learning abilities across populations.
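The day-by-day error counts can be summarized with a simple learning-curve slope per population, as in this sketch; the values are illustrative, not the observed data.

```python
# Sketch: estimate a learning rate per population as the slope of mean mistakes per day
# over the eight training days. Values are illustrative, not the observed data.
import numpy as np

days = np.arange(1, 9)
mean_mistakes = {
    "high-complexity population": np.array([6.1, 5.0, 4.2, 3.1, 2.4, 1.8, 1.3, 1.0]),
    "low-complexity population":  np.array([6.3, 5.9, 5.5, 5.2, 4.8, 4.6, 4.3, 4.1]),
}

for pop, errors in mean_mistakes.items():
    slope, intercept = np.polyfit(days, errors, 1)  # negative slope = faster learning
    print(f"{pop}: learning slope = {slope:.2f} mistakes/day")
```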

Keywords: algorithm, associative cue, habitat complexity, population, spatial learning

Procedia PDF Downloads 269
45 Ethical Considerations of Disagreements Between Clinicians and Artificial Intelligence Recommendations: A Scoping Review

Authors: Adiba Matin, Daniel Cabrera, Javiera Bellolio, Jasmine Stewart, Dana Gerberi (librarian), Nathan Cummins, Fernanda Bellolio

Abstract:

OBJECTIVES: Artificial intelligence (AI) tools are becoming more prevalent in healthcare settings, particularly for diagnostic and therapeutic recommendations, with a surge expected in the coming years. The bedside use of this technology opens the possibility of disagreements between the recommendations from AI algorithms and clinicians' judgment. There is a paucity of literature analyzing the nature and possible outcomes of these potential conflicts, particularly with regard to ethical considerations. The goal of this scoping review is to identify, analyze, and classify current themes and potential strategies addressing ethical conflicts originating from disagreement between AI and human recommendations. METHODS: A protocol was written prior to the initiation of the study. Relevant literature was searched by a medical librarian for the terms artificial intelligence, healthcare, and liability, ethics, or conflict. The search was run in 2021 in Ovid Cochrane Central Register of Controlled Trials, Embase, Medline, IEEE Xplore, Scopus, and Web of Science Core Collection. Articles describing the role of AI in healthcare that mentioned conflict between humans and AI were included in the primary search. Two investigators working independently and in duplicate screened titles and abstracts and reviewed the full text of potentially eligible studies. Data were abstracted into tables and reported by themes. We followed the methodological guidelines of the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). RESULTS: Of 6846 titles and abstracts, 225 full texts were selected, and 48 articles were included in this review; 23 were original research and review papers, and 25 were editorials and commentaries with similar themes. There was a lack of consensus in the included articles on who would be held liable for mistakes incurred by following AI recommendations. There appears to be a dichotomy in the perceived ethical consequences depending on whether the negative outcome is the result of a human versus AI conflict or secondary to a deviation from the standard of care. Themes identified included transparency versus opacity of recommendations, data bias, liability for outcomes, regulatory frameworks, and the overall scope of artificial intelligence in healthcare. A relevant issue identified was clinicians' concern about the 'black box' nature of these recommendations and their ability to judge the appropriateness of AI guidance. CONCLUSION: AI clinical tools are being rapidly developed and adopted, and the use of this technology will create conflicts between AI algorithms and healthcare workers, with various outcomes. In turn, these conflicts may have legal and ethical implications. There is limited consensus about ethical and legal liability for outcomes arising from such disagreements. This scoping review identified the importance of framing the problem in terms of whether or not there is a conflict with the standard of care, informed by the themes of transparency/opacity, data bias, legal liability, absent regulatory frameworks, and understanding of the technology. Finally, only limited recommendations to mitigate ethical conflicts between AI and humans have been identified. Further work is necessary in this field.

Keywords: ethics, artificial intelligence, emergency medicine, review

Procedia PDF Downloads 69
44 Soil Composition in Different Agricultural Crops under Application of Swine Wastewater

Authors: Ana Paula Almeida Castaldelli Maciel, Gabriela Medeiros, Amanda de Souza Machado, Maria Clara Pilatti, Ralpho Rinaldo dos Reis, Silvio Cesar Sampaio

Abstract:

Sustainable agricultural systems are crucial to ensuring global food security and the long-term production of nutritious food. Comprehensive soil and water management practices, including nutrient management, balanced fertilizer use, and appropriate waste management, are essential for sustainable agriculture. Swine wastewater (SWW) treatment has become a significant focus due to environmental concerns related to heavy metals, antibiotics, resistant pathogens, and nutrients. In South America, small farms use soil to dispose of animal waste, a practice that is expected to increase with global pork production. The potential of SWW as a nutrient source is promising, contributing to global food security, nutrient cycling, and mineral fertilizer reduction. Short- and long-term studies have evaluated the effects of SWW on soil and plant parameters, such as nutrients, heavy metals, organic matter (OM), cation exchange capacity (CEC), and pH. Although promising results have been observed in short- and medium-term applications, long-term applications require more attention due to heavy metal concentrations. Organic soil amendment strategies, due to their economic and ecological benefits, are commonly used to reduce the bioavailability of heavy metals. However, the rate of degradation and initial levels of OM must be monitored to avoid changes in soil pH and release of metals. The study aimed to evaluate the long-term effects of SWW application on soil fertility parameters, focusing on calcium (Ca), magnesium (Mg), and potassium (K), in addition to CEC and OM. Experiments were conducted at the Universidade Estadual do Oeste do Paraná, Brazil, using 24 drainage lysimeters over nine years, with different application rates of SWW and mineral fertilization. Principal Component Analysis (PCA) was then conducted to summarize the data into composite variables, known as principal components (PCs), and to limit the dimensionality to be evaluated. The retained PCs were then correlated with the original variables to identify the level of association between each variable and each PC. Data were interpreted using analysis of variance (ANOVA) for general linear models (GLM). As OM was not measured in the 2007 soybean experiment, it was assessed separately from the PCA to avoid loss of information. The PCA and ANOVA indicated that crop type, SWW, and mineral fertilization significantly influenced soil nutrient levels. Soybean plots presented higher Ca and Mg concentrations and higher CEC. The application of SWW influenced K levels, with higher concentrations observed for SWW from biodigesters and higher doses of swine manure. Variability in nutrient concentrations in SWW due to factors such as animal age and feed composition makes standard recommendations challenging. OM levels increased in SWW-treated soils, improving soil fertility and structure. In conclusion, the application of SWW can increase soil fertility and crop productivity while reducing environmental risks; however, careful management and long-term monitoring are essential to optimize benefits and minimize adverse effects.
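A minimal sketch of the PCA summarization described above, followed by correlating the retained components with the original variables; the soil data file and column names are hypothetical placeholders.

```python
# Sketch: PCA on standardized soil fertility variables (Ca, Mg, K, CEC) followed by
# correlating retained components with the original variables, as described above.
# File and column names are hypothetical placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

soil = pd.read_csv("lysimeter_soil_fertility.csv")
variables = ["Ca", "Mg", "K", "CEC"]

scaled = StandardScaler().fit_transform(soil[variables])
pca = PCA(n_components=2)
scores = pca.fit_transform(scaled)

print("explained variance ratio:", pca.explained_variance_ratio_)

# Correlation (loading-like association) between retained PCs and the original variables.
scores_df = pd.DataFrame(scores, columns=["PC1", "PC2"], index=soil.index)
combined = pd.concat([soil[variables], scores_df], axis=1)
print(combined.corr().loc[variables, ["PC1", "PC2"]])
```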

Keywords: contamination, water research, biodigester, nutrients

Procedia PDF Downloads 24