Search results for: transmission and distribution industries
953 Assessment of Environmental Quality of an Urban Setting
Authors: Namrata Khatri
Abstract:
The rapid growth of cities is transforming the urban environment and posing significant challenges for environmental quality. This study examines the urban environment of Belagavi in Karnataka, India, using geostatistical methods to assess the spatial pattern and land use distribution of the city and to evaluate the quality of the urban environment. The study is driven by the necessity to assess the environmental impact of urbanisation. Satellite data was utilised to derive information on land use and land cover. The investigation revealed that land use had changed significantly over time, with a drop in vegetation cover and an increase in built-up areas. High-resolution satellite data was also utilised to map the city's open areas and gardens. GIS-based research was used to assess public green space accessibility and to identify regions with inadequate waste management practices. The findings revealed that garbage collection and disposal techniques in specific areas of the city needed to be improved. Moreover, the study evaluated the city's thermal environment using Landsat 8 land surface temperature (LST) data. The investigation found that built-up regions had higher LST values than green areas, pointing to the city's urban heat island (UHI) effect. The study's conclusions have far-reaching ramifications for urban planners and policymakers in Belgaum and other similar cities. The findings may be utilised to create sustainable urban planning strategies that address the environmental effect of urbanisation while also improving the quality of life for city dwellers. Satellite data and high-resolution satellite images were gathered for the study, and remote sensing and GIS tools were utilised to process and analyse the data. Ground-truthing surveys were also carried out to confirm the accuracy of the remote sensing and GIS-based data. 
Overall, this study provides a complete assessment of Belgaum's environmental quality and emphasizes the potential of remote sensing and geographic information systems (GIS) approaches in environmental assessment and management.
Keywords: environmental quality, UEQ, remote sensing, GIS
Procedia PDF Downloads 80
952 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint
Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar
Abstract:
Because landslides can seriously harm both the environment and society, landslide susceptibility maps are developed from past landslide failure points, for example with the frequency ratio (FR) and analytical hierarchy process (AHP) methods. However, it is still difficult to select the most efficient method and to correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, namely Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial resolution. According to the classification results based on inventory landslide points, the performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods. The findings also showed that around 35% of the study region consisted of areas with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include the western and northern parts of Amhara Saint Town and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. 
It also suggests that different areas should adopt different safeguards to reduce or prevent serious damage from landslide events.
Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine
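The two headline metrics used to rank the five models can be illustrated with a small, self-contained sketch. The counts and scores below are made up for illustration, not taken from the study's dataset:

```python
# Illustrative only: how an F1-score and an AUC of the kind reported for
# the five models (e.g., RF: F1 0.88, AUC 0.94) are computed.

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def auc(pos_scores, neg_scores):
    """Probability that a random landslide point outscores a random stable
    point (Mann-Whitney formulation of the ROC AUC; ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(round(f1_score(tp=88, fp=12, fn=12), 2))           # 0.88
print(round(auc([0.9, 0.8, 0.7], [0.4, 0.75, 0.6]), 2))  # 0.89
```

In practice, the susceptibility scores would come from the fitted classifiers and the positive/negative labels from the landslide inventory points.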
Procedia PDF Downloads 82
951 A Study on Thermal and Flow Characteristics by Solar Radiation for Single-Span Greenhouse by Computational Fluid Dynamics Simulation
Authors: Jonghyuk Yoon, Hyoungwoon Song
Abstract:
Recently, there has been growing interest in smart farming, the application of modern Information and Communication Technologies (ICT) to agriculture, since it provides a methodology to optimize production efficiency by managing the growing conditions of crops automatically. In order to obtain high performance and stability for a smart greenhouse, it is important to identify the effect of various working parameters such as ventilation fan capacity and vent opening area. In the present study, a 3-dimensional CFD (Computational Fluid Dynamics) simulation of a single-span greenhouse was conducted using the commercial program Ansys CFX 18.0. The numerical simulation was implemented to figure out the internal thermal and flow characteristics. In order to numerically model solar radiation, which spreads over a wide range of wavelengths, a multiband model that discretizes the spectrum into finite wavelength bands based on Wien's law was applied to the simulation. In addition, an absorption coefficient of the vinyl cover that varies with the wavelength band was applied based on the Beer-Lambert law. To validate the numerical method applied herein, the numerical results for the temperature at specific monitoring points were compared with experimental data. Average error rates of 12.2~14.2% were found, and the numerical results for the temperature distribution are in good agreement with the experimental data. The results of the present study can serve as useful information for the design of various greenhouses. This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, Forestry and Fisheries (IPET) through the Advanced Production Technology Development Program, funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA) (315093-03).
Keywords: single-span greenhouse, CFD (computational fluid dynamics), solar radiation, multiband model, absorption coefficient
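The wavelength-dependent attenuation of the cover follows the Beer-Lambert law, I/I₀ = exp(-kL). A minimal sketch with assumed (not the paper's) band-averaged absorption coefficients for a hypothetical 0.2 mm vinyl cover:

```python
import math

def transmitted_fraction(k_per_m, thickness_m):
    """Beer-Lambert law: fraction of radiation transmitted, I/I0 = exp(-k*L)."""
    return math.exp(-k_per_m * thickness_m)

# Hypothetical band-averaged absorption coefficients (1/m) for the vinyl cover,
# one value per wavelength band of the multiband model.
bands = {"visible": 50.0, "near-IR": 200.0, "far-IR": 800.0}
for band, k in bands.items():
    print(f"{band}: {transmitted_fraction(k, 0.0002):.3f}")
```

The same per-band exponential attenuation is what a multiband radiation model evaluates, band by band, through the cover material.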
Procedia PDF Downloads 136
950 Assessing the Mass Concentration of Microplastics and Nanoplastics in Wastewater Treatment Plants by Pyrolysis Gas Chromatography−Mass Spectrometry
Authors: Yanghui Xu, Qin Ou, Xintu Wang, Feng Hou, Peng Li, Jan Peter van der Hoek, Gang Liu
Abstract:
The level and removal of microplastics (MPs) in wastewater treatment plants (WWTPs) have been well evaluated in terms of particle number, while the mass concentrations of MPs and especially nanoplastics (NPs) remain unclear. In this study, microfiltration, ultrafiltration and hydrogen peroxide digestion were used to extract MPs and NPs in different size ranges (0.01−1, 1−50, and 50−1000 μm) across the whole treatment schemes in two WWTPs. By identifying specific pyrolysis products, pyrolysis gas chromatography−mass spectrometry was used to quantify the mass concentrations of six selected polymer types (i.e., polymethyl methacrylate (PMMA), polypropylene (PP), polystyrene (PS), polyethylene (PE), polyethylene terephthalate (PET), and polyamide (PA)). The mass concentrations of total MPs and NPs decreased from 26.23 and 11.28 μg/L in the influent to 1.75 and 0.71 μg/L in the effluent, with removal rates of 93.3 and 93.7% in plants A and B, respectively. Among them, PP, PET and PE were the dominant polymer types in wastewater, while PMMA, PS and PA accounted for only a small part. The mass concentrations of NPs (0.01−1 μm) were much lower than those of MPs (>1 μm), accounting for 12.0−17.9 and 5.6−19.5% of the total MPs and NPs, respectively. Notably, the removal efficiency differed with polymer type and size range. The low-density MPs (e.g., PP and PE) had lower removal efficiencies than the high-density PET in both plants. Since smaller particles can pass the tertiary sand filter or membrane filter more easily, the removal efficiency of NPs was lower than that of the larger MPs. Based on the annual wastewater effluent discharge, it is estimated that about 0.321 and 0.052 tons of MPs and NPs were released into the river each year. 
Overall, this study investigated the mass concentration of MPs and NPs over a wide size range of 0.01−1000 μm in wastewater, which provides valuable information regarding the pollution level and distribution characteristics of MPs, and especially NPs, in WWTPs. However, there are limitations and uncertainties in the current study, especially regarding sample collection and MP/NP detection. The plastic items used (e.g., sampling buckets, ultrafiltration membranes, centrifugal tubes, and pipette tips) may introduce potential contamination. Additionally, the proposed method caused losses of MPs, and especially NPs, which can lead to underestimation. Further studies are recommended to address these challenges in analysing MPs/NPs in wastewater.
Keywords: microplastics, nanoplastics, mass concentration, WWTPs, Py-GC/MS
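The removal rates quoted above follow directly from the influent and effluent mass concentrations; a quick check using the abstract's reported values:

```python
# Removal efficiency from influent/effluent mass concentrations (μg/L),
# using the values reported in the abstract for plants A and B.
def removal_pct(influent, effluent):
    return 100.0 * (influent - effluent) / influent

print(f"Plant A: {removal_pct(26.23, 1.75):.1f}%")  # ~93.3%
print(f"Plant B: {removal_pct(11.28, 0.71):.1f}%")  # ~93.7%
```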
Procedia PDF Downloads 281
949 Investigating the Atmospheric Phase Distribution of Inorganic Reactive Nitrogen Species along the Urban Transect of Indo Gangetic Plains
Authors: Reema Tiwari, U. C. Kulshrestha
Abstract:
As key regulators of atmospheric oxidative capacity and secondary aerosol formation, the signatures of reactive nitrogen (Nr) emissions are becoming increasingly evident in the cascade of air pollution, acidification, and eutrophication of the ecosystem. However, their accurate estimation in the N budget remains limited by photochemical conversion processes, where the differing atmospheric residence times of gaseous (NOₓ, HNO₃, NH₃) and particulate (NO₃⁻, NH₄⁺) Nr species become imperative to their spatio-temporal evolution on a synoptic scale. The present study attempts to quantify such interactions under tropical conditions, when low anticyclonic winds favour advection from the west during winters. For this purpose, diurnal sampling was conducted using a low-volume sampler assembly, and the ambient concentrations of the Nr trace gases, along with their ionic fractions in the aerosol samples, were determined with a UV spectrophotometer and ion chromatography, respectively. The results showed a spatial gradient of the gaseous precursors with much more pronounced inter-site variability (p < 0.05) than that of their particulate fractions. These observations were consistent with limited photochemical conversion, where day-to-night (D/N) ratios below 1 for the different Nr fractions suggested an influence of boundary layer dynamics at the background site. The phase conversion processes were further corroborated by the molar ratios NOₓ/NOᵧ and NH₃/NHₓ, where incomplete titration of NOₓ and NH₃ emissions was observed irrespective of the diurnal phase along the sampling transect. Calculations with equilibrium-based approaches for an NH₃-HNO₃-NH₄NO₃ system, on the other hand, were characterized by delays in equilibrium attainment, where plots of the below-deliquescence Kₘ and Kₚ values against 1000/T confirmed the role of lower temperatures in NH₄NO₃ aerosol formation. 
These results would help not only in resolving the changing atmospheric inputs of reduced (NH₃, NH₄⁺) and oxidized (NOₓ, HNO₃, NO₃⁻) Nr estimates but also in understanding the dependence of Nr mixing ratios on local meteorological conditions.
Keywords: diurnal ratios, gas-aerosol interactions, spatial gradient, thermodynamic equilibrium
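The temperature dependence underlying such Kₚ-versus-1000/T plots can be sketched with the van 't Hoff relation, ln K = -ΔH/(RT) + ΔS/R. The thermodynamic constants below are hypothetical placeholders, chosen only to show why lower temperatures favour formation in an exothermic equilibrium like NH₃ + HNO₃ ⇌ NH₄NO₃:

```python
# van 't Hoff sketch for an exothermic formation equilibrium.
# ΔH (J/mol) and ΔS (J/(mol·K)) are hypothetical, for illustration only.
R = 8.314  # gas constant, J/(mol·K)

def ln_K(T_kelvin, dH=-180e3, dS=-500.0):
    """ln K = -ΔH/(R·T) + ΔS/R."""
    return -dH / (R * T_kelvin) + dS / R

for T in (278, 288, 298, 308):
    print(T, round(ln_K(T), 1))
# ln K grows as T falls: the equilibrium shifts toward NH4NO3 aerosol.
```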
Procedia PDF Downloads 128
948 A Longitudinal Exploration into Computer-Mediated Communication Use (CMC) and Relationship Change between 2005-2018
Authors: Laurie Dempsey
Abstract:
Relationships are considered to be beneficial for emotional wellbeing, happiness and physical health. However, they are also complicated: individuals engage in a multitude of complex and volatile relationships during their lifetime, where the change to or ending of these dynamics can be deeply disruptive. As the internet is further integrated into everyday life and relationships are increasingly mediated, Media Studies’ and Sociology’s research interests intersect and converge. This study longitudinally explores how relationship change over time corresponds with the developing UK technological landscape between 2005-2018. Since the early 2000s, the use of computer-mediated communication (CMC) in the UK has dramatically reshaped interaction. Its use has compelled individuals to renegotiate how they consider their relationships: some argue it has allowed for vast networks to be accumulated and strengthened; others contend that it has eradicated the core values and norms associated with communication, damaging relationships. This research collaborated with UK media regulator Ofcom, utilising the longitudinal dataset from their Adult Media Lives study to explore how relationships and CMC use developed over time. This is a unique qualitative dataset covering 2005-2018, where the same 18 participants partook in annual in-home filmed depth interviews. The interviews’ raw video footage was examined year-on-year to consider how the same people changed their reported behaviour and outlooks towards their relationships, and how this coincided with CMC featuring more prominently in their everyday lives. Each interview was transcribed, thematically analysed and coded using NVivo 11 software. This study allowed for a comprehensive exploration into these individuals’ changing relationships over time, as participants grew older, experienced marriages or divorces, conceived and raised children, or lost loved ones. 
It found that as technology developed between 2005-2018, everyday CMC use was increasingly normalised and incorporated into relationship maintenance. It played a crucial role in altering relationship dynamics, even contributing to the breakdown of several ties. Three key relationships were identified as being shaped by CMC use: parent-child; extended family; and friendships. Over the years there were substantial instances of relationship conflict: for parents renegotiating their dynamic with their child as they tried to both restrict and encourage their child’s technology use; for estranged family members ‘forced’ together in the online sphere; and for friendships compelled to publicly display their relationship on social media for fear of social exclusion. However, it was also evident that CMC acted as a crucial lifeline for these participants, providing opportunities to strengthen and maintain their bonds via previously unachievable means, across both time and distance. A longitudinal study of this length and nature following the same participants does not otherwise exist; it therefore provides crucial insight into how and why relationship dynamics alter over time. This topical piece of research draws together Sociology and Media Studies, illustrating how the UK’s changing technological landscape can reshape one of the most basic human compulsions. The collaboration with Ofcom allows for insight that can be utilised in academia and policymaking alike, making this research relevant and impactful across a range of academic fields and industries.
Keywords: computer mediated communication, longitudinal research, personal relationships, qualitative data
Procedia PDF Downloads 121
947 Knowledge of Risk Factors and Health Implications of Fast Food Consumption among Undergraduate in Nigerian Polytechnic
Authors: Adebusoye Michael, Anthony Gloria, Fasan Temitope, Jacob Anayo
Abstract:
Background: The culture of fast food consumption has gradually become a common lifestyle in Nigeria, especially among young people in urban areas, in spite of the associated adverse health consequences. The adolescent pattern of fast food consumption, and the perception of this practice as a risk factor for Non-Communicable Diseases (NCDs), have not been fully explored. This study was designed to assess the fast food consumption pattern, and its perception as a risk factor for NCDs, among undergraduates of Federal Polytechnic, Bauchi. Methodology: The study was descriptive and cross-sectional in design. One hundred and eighty-five students were recruited using a systematic random sampling method from the two halls of residence. A structured questionnaire was used to assess the consumption pattern of fast foods. Data collected from the questionnaires were analysed using the Statistical Package for the Social Sciences (SPSS), version 16. Simple descriptive statistics, such as frequency counts and percentages, were used to interpret the data. Results: The age range of respondents was 18-34 years; 58.4% were male, 93.5% were single, and 51.4% of their parents were employed. All respondents (100%) were aware of fast foods, and 75% acknowledged their implications for NCDs. The fast foods consumed included meat pie (4.9%), beef roll/sausage (2.7%), egg roll (13.5%), doughnut (16.2%), noodles (18%) and carbonated drinks (3.8%). 30.3% consumed fast food three times a week, and 71% attributed high consumption to workload. 
Conclusion: It was revealed that social pressure from peers, time constraints, class pressure and the school programme strongly influenced the high proportion of higher-institution students consuming fast foods. Nutrition education campaigns for campus food outlets and vendors, together with behavioural change communication on healthy nutrition and lifestyles among young people, are therefore advocated.
Keywords: fast food consumption, Nigerian polytechnic, risk factors, undergraduate
Procedia PDF Downloads 471
946 Regenerating Habitats. A Housing Based on Modular Wooden Systems
Authors: Rui Pedro de Sousa Guimarães Ferreira, Carlos Alberto Maia Domínguez
Abstract:
Despite the ambitions to achieve climate neutrality by 2050 and fulfill the Paris Agreement's goals, the building and construction sector remains one of the most resource-intensive and greenhouse-gas-emitting industries in the world, accounting for 40% of worldwide CO₂ emissions. Over the past few decades, globalization and population growth have led to an exponential rise in demand in the housing market and, by extension, in the building industry. Considering this housing crisis, it is obvious that we will not stop building in the near future. However, the transition, which has already started, is challenging and complex because it calls for the worldwide participation of numerous organizations in altering how building systems, which have been a part of our everyday existence for over a century, are used. Wood is one of the most frequently used alternatives nowadays (under responsible forestry conditions) because of its physical qualities and, most importantly, because it produces fewer carbon emissions during manufacturing than steel or concrete. Furthermore, as wood retains its capacity to store CO₂ after application and throughout the life of the building, working as a natural carbon sink, it helps to reduce greenhouse gas emissions. After a century-long focus on other materials, technological advancements over the last few decades have made it possible to innovate systems centered on the use of wood. However, some questions still require further exploration. It is necessary to standardize production and manufacturing processes based on prefabrication and modularization principles to achieve greater precision and optimization of the solutions, decreasing building time, costs, and raw-material waste. In addition, this approach will make it possible to develop new architectural solutions to address the rigidity and irreversibility of buildings, two of the most important issues facing housing today. 
Most current models are still created as inflexible, fixed, monofunctional structures that discourage any kind of regeneration, based on matrices that sustain the traditional model of the conventional family and founded on rigid, impenetrable compartmentalization. Adaptability and flexibility in housing are, and always have been, necessities and key components of architecture. People today need to constantly adapt to their surroundings and to themselves because of the fast-paced, disposable, and quickly obsolescent nature of modern items. Migrations on a global scale, different kinds of co-housing, and even personal changes are some of the new questions that buildings have to answer. Designing with the reversibility of construction systems and materials in mind not only allows for "looping" in construction, with environmental advantages that enable the development of a circular economy in the sector, but also unleashes multiple social benefits. In this sense, it is imperative to develop prefabricated and modular construction systems able to formalize a reversible proposition that adjusts to the scale of time and its multiple, often unpredictable, reformulations. We must allow buildings to change, grow, or shrink over their lifetime, respecting their nature and, finally, the nature of the people living in them. The ability to anticipate the unexpected, adapt to social factors, and account for demographic shifts in society to stabilize communities is the foundation of genuinely innovative sustainability.
Keywords: modular, timber, flexibility, housing
Procedia PDF Downloads 79
945 Teaching Children about Their Brains: Evaluating the Role of Neuroscience Undergraduates in Primary School Education
Authors: Clea Southall
Abstract:
Many children leave primary school having formed preconceptions about their relationship with science. Thus, primary school represents a critical window for stimulating scientific interest in younger children. Engagement relies on the provision of hands-on activities coupled with an ability to capture a child’s innate curiosity. This requires children to perceive science topics as interesting and relevant to their everyday life. Teachers and pupils alike have suggested the school curriculum be tailored to help stimulate scientific interest. Young children are naturally inquisitive about the human body; the brain is one topic which frequently engages pupils, although it is not currently included in the UK primary curriculum. Teaching children about the brain could have wider societal impacts such as increasing knowledge of neurological disorders. However, many primary school teachers do not receive formal neuroscience training and may feel apprehensive about delivering lessons on the nervous system. This is exacerbated by a lack of educational neuroscience resources. One solution is for undergraduates to form partnerships with schools, delivering engaging lessons and supplementing teacher knowledge. The aim of this project was to evaluate the success of a short lesson on the brain delivered by an undergraduate neuroscientist to primary school pupils. Prior to entering schools, semi-structured online interviews were conducted with teachers to gain pedagogical advice, and relevant websites were searched for neuroscience resources. Subsequently, a single lesson plan was created comprising four hands-on activities. The activities were devised in a top-down manner, beginning with learning about the brain as an entity before focusing on individual neurons. Students were asked to label a ‘brain map’ to assess prior knowledge of brain structure and function. 
They viewed animal brains and created ‘pipe-cleaner neurons’, which were later used to depict electrical transmission. The same session was delivered by an undergraduate student to 570 key stage 2 (KS2) pupils across five schools in Leeds, UK. Post-session surveys, designed for teachers and pupils respectively, were used to evaluate the session. Children in all year groups had relatively poor knowledge of brain structure and function at the beginning of the session. When asked to label four brain regions with their respective functions, older pupils labeled a mean of 1.5 (± 1.0) brain regions compared to 0.8 (± 0.96) for younger pupils (p=0.002). However, by the end of the session, 95% of pupils felt their knowledge of the brain had increased. Hands-on activities were rated most popular by pupils and were considered the most successful aspect of the session by teachers. Although only half the teachers were aware of neuroscience educational resources, nearly all (95%) felt they would have more confidence in teaching a similar session in the future. All teachers felt the session was engaging and that the content could be linked to the current curriculum. Thus, a single fifty-minute session can successfully enhance pupils’ knowledge of a new topic: the brain. Partnerships with an undergraduate student can provide an alternative method for supplementing teacher knowledge, increasing teachers' confidence in delivering future lessons on the nervous system.
Keywords: education, neuroscience, primary school, undergraduate
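The comparison of older and younger pupils' labeling scores (1.5 ± 1.0 vs 0.8 ± 0.96, p = 0.002) rests on a two-sample test. A sketch of Welch's t-statistic follows; the group sizes are assumed for illustration (an even split of the 570 pupils is not stated in the abstract), and only the means and SDs come from the text:

```python
import math

# Welch's t-statistic for two independent samples given summary statistics
# (mean, SD, n per group). The abstract supplies the means and SDs; the
# group sizes here are hypothetical.
def welch_t(m1, s1, n1, m2, s2, n2):
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

t = welch_t(1.5, 1.0, 285, 0.8, 0.96, 285)
print(round(t, 2))
```

Turning the statistic into a p-value additionally requires the Welch-Satterthwaite degrees of freedom and the t-distribution's CDF.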
Procedia PDF Downloads 211
944 Potential of Polyphenols from Tamarix Gallica towards Common Pathological Features of Diabetes and Alzheimer’s Diseases
Authors: Asma Ben Hmidene, Mizuho Hanaki, Kazuma Murakami, Kazuhiro Irie, Hiroko Isoda, Hideyuki Shigemori
Abstract:
Type 2 diabetes mellitus (T2DM) and Alzheimer’s disease (AD) are characterized as a peripheral metabolic disorder and a degenerative disease of the central nervous system, respectively. It is now widely recognized that T2DM and AD share many pathophysiological features, including impaired glucose metabolism, increased oxidative stress and amyloid aggregation. Amyloid beta (Aβ) is the component of the amyloid deposits in the AD brain, while the component of the amyloidogenic peptide deposits in the pancreatic islets of Langerhans is identified as human islet amyloid polypeptide (hIAPP). These two proteins originate from the amyloid precursor protein and have a high sequence similarity. Although the amino acid sequences of amyloidogenic proteins are diverse, they all adopt a similar structure in aggregates, called the cross-β-spine. In addition, extensive studies in past years have found that, like Aβ1-42, IAPP forms early intermediate assemblies as spherical oligomers, implicating that these oligomers possess a common folding pattern or conformation. These similarities can be exploited in the search for effective pharmacotherapy for DM, since potent therapeutic agents such as antioxidants with a catechol moiety, proven to inhibit Aβ aggregation, may also play a key role in inhibiting hIAPP aggregation in the treatment of patients with DM. Tamarix gallica is one of the halophyte species with a powerful antioxidant system. Although it has traditionally been used for the treatment of various liver metabolic disorders, there is no report on the use of this plant for the treatment or prevention of T2DM and AD. Therefore, the aim of this work is to investigate its protective effect towards T2DM and AD through the isolation and identification of α-glucosidase inhibitors with antioxidant potential, which play an important role in glucose metabolism in diabetic patients, as well as inhibitors of hIAPP polymerization and Aβ aggregation. 
A structure-activity relationship study was conducted for both assays. As for the α-glucosidase inhibitors, their mechanism of action and their synergistic potential when applied with a very low concentration of acarbose were also studied, suggesting that they can not only be used as α-glucosidase inhibitors but also be combined with established α-glucosidase inhibitors to reduce their adverse effects. The antioxidant potential of the purified substances was evaluated by DPPH and SOD assays. A Th-T assay, using the 42-mer amyloid β-protein (Aβ42) for AD and hIAPP, a 37-residue peptide secreted by the pancreatic β-cells, for T2DM, together with transmission electron microscopy (TEM), was conducted to evaluate the amyloid aggregation inhibition of the active substances. For α-glucosidase, p-NPG and glucose oxidase assays were performed to determine the inhibition potential and the structure-activity relationship. An enzyme kinetics protocol was used to study the mechanism of action. From this research, it was concluded that polyphenols playing a role in glucose metabolism and oxidative stress can also inhibit amyloid aggregation, and that substances with catechol and glucuronide moieties that inhibit amyloid-β aggregation might also be used to inhibit the aggregation of hIAPP.
Keywords: α-glucosidase inhibitors, amyloid aggregation inhibition, mechanism of action, polyphenols, structure activity relationship, synergistic potential, tamarix gallica
Procedia PDF Downloads 279
943 Superchaotropicity: Grafted Surface to Probe the Adsorption of Nano-Ions
Authors: Raimoana Frogier, Luc Girard, Pierre Bauduin, Diane Rebiscoul, Olivier Diat
Abstract:
Nano-ions (NIs) are ionic species or clusters of nanometric size. Their low charge density and the delocalization of their charges give special properties to some NIs belonging to the chemical classes of polyoxometalates (POMs) or boron clusters. They have the particularity of interacting non-covalently with neutral hydrated surfaces or interfaces such as assemblies of surface-active molecules (micelles, vesicles, lyotropic liquid crystals), foam bubbles or emulsion droplets. This makes it possible to classify these NIs in the Hofmeister series as superchaotropic ions. The mechanism of adsorption is complex, linked to the simultaneous dehydration of the ion and of the molecule or supramolecular assembly with which it interacts, with an enthalpic gain in the free energy of the system. This interaction process is reversible and is sufficiently pronounced to induce changes in molecular and supramolecular shape or conformation and phase transitions in the liquid phase, all at sub-millimolar ionic concentrations. This new property of some NIs opens up new possibilities for applications in fields as varied as biochemistry for solubilization, recovery of metals of interest by foams in the form of NIs... In order to better understand the physico-chemical mechanisms at the origin of this interaction, we use silicon wafers functionalized with non-ionic oligomers (polyethylene glycol chains, or PEG) to study in situ, by X-ray reflectivity, the interaction of NIs with the grafted chains. This study, carried out at the ESRF (European Synchrotron Radiation Facility), has shown that the adsorption of NIs such as POMs has very fast kinetics. Moreover, the distribution of the NIs in the grafted PEG chain layer was quantified. These results are very encouraging and confirm what has been observed at soft interfaces such as micelles or foams. 
The possibility of tuning the density, length and chemical nature of the grafted chains makes this system an ideal tool for providing the kinetic and thermodynamic information needed to decipher the complex mechanisms at the origin of this adsorption.
Keywords: adsorption, nano-ions, solid-liquid interface, superchaotropicity
Procedia PDF Downloads 67
942 Mapping Context, Roles, and Relations for Adjudicating Robot Ethics
Authors: Adam J. Bowen
Abstract:
Should robots have rights or legal protections? Debates concerning whether robots and AI should be afforded rights often focus on conditions of personhood and the possibility of future advanced forms of AI satisfying particular intrinsic cognitive and moral attributes of rights-holding persons. Such discussions raise compelling questions about machine consciousness, autonomy, and value alignment with human interests. Although these are important theoretical concerns, especially from a future design perspective, they provide limited guidance for addressing the moral and legal standing of current and near-term AI that operates well below the cognitive and moral agency of human persons. Robots and AI are already being pressed into service in a wide range of roles, especially in healthcare and biomedical contexts. The design and large-scale implementation of robots in the context of core societal institutions like healthcare systems continues to develop rapidly. For example, we bring them into our homes, hospitals, and other care facilities to assist in care for the sick, disabled, elderly, children, or otherwise vulnerable persons. We enlist surgical robotic systems in precision tasks, albeit still as human-in-the-loop technology controlled by surgeons. We also entrust them with social roles involving companionship and even assistance in intimate caregiving tasks (e.g., bathing, feeding, turning, medicine administration, monitoring, transporting). There have been advances enabling severely disabled persons to use robots to feed themselves or to pilot robot avatars to work in service industries. As the applications for near-term AI increase and the roles of robots in restructuring our biomedical practices expand, we face pressing questions about the normative implications of human-robot interactions and collaborations in our collective worldmaking, as well as the moral and legal status of robots. 
This paper argues that robots operating in public and private spaces should be afforded some protections, as either moral patients or legal agents, to establish prohibitions on robot abuse, misuse, and mistreatment. We already implement robots and embed them in our practices and institutions, which generates a host of human-to-machine and machine-to-machine relationships. As we interact with machines, whether in service contexts, medical assistance, or home health companionship, these robots are first encountered in relationship to us and our respective roles in the encounter (e.g., surgeon, physical or occupational therapist, recipient of care, patient’s family, healthcare professional, stakeholder). This proposal outlines a framework for establishing limiting factors and determining the extent of moral or legal protections for robots. In doing so, it advocates a relational approach that prioritizes mapping the complex, contextually sensitive roles played and the relations in which humans and robots stand, to guide policy determinations by relevant institutions and authorities. The relational approach must also be technically informed by the intended uses of the biomedical technologies in question, Design History Files, extensive risk assessments and hazard analyses, as well as use-case social impact assessments. Keywords: biomedical robots, robot ethics, robot laws, human-robot interaction
Procedia PDF Downloads 120941 Genotypic and Allelic Distribution of Polymorphic Variants of Gene SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) and Their Association to the Clinical Response to Metformin in Adult Pakistani T2DM Patients
Authors: Sadaf Moeez, Madiha Khalid, Zoya Khalid, Sania Shaheen, Sumbul Khalid
Abstract:
Background: Inter-individual variation in response to metformin, a first-line therapy for T2DM, is considerable. The current study aimed to investigate the impact of two genetic variants, Leu125Phe (rs77474263) and Gly64Asp (rs77630697), in gene SLC47A1 on the clinical efficacy of metformin in Pakistani T2DM patients. Methods: The study included 800 T2DM patients (400 metformin responders and 400 metformin non-responders) along with 400 ethnically matched healthy individuals. Genotypes were determined by allele-specific polymerase chain reaction. In-silico analysis was done to confirm the effect of the two SNPs on the structure of the genes. Association was statistically determined using SPSS software. Results: The minor allele frequencies for rs77474263 and rs77630697 were 0.13 and 0.12. For SLC47A1 rs77474263, heterozygous carriers of the mutant ‘T’ allele (CT) were fewer among metformin responders than non-responders (29.2% vs. 35.5%). Likewise, efficacy was further reduced (7.2% vs. 4.0%) in homozygous carriers of two copies of the ‘T’ allele (TT). Remarkably, T2DM cases with two copies of allele ‘C’ (CC) were 2.11 times more likely to respond to metformin monotherapy. For SLC47A1 rs77630697, heterozygous carriers of the mutant ‘A’ allele (GA) were fewer among metformin responders than non-responders (33.5% vs. 43.0%). Likewise, efficacy was further reduced (8.5% vs. 4.5%) in homozygous carriers of two copies of the ‘A’ allele (AA). Remarkably, T2DM cases with two copies of allele ‘G’ (GG) were 2.41 times more likely to respond to metformin monotherapy. In-silico analysis revealed that these two variants affect the structure and stability of their corresponding proteins. 
Conclusion: The present data suggest that the SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) polymorphisms are associated with the therapeutic response to metformin in T2DM patients of Pakistan. Keywords: diabetes, T2DM, SLC47A1, Pakistan, polymorphism
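Estimates like the 2.11-fold greater likelihood of response for CC carriers come from a 2x2 contingency table of genotype by response. A minimal sketch, using hypothetical counts rather than the study's raw data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = responders with genotype CC, b = non-responders with CC,
    c = responders carrying the T allele, d = non-responders carrying T."""
    return (a * d) / (b * c)

# Hypothetical counts out of 400 responders and 400 non-responders
# (illustrative only; the abstract reports percentages, not raw counts).
or_cc = odds_ratio(254, 242, 146, 158)
print(round(or_cc, 2))
```

With the study's actual genotype counts, the same computation would reproduce the reported odds ratios.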
Procedia PDF Downloads 159940 Development of PCL/Chitosan Core-Shell Electrospun Structures
Authors: Hilal T. Sasmazel, Seda Surucu
Abstract:
Skin tissue engineering is a promising field for the treatment of skin defects using scaffolds. This approach involves the use of living cells and biomaterials to restore, maintain, or regenerate tissues and organs in the body by providing: (i) a larger surface area for cell attachment, (ii) proper porosity for cell colonization and cell-to-cell interaction, and (iii) 3-dimensionality at the macroscopic scale. Recent studies in this area mainly focus on the fabrication of scaffolds that can closely mimic the natural extracellular matrix (ECM) to create a tissue-specific niche-like environment at the subcellular scale. Scaffolds designed as ECM-like architectures incorporate into the host with minimal scarring/pain and facilitate angiogenesis. This study combines the synthetic polymer PCL and the natural polymer chitosan to form 3D PCL/chitosan core-shell structures for skin tissue engineering applications. Amongst the polymers used in tissue engineering, the natural polymer chitosan and the synthetic polymer poly(ε-caprolactone) (PCL) are widely preferred in the literature. Chitosan has long attracted researchers because of its superior biocompatibility and structural resemblance to the glycosaminoglycans of bone tissue. However, its low mechanical flexibility and limited biodegradability reveal the necessity of using this polymer in a composite structure. On the other hand, PCL is a versatile polymer due to its low melting point (60°C), ease of processability, degradability by non-enzymatic processes (hydrolysis) and good mechanical properties. Nevertheless, there are also several disadvantages of PCL, such as its hydrophobic structure, limited bio-interaction and susceptibility to bacterial biodegradation. Therefore, it became crucial to use these two polymers together as a hybrid material in order to overcome the disadvantages of each and combine their advantages. 
The scaffolds here were fabricated using the electrospinning technique, and the samples were characterized by contact angle (CA) measurements, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS). Additionally, gas permeability tests, mechanical tests, thickness measurements and PBS absorption and shrinkage tests were performed for all types of scaffolds (PCL, chitosan and PCL/chitosan core-shell). Using the ImageJ launcher software program (USA) on SEM photographs, the average fiber diameter values were calculated as 0.717±0.198 µm for PCL, 0.660±0.070 µm for chitosan and 0.412±0.339 µm for PCL/chitosan core-shell structures. Additionally, the average inter-fiber pore size values exhibited decreases of 66.91% and 61.90% for the PCL and chitosan structures, respectively, compared to PCL/chitosan core-shell structures. TEM images proved that homogeneous, continuous, bead-free core-shell fibers were obtained. XPS analysis of the PCL/chitosan core-shell structures exhibited the characteristic peaks of the PCL and chitosan polymers. The measured average gas permeability value of the produced PCL/chitosan core-shell structure was determined to be 2315±3.4 g·m⁻²·day⁻¹. In the future, cell-material interactions of the developed PCL/chitosan core-shell structures will be investigated with the L929 ATCC CCL-1 mouse fibroblast cell line. A standard MTT assay and microscopic imaging methods will be used to investigate the cell attachment, proliferation and growth capacities of the developed materials. Keywords: chitosan, coaxial electrospinning, core-shell, PCL, tissue scaffold
Procedia PDF Downloads 481939 Prevalence and Associated Factors of Protein-Energy Malnutrition Among Children Aged 6-59 Months in Babile Town from April to June 2016
Authors: Tajudin Ahmed
Abstract:
Malnutrition is a significant problem in developing countries, particularly among children, due to inadequate diets, lack of proper care, and unequal distribution of food within households. High rates of malnutrition have been reported in Ethiopia, including stunting, underweight, and wasting. This study aims to assess the prevalence and associated factors of Protein-Energy Malnutrition (PEM) among children aged 6-59 months in Babile Town. The study utilized a community-based cross-sectional design conducted in Babile Town, Eastern Ethiopia. Two kebeles were randomly selected, and a census was conducted to identify eligible households. A total of 391 households with children aged 6-59 months were included in the study. Data were collected using structured questionnaires, and anthropometric measurements were taken to assess the weight and height of the children. The study found that most of the mothers (72.34%) and many of the fathers (43%) had no formal education. Among the mothers who could read and write, a small percentage had completed primary (14%) or secondary (14%) education, and even fewer had higher education (2.7%). Among the fathers who could read and write, a majority had completed primary (46.15%) or secondary (27.22%) education, with smaller percentages completing preparatory (8.4%) or higher education (6.29%). The prevalence of malnutrition in the study area was high, with 38.85% of children experiencing stunting (8.2% severely stunted), 50.13% wasting (9% severely wasted), and 41.43% underweight (6.65% severely underweight). These findings indicate a significant burden of malnutrition in Babile Town, likely exacerbated by the high prevalence of infectious diseases such as diarrhea. The study concludes that the prevalence of malnutrition, particularly stunting, wasting, and underweight, is high in Babile Town. 
The findings indicate the urgent need for interventions to address malnutrition and improve nutrition and healthcare practices in the study area. These results can serve as a baseline for future studies and inform policymakers and healthcare providers in their efforts to combat childhood malnutrition. Keywords: protein-energy malnutrition, children aged 6-59 months, Babile Town, marasmus
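Stunting, wasting, and underweight figures like those above are conventionally computed from WHO growth-standard z-scores, flagging children below -2 (moderate) or -3 (severe). A minimal sketch using hypothetical z-scores, not the study's data:

```python
def prevalence(z_scores, cutoff=-2.0):
    """Percentage of children whose anthropometric z-score falls below cutoff."""
    flagged = sum(1 for z in z_scores if z < cutoff)
    return 100.0 * flagged / len(z_scores)

# Hypothetical height-for-age z-scores for eight children
haz = [-3.1, -2.4, -1.0, 0.2, -2.9, -0.5, -2.1, 1.1]
print(prevalence(haz))        # stunting: z < -2
print(prevalence(haz, -3.0))  # severe stunting: z < -3
```

The same cut-offs applied to weight-for-height and weight-for-age z-scores yield the wasting and underweight prevalences, respectively.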
Procedia PDF Downloads 57938 The 10,000-Fold Effect of Retrograde Neurotransmission: A New Concept for Cerebral Palsy Revival by the Use of Nitric Oxide Donors
Authors: V. K. Tewari, M. Hussain, H. K. D. Gupta
Abstract:
Background: Nitric oxide donors (NODs) (intrathecal sodium nitroprusside (ITSNP) and oral tadalafil 20 mg post-ITSNP) have been studied in cerebral palsy patients for fast recovery. This work proposes two mechanisms for acute cases and one mechanism for chronic cases, which are interrelated, for physiological recovery. a) Retrograde neurotransmission (acute cases): 1) Normal excitatory impulse: at the synaptic level, glutamate activates NMDA receptors, with nitric oxide synthase (NOS) on the postsynaptic membrane, for further propagation by the calcium-calmodulin complex. Nitric oxide (NO, produced by NOS) travels backward across the chemical synapse and binds the axon-terminal NO receptor/sGC of the presynaptic neuron, regulating anterograde neurotransmission (ANT) via retrograde neurotransmission (RNT). Heme is the ligand-binding site of the NO receptor/sGC. Heme exhibits >10,000-fold higher affinity for NO than for oxygen (the 10,000-fold effect), and binding is completed in 20 msec. 2) Pathological conditions: normal synaptic activity, including both ANT and RNT, is absent. A NO donor (SNP) releases NO from NOS in the postsynaptic region. NO travels backward across the chemical synapse to bind to the heme of a NO receptor in the axon terminal of the presynaptic neuron, generating an impulse, as under normal conditions. b) Vasospasm (acute cases): Perforators show vasospastic activity. NO vasodilates the perforators via the NO-cGMP pathway. c) Long-term potentiation (LTP) (chronic cases): The NO-cGMP pathway plays a role in LTP at many synapses throughout the CNS and at the neuromuscular junction. LTP has been reviewed both generally and with respect to brain regions specific for memory/learning. 
Aims/Study Design: The principles of “generation of impulses from the presynaptic region to the postsynaptic region by very potent RNT (the 10,000-fold effect)” and “vasodilation of arteriolar perforators” are the basis of the authors’ hypothesis to treat cerebral palsy cases. Case-control prospective study. Materials and Methods: The experimental population included 82 cerebral palsy patients (10 patients were given control treatments without NOD or with 5% dextrose superfusion, and 72 patients comprised the NOD group). The mean time of superfusion was 5 months post-cerebral palsy. Pre- and post-NOD status was monitored by the Gross Motor Function Classification System for Cerebral Palsy (GMFCS), MRI, and TCD studies. Results: After 7 days in the NOD group, the mean GMFCS score increased by 1.2 points; after 3 months, it had increased by 3.4 points, compared to a 0.1-point increase in the control group at 3 months. MRI and TCD documented the improvements. Conclusions: NOD treatment (ITSNP boosts recovery, and oral tadalafil maintains it at the desired level) acts swiftly in CP, taking effect within 7 days even 5 months post-cerebral palsy, through any of the three mechanisms above. Keywords: cerebral palsy, intrathecal sodium nitroprusside, oral tadalafil, perforators, vasodilation, retrograde transmission, the 10,000-fold effect, long-term potentiation
Procedia PDF Downloads 362937 Numerical Analysis of Heat Transfer in Water Channels of the Opposed-Piston Diesel Engine
Authors: Michal Bialy, Marcin Szlachetka, Mateusz Paszko
Abstract:
This paper discusses the CFD results of heat transfer in the water channels of the engine body. The research engine was a newly designed Diesel combustion engine. The engine has three cylinders with three pairs of opposed pistons inside. The engine will be able to generate 100 kW of mechanical power at a crankshaft speed of 3,800-4,000 rpm. The water channels run in the engine body along the axes of the three cylinders, around the three combustion chambers. The water channels transfer the combustion heat generated in the cylinders to the external radiator. This CFD research was based on the ANSYS Fluent software and aimed to optimize the geometry of the water channels so that they maximize heat flow from the combustion chamber to the external radiator. Based on the parallel simulation research, the boundary and initial conditions enabled us to specify average values of key parameters for our numerical analysis. Our simulation used the averaged momentum equations with the two-equation k-epsilon turbulence model, specifically the realizable k-epsilon model with standard wall functions. The turbulence intensity factor was 10%. The working fluid mass flow rate was set to a single typical value, specified in line with research into the flow rates of automotive engine cooling pumps used in engines of similar power. The research uses a series of geometric models which differ, for instance, in the shape of the cross-section of the channel along the axis of the cylinder. The results are presented as colourful distribution maps of temperature, velocity fields and heat flow through the cylinder walls. Due to limitations of space, our paper presents the results for the most representative geometric model only. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No. 
POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development. Keywords: ANSYS Fluent, combustion engine, computational fluid dynamics (CFD), cooling system
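As a rough plausibility check on channel heat transfer (not the paper's CFD, which resolves the full flow field), the Dittus-Boelter correlation Nu = 0.023 Re^0.8 Pr^0.4 estimates the convective coefficient in a turbulent cooling channel; all values below are hypothetical:

```python
def dittus_boelter_h(re, pr, k_fluid, d_hydraulic):
    """Convective heat transfer coefficient h in W/(m^2*K) from the
    Dittus-Boelter correlation for a fluid being heated."""
    nu = 0.023 * re**0.8 * pr**0.4  # Nusselt number
    return nu * k_fluid / d_hydraulic

# Hypothetical channel conditions: water at ~50 degC (Pr ~ 4.3,
# k ~ 0.64 W/(m*K)), Re = 5e4, hydraulic diameter 8 mm.
h = dittus_boelter_h(5e4, 4.3, 0.64, 0.008)
print(round(h))  # on the order of 2e4 W/(m^2*K)
```

Such hand estimates only bound the answer; the geometry optimization described above requires the full CFD model.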
Procedia PDF Downloads 219936 Development of R³ UV Exposure for the UV Dose-Insensitive and Cost-Effective Fabrication of Biodegradable Polymer Microneedles
Authors: Sungmin Park, Gyungmok Nam, Seungpyo Woo, Young Choi, Sangheon Park, Sang-Hee Yoon
Abstract:
Puncturing human skin with microneedles is critically important for microneedle-mediated drug delivery. Despite extensive efforts in the past decades, the scale-up fabrication of sharp-tipped and high-aspect-ratio microneedles, especially ones made of biodegradable polymers, is still a long way off. Here, we present a UV dose-insensitive and cost-effective microfabrication method for biodegradable polymer microneedles with sharp tips and long lengths, which can pierce human skin with low insertion force. The biodegradable polymer microneedles are fabricated by polymer solution casting, where a poly(lactic-co-glycolic acid) (PLGA, 50:50) solution is coated onto a SU-8 mold prepared with a reverse, ramped, and rotational (R³) UV exposure. The R³ UV exposure is modified from the multidirectional UV exposure both to suppress UV reflection from the bottom surface without anti-reflection layers and to optimize the solvent concentration in the SU-8 photoresist, thereby achieving robust (i.e., highly insensitive to UV dose) and cost-effective fabrication of biodegradable polymer microneedles. An optical model describing the spatial distribution of the UV irradiation dose in the R³ UV exposure is also developed, both to theoretically predict the microneedle geometry fabricated with the R³ UV exposure and to demonstrate the insensitivity of the microneedle geometry to UV dose. In the experimental characterization, the microneedles fabricated with the R³ UV exposure are compared with those fabricated with a conventional method (i.e., multidirectional UV exposure). The R³ UV exposure-based microfabrication reduces the end-tip radius by a factor of 5.8 and the deviation from the ideal aspect ratio by 74.8%, compared with conventional microfabrication. The PLGA microneedles fabricated with the R³ UV exposure pierce full-thickness porcine skins successfully and are demonstrated to completely dissolve in PBS (phosphate-buffered saline). 
The findings of this study will lead to an explosive growth of the microneedle-mediated drug delivery market. Keywords: R³ UV exposure, optical model, UV dose, reflection, solvent concentration, biodegradable polymer microneedle
Procedia PDF Downloads 166935 Thermodynamic Analysis of Surface Seawater under Ocean Warming: An Integrated Approach Combining Experimental Measurements, Theoretical Modeling, Machine Learning Techniques, and Molecular Dynamics Simulation for Climate Change Assessment
Authors: Nishaben Desai Dholakiya, Anirban Roy, Ranjan Dey
Abstract:
Understanding ocean thermodynamics has become increasingly critical as Earth's oceans serve as the primary planetary heat regulator, absorbing approximately 93% of the excess heat energy from anthropogenic greenhouse gas emissions. This investigation presents a comprehensive analysis of Arabian Sea surface seawater thermodynamics, focusing specifically on heat capacity (Cp) and the thermal expansion coefficient (α), parameters fundamental to global heat distribution patterns. Through high-precision experimental measurements of ultrasonic velocity and density across varying temperature (293.15-318.15 K) and salinity (0.5-35 ppt) conditions, we characterize critical thermophysical parameters, including specific heat capacity, thermal expansion, and the isobaric and isothermal compressibility coefficients, in natural seawater systems. The study employs advanced machine learning frameworks (Random Forest, Gradient Boosting, Stacked Ensemble Machine Learning (SEML), and AdaBoost), with SEML achieving exceptional accuracy (R² > 0.99) in heat capacity predictions. The findings reveal significant temperature-dependent molecular restructuring: enhanced thermal energy disrupts hydrogen-bonded networks and ion-water interactions, manifesting as decreased heat capacity with increasing temperature (negative ∂Cp/∂T). This mechanism creates a positive feedback loop in which reduced heat absorption capacity potentially accelerates oceanic warming cycles. These quantitative insights into seawater thermodynamics provide crucial parametric inputs for climate models and evidence-based environmental policy formulation, particularly addressing the critical knowledge gap in the thermal expansion behavior of seawater under varying temperature-salinity conditions. Keywords: climate change, Arabian Sea, thermodynamics, machine learning
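The compressibility coefficients named above are conventionally derived from the two measured quantities, density and ultrasonic velocity, via the Newton-Laplace relation κ_s = 1/(ρu²). A minimal sketch using hypothetical values within the paper's measurement range (the abstract does not publish its raw data):

```python
def isentropic_compressibility(rho, u):
    """Newton-Laplace relation: kappa_s = 1/(rho * u^2).
    rho in kg/m^3, u in m/s -> kappa_s in 1/Pa."""
    return 1.0 / (rho * u * u)

# Hypothetical surface-seawater values near 298 K, ~35 ppt salinity.
kappa_s = isentropic_compressibility(1023.6, 1522.0)
print(f"{kappa_s:.3e}")  # on the order of 4e-10 1/Pa for seawater
```

Repeating this over the temperature-salinity grid yields the compressibility surfaces that feed the machine-learning models.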
Procedia PDF Downloads 6934 Detection of Temporal Change of Fishery and Island Activities by DNB and SAR on the South China Sea
Authors: I. Asanuma, T. Yamaguchi, J. Park, K. J. Mackin
Abstract:
Fishery lights on the sea surface can be detected by the Day/Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP). The DNB covers the spectral range of 500 to 900 nm and achieves high sensitivity. The DNB has difficulty distinguishing fishing lights from lunar light reflected by clouds, which affects observations for half of each month. Fishery lights and surface lights are distinguished from cloud-reflected lunar light by a method using the DNB and the infrared band, where the detection limits are defined as a function of the brightness temperature, with a difference from the maximum temperature for each level of DNB radiance and with the contrast of DNB radiance against the background radiance. Fishery boats and structures on islands can be detected by Synthetic Aperture Radar (SAR) on polar-orbiting satellites using microwaves reflected by surface targets. SAR faces a tradeoff between spatial resolution and coverage when detecting small targets like fishery boats. The distribution of fishery boats and island activities was detected with the ScanSAR narrow mode of Radarsat-2, which covers 300 km by 300 km with various combinations of polarizations. Fishing boats were detected as single pixels of strongly scattering targets with the ScanSAR narrow mode, whose spatial resolution is 30 m. As look-angle-dependent scattering signals exhibit significant differences, the standard deviations of scattered signals for each look angle were taken into account as a threshold to distinguish the signals of fishing boats and structures on the islands from background noise. It was difficult to validate the targets detected by DNB against SAR data because of the time lag of up to 6 hours between observations, at midnight for DNB and in the morning or evening for SAR. 
The temporal changes of island activities were detected as changes in the mean DNB intensity over a circular area for a certain scale of activities. An increase in mean DNB intensity corresponded to the beginning of dredging, and subsequent changes in intensity indicated the end of reclamation and the following construction of facilities. Keywords: day/night band, SAR, fishery, South China Sea
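The boat-detection step can be sketched as simple statistical thresholding: flag pixels whose backscatter exceeds the background mean by k standard deviations (the paper applies this per look angle; a single population is used here for brevity, and all values are synthetic):

```python
import numpy as np

def detect_targets(image, k=5.0):
    """Boolean mask of pixels brighter than mean + k * std of the scene."""
    mu, sigma = image.mean(), image.std()
    return image > mu + k * sigma

rng = np.random.default_rng(0)
scene = rng.normal(0.0, 1.0, (100, 100))  # synthetic background clutter
scene[50, 50] = 12.0                      # a bright point target (e.g., a boat)
mask = detect_targets(scene)
print(int(mask.sum()), bool(mask[50, 50]))
```

The choice of k trades false alarms against missed detections, which is why the paper calibrates the threshold per look angle.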
Procedia PDF Downloads 235933 A Study on Genus Carolia Cantraine, 1838: A Case Study in Egypt with Special Emphasis on Paleobiogeographic, and Biometric Context
Authors: Soheir El-Shazly, Gouda Abdel-Gawad, Yasser Salama, Dina Sayed
Abstract:
Twelve species belonging to the genus Carolia Cantraine, 1838 were recorded from nine localities in the Tertiary rocks of the Tethys, Atlantic and Eastern Pacific provinces. During the Eocene, two species were collected from the Indian-Pakistani region, two from North Africa (Libya, Tunisia and Algeria), one from Jamaica and two from Peru. The Oligocene shows its appearance in North America (Florida) and Argentina. The genus showed its last occurrence in the Miocene rocks of North America (Florida) before its extinction. In Egypt, the genus was diversified in the Eocene rocks and was represented by four species and two subspecies. The paleobiogeographic distribution of the genus Carolia Cantraine, 1838 indicates that it appeared in the Lower Eocene of the western Indian Ocean and migrated westward, following the circumtropical Tethys Current, to the central Tethyan province, where it appeared in North Africa and continued its dispersal westward to the Atlantic Ocean, reaching Jamaica in the Middle Eocene. It persisted in the Caribbean Sea and appeared later in the Oligocene and Miocene rocks of North America (Florida). Crossing the Panama corridor, the genus migrated to the southeastern Pacific Ocean and was collected from the Middle Eocene of Peru. The appearance of the genus in the Oligocene of the South Atlantic coast of Argentina may have occurred via the South America Seaway or by southward migration from Central America to the Austral Basin. The thickening of the upper valve of the genus after the loss of its byssus, to withstand current action, left the animal unable to carry on its vital activity and caused its extinction. A biometric study of Carolia placunoides Cantraine, 1838 from the Eocene of Egypt indicates that the distance between the muscle scars in the upper valve increases with the closure of the byssal notch. Keywords: Atlantic, Carolia, paleobiogeography, Tethys
Procedia PDF Downloads 358932 The Influence of Environmental Attributes on Children's Pedestrian-Crash Risk in School Zones
Authors: Jeongwoo Lee
Abstract:
Children are the most vulnerable travelers, and they are at risk for pedestrian injury. Creating a safe route to school is important because walking to school is one of the main opportunities to promote needed physical exercise among children. This study examined how the built environmental attributes near an elementary school influence traffic accidents among school-aged children. The study used two complementary data sources: the locations of police-reported pedestrian crashes and the built environmental characteristics of school areas. The environmental attributes of road segments were collected through GIS measurements of local data and site audits using an inventory developed for measuring pedestrian-crash risk scores. The inventory data were collected at 840 road segments near 32 elementary schools in the city of Ulsan. We observed all segments in a 300-meter-radius area from the entrance of each elementary school. Segments are street block faces. The inventory included 50 items, organized into four domains: accessibility (17 items), pleasurability (11 items), perceived safety from traffic (9 items), and traffic and land-use measures (13 items). Elementary schools were categorized into two groups based on the distribution of pedestrian-crash hazard index scores. A high pedestrian-crash zone was defined as a school area within the eighth, ninth, and tenth deciles, while a no-pedestrian-crash zone was defined as a school zone with no pedestrian-crash accidents among school-aged children between 2013 and 2016. No- and high pedestrian-crash zones were compared to determine whether different settings of the built environment near the school lead to different rates of pedestrian-crash incidents. The results showed that crash risk can be influenced by several environmental factors, such as the shape of the school route, the number of intersections, visibility and land use along a street, and the type of sidewalk. 
The findings inform policy for creating safe routes to school to reduce the pedestrian-crash risk among children by focusing on school zones. Keywords: active school travel, school zone, pedestrian crash, safe routes to school
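The decile-based zoning described above can be sketched as a simple percentile cut: schools whose hazard-index score falls in the top three deciles (the eighth through tenth) are flagged as high pedestrian-crash zones. The scores below are hypothetical:

```python
import numpy as np

def classify_zones(hazard_index):
    """Label scores in the top three deciles (>= 70th percentile) as 'high'."""
    cutoff = np.percentile(hazard_index, 70)
    return np.where(np.asarray(hazard_index) >= cutoff, "high", "other")

scores = [0.1, 0.3, 0.2, 0.9, 0.5, 0.7, 0.4, 0.8, 0.6, 1.0]  # hypothetical
labels = classify_zones(scores)
print(list(labels))
```

In the study itself the comparison group is defined by observed crash counts (zero crashes), not by the lower deciles, so the two groups are not simply complementary.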
Procedia PDF Downloads 245931 Development of Composition and Technology of Vincristine Nanoparticles Using High-Molecular Carbohydrates of Plant Origin
Authors: L. Ebralidze, A. Tsertsvadze, D. Berashvili, A. Bakuridze
Abstract:
Current cancer therapy strategies are based on surgery, radiotherapy and chemotherapy. The problems associated with chemotherapy are among the biggest challenges for clinical medicine. These include: low specificity, a broad spectrum of side effects, toxicity and the development of cellular resistance. Therefore, improved anti-cancer drugs need to be developed urgently. In particular, in order to increase the efficiency of anti-cancer drugs and reduce their side effects, scientists work on the formulation of nano-drugs. The objective of this study was to develop the composition and technology of vincristine nanoparticles using high-molecular carbohydrates of plant origin. Plant polysaccharides, particularly soy bean seed polysaccharides, flaxseed polysaccharides, citrus pectin, gum arabic and sodium alginate, were used as objects. Based on biopharmaceutical research, vincristine-containing nanoparticle formulations were prepared. High-energy emulsification and solvent evaporation methods were used for the preparation of the nanosystems. Polysorbate 80, polysorbate 60, sodium dodecyl sulfate, glycerol and polyvinyl alcohol were used in the formulations as emulsifying agents and stabilizers of the system. The ratio of API to polysaccharide, as well as the type of stabilizing and emulsifying agent, strongly affects the particle size of the final product. The influence of the preparation technology and the type and concentration of stabilizing agents on the properties of the nanoparticles was evaluated. In the next stage of research, the nanosystems were characterized. Physicochemical characterization of the nanoparticles (their size, shape and distribution) was performed using an atomic force microscope and a scanning electron microscope. The present study explored the possibility of producing NPs using plant polysaccharides. The optimal ratio of active pharmaceutical ingredient to plant polysaccharide and the best stabilizer and emulsifying agent were determined. 
The average size and shape of the nanoparticles were visualized by SEM. Keywords: nanoparticles, targeted delivery, natural high-molecular carbohydrates, surfactants
Procedia PDF Downloads 270930 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa
Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees
Abstract:
The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observations of solar radiation are available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a “retrospective analysis” of atmospheric parameters generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface-measured data using statistical analysis, estimating the global solar radiation (GSR) distribution over six selected locations in North Africa during the ten years from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE) and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the dataset, which reveals significant variation of errors across seasons. The performance of the dataset changes with the temporal resolution of the data used for comparison: monthly mean values show better performance, but the accuracy of the data is compromised. The solar radiation data of ERA-5 are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R²) varies from 0.93 to 0.99 for the different selected sites in North Africa in the present research. 
The goal of this research is to provide a good representation of global solar radiation to support solar energy applications in all fields, using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and developing a new model to give good results. Keywords: solar energy, solar radiation, ERA-5, potential energy
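The three validation metrics named above have standard definitions; a minimal sketch applied to hypothetical observed vs. ERA-5 values (in the same normalized units as the reported error ranges):

```python
import math

def rmse(obs, est):
    """Root mean square error between observations and estimates."""
    return math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / len(obs))

def mbe(obs, est):
    """Mean bias error; positive means the reanalysis overestimates on average."""
    return sum(e - o for o, e in zip(obs, est)) / len(obs)

def mae(obs, est):
    """Mean absolute error."""
    return sum(abs(e - o) for o, e in zip(obs, est)) / len(obs)

obs = [0.82, 0.75, 0.91, 0.68, 0.88]  # hypothetical ground measurements
est = [0.80, 0.79, 0.89, 0.72, 0.85]  # hypothetical ERA-5 estimates
print(round(rmse(obs, est), 4), round(mbe(obs, est), 4), round(mae(obs, est), 4))
```

Note that RMSE penalizes large deviations more heavily than MAE, while MBE alone can mask compensating errors, which is why the abstract reports all three.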
Procedia PDF Downloads 211929 Micromechanical Compatibility Between Cells and Scaffold Mediates the Efficacy of Regenerative Medicine
Authors: Li Yang, Yang Song, Martin Y. M. Chiang
Abstract:
Objective: To experimentally substantiate that the micromechanical compatibility between cell and scaffold, in the regenerative medicine approach for restoring bone volume, is essential for phenotypic transitions. Methods: Through nanotechnology and an electrospinning process, nanofibrous scaffolds were fabricated to host dental follicle stem cells (DFSCs). Blends (50:50) of polycaprolactone (PCL) and silk fibroin (SF), mixed with various contents of cellulose nanocrystals (CNC, up to 5% by weight), were electrospun to prepare nanofibrous scaffolds with a heterogeneous microstructure in terms of fiber size. Colloidal probe atomic force microscopy (AFM) and conventional uniaxial tensile tests measured the scaffold stiffness at the micro- and macro-scale, respectively. The cell elastic modulus and cell-scaffold adhesive interaction (i.e., a chemical function) were examined through single-cell force spectroscopy using AFM. Quantitative reverse transcription-polymerase chain reaction (qRT-PCR) was used to determine whether mechanotransduction signals (i.e., Yap1, Wwr2, Rac1, MAPK8, Ptk2 and Wnt5a) are upregulated by the scaffold stiffness at the micro-scale (cellular scale). Results: The presence of CNC produces fibrous scaffolds with a bimodal distribution of fiber diameter. This structural heterogeneity, which is CNC-composition dependent, remarkably modulates the mechanical functionality of scaffolds at the microscale and macroscale simultaneously, but not the chemical functionality (i.e., only a single material property is varied). In in vitro tests, the osteogenic differentiation and gene expression associated with mechano-sensitive cell markers correlate with the degree of micromechanical compatibility between DFSCs and the scaffold. Conclusion: Cells require compliant scaffolds to encourage energetically favorable interactions for mechanotransduction, which are converted into changes in cellular biochemistry that direct the phenotypic evolution. 
The micromechanical compatibility is indeed important to the efficacy of regenerative medicine.
Keywords: phenotype transition, scaffold stiffness, electrospinning, cellulose nanocrystals, single-cell force spectroscopy
Procedia PDF Downloads 190
928 Elastic Behaviour of Graphene Nanoplatelets Reinforced Epoxy Resin Composites
Authors: V. K. Srivastava
Abstract:
Graphene has recently attracted increasing attention in nanocomposite applications because it has 200 times the strength of steel, making it the strongest material ever tested. Graphene, as the fundamental two-dimensional (2D) carbon structure with exceptionally high crystal and electronic quality, has emerged as a rapidly rising star in the field of materials science. Graphene, defined as a 2D crystal, is composed of monolayers of carbon atoms arranged in a honeycombed network of six-membered rings, and is of interest to both theoretical and experimental researchers worldwide. The name comes from graphite and alkene. Graphite itself consists of many graphene sheets stacked together by weak van der Waals forces. The exceptional properties are attributed to the monolayer of carbon atoms densely packed into a honeycomb structure. Due to the superior inherent properties of graphene nanoplatelets (GnP) over other nanofillers, GnP particles were added to epoxy resin at varying weight percentages. The DMA results of storage modulus, loss modulus and tan δ (defined as the ratio of loss modulus to storage modulus) versus temperature were affected by the addition of GnP to the epoxy resin. In epoxy resin, damping (tan δ) is usually caused by movement of the molecular chain. The tan δ of the graphene nanoplatelets/epoxy resin composite is much lower than that of epoxy resin alone. This finding suggests that the addition of graphene nanoplatelets effectively impedes movement of the molecular chain. The decrease in storage modulus can be interpreted by an increasing susceptibility to agglomeration, leading to less energy dissipation in the system under viscoelastic deformation. The results indicate that tan δ increased with increasing temperature. Also, the results show that the nanohardness increases marginally with the increase of elastic modulus. 
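Since tan δ is simply the ratio of loss modulus to storage modulus, the reported drop in damping for the GnP composite can be illustrated with a toy calculation (the moduli below are hypothetical, not measured values from this study):

```python
def tan_delta(storage_modulus, loss_modulus):
    """Damping factor: ratio of loss modulus E'' to storage modulus E'."""
    return loss_modulus / storage_modulus

# Hypothetical DMA readings (GPa) at a single temperature
neat_epoxy = tan_delta(storage_modulus=2.8, loss_modulus=0.42)
gnp_epoxy = tan_delta(storage_modulus=3.4, loss_modulus=0.34)  # stiffer, lower damping
```

A lower tan δ for the filled resin corresponds to the impeded molecular-chain movement described above.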
GnP-filled epoxy resin shows a higher nanohardness than neat epoxy resin, because GnP improves the mechanical properties of the epoxy. Debonding of GnP is clearly observed in micrographs showing agglomeration of fillers and inhomogeneous distribution. Therefore, the DMA and nanohardness studies indicate that the elastic modulus of epoxy resin is increased by the addition of GnP fillers.
Keywords: agglomeration, elastic modulus, epoxy resin, graphene nanoplatelet, loss modulus, nanohardness, storage modulus
Procedia PDF Downloads 264
927 Identification and Correlation of Structural Parameters and Gas Accumulation Capacity of Shales From Poland
Authors: Anna Pajdak, Mateusz Kudasik, Aleksandra Gajda, Katarzyna Kozieł
Abstract:
Shales are fine-grained sedimentary rocks composed of small grains several to several dozen μm in size, consisting of a variable mixture of clay minerals, quartz, feldspars, carbonates, sulphides, amorphous material and organic matter. The study involved an analysis of the basic physical properties of shale rocks from several research wells in Poland. The structural, sorption and seepage parameters of these rocks were determined. The total porosity of granular rock samples reached several percent, including a share of closed pores of up to half a percent. The volume and distribution of pores were determined; these are of significant importance in the context of the mechanisms of methane binding to the rock matrix, methods of stimulating its desorption, and the possibility of CO₂ storage. The BET surface area of the samples ranged from a few to a dozen or so m²/g, with micropores making up the dominant share. To determine the interaction of rocks with gases, the sorption capacity with respect to CO₂ and CH₄ was measured at pressures of 0–1.4 MPa. Sorption capacities, sorption isotherms and diffusion coefficients were also determined. Studies of competitive CO₂/CH₄ sorption on shales showed a preference for CO₂ sorption over CH₄, and the selectivity of CO₂/CH₄ sorption decreased with increasing pressure. In addition to the pore structure, the adsorption capacity of gases in shale rocks is significantly influenced by the carbon content of their organic matter. The sorbed gas can constitute from 20% to 80% of the total gas contained in the shales. With increasing depth of shale gas occurrence, the ratio of free gas to sorbed gas increases, due among other factors to the increase in temperature and surrounding pressure. 
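Gas sorption on shale as a function of pressure is commonly described with a Langmuir-type isotherm (an illustrative assumption here; the abstract does not name its fitted model). The sketch below, with hypothetical parameters not fitted to the studied samples, shows how CO₂/CH₄ sorption selectivity can decrease with pressure, consistent with the trend reported above:

```python
import numpy as np

def langmuir(p, v_l, p_l):
    """Langmuir isotherm: adsorbed volume at pressure p.
    v_l is the Langmuir (maximum) volume; p_l is the Langmuir pressure
    at which half of v_l is adsorbed."""
    p = np.asarray(p, dtype=float)
    return v_l * p / (p_l + p)

pressures = np.linspace(0.1, 1.4, 5)           # MPa, within the 0-1.4 MPa range studied
v_co2 = langmuir(pressures, v_l=2.4, p_l=0.9)  # cm^3/g, hypothetical CO2 parameters
v_ch4 = langmuir(pressures, v_l=1.1, p_l=1.3)  # cm^3/g, hypothetical CH4 parameters
selectivity = v_co2 / v_ch4                    # CO2 preferred; ratio falls with pressure
```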
Determining the share of free gas relative to sorbed gas in shale, depending on the depth of its deposition, is one of the key elements in recognizing the gas/sorption exchange processes of CO₂/CH₄, which are the basis of CO₂-ESGR technology. The main objective of the work was to identify the correlation between different forms of gas occurrence in rocks and the parameters describing the pore space of shales.
Keywords: shale, CH₄, CO₂, shale gas, CO₂-ESGR, pore structure
Procedia PDF Downloads 10
926 Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the Context of Agile, Lean and Hybrid Project Management Approaches
Authors: Maria Ledinskaya
Abstract:
This paper examines the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the context of Agile, Lean, and Hybrid project management. It is part case study and part literature review. To date, relatively little has been written about non-traditional project management approaches in heritage conservation. This paper seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation, by referencing their practical application on a recent museum-based conservation project. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre for Visual Arts by private collectors Michael and Joyce Morris. The first part introduces the chronological timeline and key elements of the project. It describes a medium-size conservation project of moderate complexity, which was planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown condition and materials, unconfirmed budget. The project was also impacted by the unknown unknowns of the COVID-19 pandemic, such as indeterminate lockdowns, and the need to accommodate social distancing and remote communications. The author, a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Collection Conservation Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. Subsequent sections examine the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. 
The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment, due to its over-reliance on prediction-based planning and its low tolerance of change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Collection Conservation Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, as well as the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics, particularly with respect to change management, bespoke ethics, shared decision-making, and value-based cost-benefit conservation strategy. The author concludes that the Morris Collection Conservation Project had multiple Agile and Lean features which were instrumental to the successful delivery of the project. These key features are identified as distributed decision-making, a co-located cross-disciplinary team, servant leadership, a focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and an emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point largely in favour of a Hybrid model which combines traditional and alternative project processes and tools to suit the specific needs of the project.
Keywords: project management, conservation, waterfall, agile, lean, hybrid
Procedia PDF Downloads 99
925 Experimental Study on the Heating Characteristics of Transcritical CO₂ Heat Pumps
Authors: Lingxiao Yang, Xin Wang, Bo Xu, Zhenqian Chen
Abstract:
Due to its outstanding environmental performance, higher heating temperature and excellent low-temperature performance, the transcritical carbon dioxide (CO₂) heat pump is receiving more and more attention. However, improperly set operating parameters have a serious negative impact on the performance of the transcritical CO₂ heat pump due to the properties of CO₂. In this study, the heat transfer characteristics of the gas cooler are studied based on the modified “three-stage” gas cooler, and the effect of three operating parameters (compressor speed, gas cooler water-inlet flowrate and gas cooler water-inlet temperature) on the heating process of the system is investigated from the perspective of thermal quality and heat capacity. The results show that: in the heat transfer process of the gas cooler, the temperature distribution of CO₂ and water shows a typical “two region” and “three zone” pattern; a rise in the cooling pressure of CO₂ increases the thermal quality on the CO₂ side of the gas cooler, which in turn improves the heating temperature of the system; nevertheless, the elevated thermal quality on the CO₂ side can exacerbate the mismatch of heat capacity on the two sides of the gas cooler, thereby adversely affecting the system coefficient of performance (COP); furthermore, increasing the compressor speed mitigates the mismatch in heat capacity caused by elevated thermal quality, which is exacerbated by decreasing the gas cooler water-inlet flowrate and raising the gas cooler water-inlet temperature. As a representative case, varying the compressor speed results in a 7.1°C increase in heating temperature within the experimental range, accompanied by a 10.01% decrease in COP and an 11.36% increase in heating capacity. This study can not only provide an important reference for the theoretical analysis and control strategy of the transcritical CO₂ heat pump, but also guide related simulations and the design of the gas cooler. 
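The COP referred to above is the ratio of useful heat delivered to compressor work input. A toy calculation (the operating points are hypothetical, not the experimental data) illustrates how heating capacity can rise while COP falls as compressor speed increases:

```python
def cop_heating(heating_capacity_kw, compressor_power_kw):
    """Heating coefficient of performance: heat delivered per unit work input."""
    return heating_capacity_kw / compressor_power_kw

# Hypothetical operating points before and after raising compressor speed
baseline = cop_heating(heating_capacity_kw=8.8, compressor_power_kw=2.2)
faster = cop_heating(heating_capacity_kw=9.8, compressor_power_kw=2.72)
cop_drop_pct = (baseline - faster) / baseline * 100.0  # capacity up, COP down
```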
However, the range of experimental parameters in the current study is small, and the conclusions drawn are not further analysed quantitatively. Therefore, expanding the range of parameters studied and proposing corresponding quantitative conclusions and indicators with universal applicability could greatly increase the practical applicability of this study. This is the goal of our future research.
Keywords: transcritical CO₂ heat pump, gas cooler, heat capacity, thermal quality
Procedia PDF Downloads 19
924 Physics-Informed Neural Network for Predicting Strain Demand in Inelastic Pipes under Ground Movement with Geometric and Soil Resistance Nonlinearities
Authors: Pouya Taraghi, Yong Li, Nader Yoosef-Ghodsi, Muntaseer Kainat, Samer Adeeb
Abstract:
Buried pipelines play a crucial role in the transportation of energy products such as oil, gas, and various chemical fluids, ensuring their efficient and safe distribution. However, these pipelines are often susceptible to ground movements caused by geohazards such as landslides, fault movements, and lateral spreading. Such ground movements can lead to strain-induced failures in pipes, resulting in leaks or explosions and, in turn, fires, financial losses, environmental contamination, and even loss of human life. It is therefore essential to study how buried pipelines respond when traversing geohazard-prone areas, to assess the potential impact of ground movement on pipeline design. This study introduces a physics-informed neural network (PINN) approach to predict the strain demand in inelastic pipes subjected to permanent ground displacement (PGD). The method uses a deep learning framework that requires no training data, making it feasible to adopt more realistic assumptions regarding the existing nonlinearities; it leverages the underlying physics, described by differential equations, to approximate the solution. The study analyzes various scenarios involving different geohazard types, PGD values, and crossing angles, comparing the predictions with results obtained from the finite element method. The findings demonstrate good agreement between the results of the proposed method and the finite element method, highlighting its potential as a simulation-free, data-free, and meshless alternative. This study paves the way for further advancements, such as the simulation-free reliability assessment of pipes subjected to PGD, as part of ongoing research that leverages the proposed method.
Keywords: strain demand, inelastic pipe, permanent ground displacement, machine learning, physics-informed neural network
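The core idea behind a PINN, minimizing the squared residual of the governing equation at collocation points rather than fitting training data, can be sketched with a toy stand-in. Everything below is an illustrative assumption, not the authors' formulation: a polynomial ansatz replaces the neural network, and a simple ODE replaces the nonlinear pipe-soil equations.

```python
import numpy as np

# Stand-in governing equation: u''(x) = -sin(x), u(0) = u(pi) = 0,
# whose exact solution is u(x) = sin(x).

def basis(x, n_terms=4):
    # Polynomial trial functions; the factor x*(pi - x) enforces both
    # boundary conditions exactly, mimicking a hard-constrained PINN ansatz
    return np.stack([x * (np.pi - x) * x**i for i in range(n_terms)], axis=1)

def second_derivative(fn, x, h=1e-4):
    # Central finite difference (an autograd framework would do this exactly)
    return (fn(x + h) - 2.0 * fn(x) + fn(x - h)) / h**2

x_col = np.linspace(0.0, np.pi, 50)   # collocation points
rhs = -np.sin(x_col)                  # source term of the governing equation

# The residual u''(x) - f(x) is linear in the coefficients here, so the
# least-squares residual minimisation has a closed form; a real PINN trains
# a nonlinear network on the same residual loss by gradient descent.
d2 = second_derivative(basis, x_col)
coeffs, *_ = np.linalg.lstsq(d2, rhs, rcond=None)

x_test = np.linspace(0.0, np.pi, 200)
u_pred = basis(x_test) @ coeffs
max_err = float(np.max(np.abs(u_pred - np.sin(x_test))))
```

The same residual-loss construction extends, in principle, to the beam-on-nonlinear-soil equations of the pipe problem, with the network output playing the role of the trial solution.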
Procedia PDF Downloads 61