Search results for: rapid manufacturing
282 Ethical, Legal and Societal Aspects of Unmanned Aircraft in Defence
Authors: Henning Lahmann, Benjamyn I. Scott, Bart Custers
Abstract:
Suboptimal adoption of AI in defence organisations carries risks for the protection of the freedom, safety, and security of society. Despite the vast opportunities that defence AI technology presents, there are also a variety of ethical, legal, and societal concerns. To ensure the successful use of AI technology by the military, ethical, legal, and societal aspects (ELSA) need to be considered, and the concerns they raise continuously addressed at all levels. This includes ELSA considerations during the design, manufacturing and maintenance of AI-based systems, as well as their utilisation via appropriate military doctrine and training. This raises the question of how defence organisations can remain strategically competitive and at the edge of military innovation while respecting the values of their citizens. This paper will explain the set-up and share preliminary results of a 4-year research project commissioned by the National Research Council in the Netherlands on the ethical, legal, and societal aspects of AI in defence. The project plans to develop a future-proof, independent, and consultative ecosystem for the responsible use of AI in the defence domain. In order to achieve this, the lab shall devise a context-dependent methodology that focuses on the ‘analysis’, ‘design’ and ‘evaluation’ of ELSA of AI-based applications within the military context, which include, inter alia, unmanned aircraft. This is bolstered by the Lab’s recognition of, and intention to complement, existing methods with regard to human-machine teaming, explainable algorithms, and value-sensitive design. Such methods will be modified for the military context and applied to pertinent case studies. These case studies include, among others, the application of autonomous robots (including semi-autonomous ones) and AI-based methods against cognitive warfare. As the perception of the application of AI in the military context, by both society and defence personnel, is important, the Lab will study how these perceptions evolve and vary in different contexts. Furthermore, the Lab will monitor developments in the global technological, military and societal spheres, as these may influence people’s perception. Although the emphasis of the research project is on different forms of AI in defence, it focuses on several case studies. One of these case studies is on unmanned aircraft, which will also be the focus of the paper. Hence, ethical, legal, and societal aspects of unmanned aircraft in the defence domain will be discussed in detail, including but not limited to privacy issues. Typical other issues concern security (for people, objects, data or other aircraft), privacy (sensitive data, hindrance, annoyance, data collection, function creep), chilling effects, PlayStation mentality, and PTSD.
Keywords: autonomous weapon systems, unmanned aircraft, human-machine teaming, meaningful human control, value-sensitive design
Procedia PDF Downloads 93
281 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of entries on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low and high fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map, and furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be an order of magnitude or more of speed-up. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows for the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to the accuracy in multi-fidelity methods, however, is the incorporation of low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low and high fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
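As a companion to the abstract above, the following is a minimal sketch of the Δ-ML idea it builds on: learn the high-minus-low fidelity correction from a small set of high-fidelity labels, then predict high fidelity as low fidelity plus the learned correction. The descriptors, data, and random-forest learner are illustrative assumptions, not the paper's GCN.

```python
# Minimal sketch of the Delta-ML strategy: high fidelity = low fidelity
# + a learned correction. Descriptors, data, and the random-forest
# learner are illustrative assumptions, not the paper's GCN.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))              # hypothetical molecular descriptors
y_low = X @ rng.normal(size=16)             # cheap level (e.g., a DFT-like surrogate)
y_high = y_low + 0.3 * np.sin(3 * X[:, 0])  # expensive level = low + smooth correction

# Only a small subset carries high-fidelity labels, as in the paper's setting
labelled = rng.choice(len(X), size=50, replace=False)
corrector = RandomForestRegressor(n_estimators=200, random_state=0)
corrector.fit(X[labelled], (y_high - y_low)[labelled])  # learn the correction map

y_pred = y_low + corrector.predict(X)       # predicted high-fidelity output
print("MAE of corrected prediction:", np.mean(np.abs(y_pred - y_high)))
```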
Procedia PDF Downloads 41
280 Occipital Squama Convexity and Neurocranial Covariation in Extant Homo sapiens
Authors: Miranda E. Karban
Abstract:
A distinctive pattern of occipital squama convexity, known as the occipital bun or chignon, has traditionally been considered a derived Neandertal trait. However, some early modern and extant Homo sapiens share similar occipital bone morphology, showing pronounced internal and external occipital squama curvature and paralambdoidal flattening. It has been posited that these morphological patterns are homologous in the two groups, but this claim remains disputed. Many developmental hypotheses have been proposed, including assertions that the chignon represents a developmental response to a long and narrow cranial vault, a narrow or flexed basicranium, or a prognathic face. These claims, however, remain to be metrically quantified in a large subadult sample, and little is known about the feature’s developmental, functional, or evolutionary significance. This study assesses patterns of chignon development and covariation in a comparative sample of extant human growth study cephalograms. Cephalograms from a total of 549 European-derived North American subjects (286 male, 263 female) were scored on a 5-stage ranking system of chignon prominence. Occipital squama shape was found to exist along a continuum, with 34 subjects (6.19%) possessing defined chignons, and 54 subjects (9.84%) possessing very little occipital squama convexity. From this larger sample, those subjects represented by a complete radiographic series were selected for metric analysis. Measurements were collected from lateral and posteroanterior (PA) cephalograms of 26 subjects (16 male, 10 female), each represented at 3 longitudinal age groups. Age group 1 (range: 3.0-6.0 years) includes subjects during a period of rapid brain growth. Age group 2 (range: 8.0-9.5 years) includes subjects during a stage in which brain growth has largely ceased, but cranial and facial development continues. Age group 3 (range: 15.9-20.4 years) includes subjects at their adult stage. A total of 16 landmarks and 153 sliding semi-landmarks were digitized at each age point, and geometric morphometric analyses, including relative warps analysis and two-block partial least squares analysis, were conducted to study covariation patterns between midsagittal occipital bone shape and other aspects of craniofacial morphology. A convex occipital squama was found to covary significantly with a low, elongated neurocranial vault, and this pattern was found to exist from the youngest age group. Other tested patterns of covariation, including cranial and basicranial breadth, basicranial angle, midcoronal cranial vault shape, and facial prognathism, were not found to be significant at any age group. These results suggest that the chignon, at least in this sample, should not be considered an independent feature, but rather the result of developmental interactions relating to neurocranial elongation. While more work must be done to quantify chignon morphology in fossil subadults, this study finds no evidence to disprove the developmental homology of the feature in modern humans and Neandertals.
Keywords: chignon, craniofacial covariation, human cranial development, longitudinal growth study, occipital bun
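For readers unfamiliar with two-block partial least squares, the following is a hedged sketch of the PLS-SVD computation the abstract refers to: the singular vectors of the cross-block covariance matrix give paired axes of maximal covariation between two blocks of shape variables. The array shapes are illustrative, not the study's landmark data.

```python
# Hedged sketch of two-block partial least squares (PLS-SVD): singular
# vectors of the cross-block covariance give paired axes of maximal
# covariation between occipital shape (block 1) and vault shape (block 2).
import numpy as np

rng = np.random.default_rng(1)
occipital = rng.normal(size=(26, 40))  # subjects x occipital shape variables (assumed)
vault = rng.normal(size=(26, 30))      # subjects x vault shape variables (assumed)

X1 = occipital - occipital.mean(axis=0)
X2 = vault - vault.mean(axis=0)
C = X1.T @ X2 / (len(X1) - 1)          # cross-block covariance matrix

U, s, Vt = np.linalg.svd(C, full_matrices=False)
print("squared covariance explained by PLS1:", s[0]**2 / np.sum(s**2))

# Subject scores on the first paired axes; their correlation quantifies
# how tightly the two blocks covary along this dimension
scores1, scores2 = X1 @ U[:, 0], X2 @ Vt[0]
print("PLS1 axis correlation:", np.corrcoef(scores1, scores2)[0, 1])
```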
Procedia PDF Downloads 201
279 3D Label-Free Bioimaging of Native Tissue with Selective Plane Illumination Optical Microscopy
Authors: Jing Zhang, Yvonne Reinwald, Nick Poulson, Alicia El Haj, Chung See, Mike Somekh, Melissa Mather
Abstract:
Biomedical imaging of native tissue using light offers the potential to obtain excellent structural and functional information in a non-invasive manner with good temporal resolution. Image contrast can be derived from intrinsic absorption, fluorescence, or scatter, or through the use of extrinsic contrast. A major challenge in applying optical microscopy to in vivo tissue imaging is the effect of light attenuation, which limits light penetration depth and achievable imaging resolution. Recently, Selective Plane Illumination Microscopy (SPIM) has been used to map the 3D distribution of fluorophores dispersed in biological structures. In this approach, a focused sheet of light is used to illuminate the sample from the side to excite fluorophores within the sample of interest. Images are formed based on detection of fluorescence emission orthogonal to the illumination axis. By scanning the sample along the detection axis and acquiring a stack of images, 3D volumes can be obtained. The combination of rapid image acquisition speeds, low photon dose to samples, and the optical sectioning that SPIM provides makes it an attractive approach for imaging biological samples in 3D. To date, all implementations of SPIM have relied on the use of fluorescence reporters, whether endogenous or exogenous. This approach has the disadvantage that, in the case of exogenous probes, the specimens are altered from their native state, rendering them unsuitable for in vivo studies, and, in general, fluorescence emission is weak and transient. Here we present, for the first time to our knowledge, a label-free implementation of SPIM that has downstream applications in the clinical setting. The experimental set-up used in this work incorporates both label-free and fluorescent illumination arms, in addition to a high-specification camera that can be partitioned for simultaneous imaging of both fluorescent emission and scattered light from intrinsic sources of optical contrast in the sample being studied. This work first involved calibration of the imaging system and validation of the label-free method with well-characterised fluorescent microbeads embedded in agarose gel. 3D constructs of mammalian cells cultured in agarose gel with varying cell concentrations were then imaged. A time-course study to track cell proliferation in the 3D construct was also carried out, and finally a native tissue sample was imaged. For each sample, multiple images were obtained by scanning the sample along the axis of detection, and 3D maps were reconstructed. The results obtained validated label-free SPIM as a viable approach for imaging cells in a 3D gel construct and native tissue. This technique has potential use in a near-patient environment, as it can provide results quickly and be implemented in an easy-to-use manner to provide more information, with improved spatial resolution and depth penetration, than current approaches.
Keywords: bioimaging, optics, selective plane illumination microscopy, tissue imaging
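A minimal sketch of the volume-building step the abstract describes, assuming a stack of 2D frames acquired while scanning the sample along the detection axis; frame size and axial step are illustrative assumptions.

```python
# Minimal sketch: assemble a stack of orthogonally detected 2D frames,
# acquired while stepping the sample along the detection axis, into a 3D
# volume. Frame size and axial step are illustrative assumptions.
import numpy as np

def build_volume(frames, z_step_um=2.0):
    """Stack 2D SPIM frames (H x W arrays) into a (n, H, W) volume."""
    volume = np.stack(frames, axis=0)
    print(f"volume shape {volume.shape}, axial sampling {z_step_um} um/slice")
    return volume

frames = [np.random.rand(256, 256) for _ in range(100)]  # placeholder frames
volume = build_volume(frames)
xz_section = volume[:, 128, :]  # re-sliced view orthogonal to the acquisition plane
```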
Procedia PDF Downloads 249
278 Detection and Quantification of Viable but Not Culturable Vibrio Parahaemolyticus in Frozen Bivalve Molluscs
Authors: Eleonora Di Salvo, Antonio Panebianco, Graziella Ziino
Abstract:
Background: Vibrio parahaemolyticus is a human pathogen that is widely distributed in marine environments. It is frequently isolated from raw seafood, particularly shellfish. Consumption of raw or undercooked seafood contaminated with V. parahaemolyticus may lead to acute gastroenteritis. Vibrio spp. have excellent resistance to low temperatures, so they can be found in frozen products for a long time. Recently, the viable but non-culturable (VBNC) state of bacteria has attracted great attention, and more than 85 species of bacteria have been demonstrated to be capable of entering this state. VBNC cells cannot grow in conventional culture medium but are viable and maintain metabolic activity, which may constitute an unrecognized source of food contamination and infection. V. parahaemolyticus can also enter the VBNC state under nutrient starvation or low-temperature conditions. Aim: The aim of the present study was to optimize methods to investigate V. parahaemolyticus VBNC cells and their presence in regularly marketed frozen bivalve molluscs. Materials and Methods: propidium monoazide (PMA) was integrated with real-time polymerase chain reaction (qPCR) targeting the tl gene to detect and quantify V. parahaemolyticus in the VBNC state. PMA-qPCR proved highly specific to V. parahaemolyticus, with a limit of detection (LOD) of 10-1 log CFU/mL in pure bacterial culture. A standard curve for V. parahaemolyticus cell concentrations was established, with a correlation coefficient of 0.9999 over the linear range of 1.0 to 8.0 log CFU/mL. A total of 77 samples of frozen bivalve molluscs (35 mussels; 42 clams) were subsequently subjected to qualitative (on alkaline phosphate buffer solution) and quantitative testing for V. parahaemolyticus on thiosulfate-citrate-bile salts-sucrose (TCBS) agar (DIFCO) with 2.5% NaCl and incubation at 30°C for 24-48 hours. Real-time PCR was conducted on homogenate samples, in duplicate, with and without propidium monoazide (PMA) dye, exposed for 45 min under halogen lights (650 W). Total DNA was extracted from cell suspensions in homogenate samples according to a boiling protocol. Real-time PCR was conducted with species-specific primers for V. parahaemolyticus. The reaction was performed in a final volume of 20 µL, containing 10 µL of SYBR Green Mixture (Applied Biosystems), 2 µL of template DNA, 2 µL of each primer (final concentration 0.6 mM), and 4 µL of H2O. The qPCR was carried out on a CFX96 Touch (Bio-Rad, USA). Results: All samples were negative in both the quantitative and qualitative detection of V. parahaemolyticus by the classical culturing technique. PMA-qPCR allowed us to identify VBNC V. parahaemolyticus in 20.78% of the samples evaluated, with values between Log 10-1 and Log 10-3 CFU/g. Only clam samples were positive by PMA-qPCR detection. Conclusion: The present research is the first to evaluate a PMA-qPCR assay for the detection of VBNC V. parahaemolyticus in bivalve mollusc samples, and the method used was applicable to the rapid control of marketed bivalve molluscs. We strongly recommend the use of PMA-qPCR in order to identify VBNC forms, which are undetectable by classic microbiological methods. Precise knowledge of V. parahaemolyticus in a VBNC form is fundamental for correct risk assessment, not only in bivalve molluscs but also in other seafood.
Keywords: food safety, frozen bivalve molluscs, PMA dye, real-time PCR, VBNC state, Vibrio parahaemolyticus
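A hedged sketch of the quantification step implied by the reported standard curve: fit the quantification cycle (Ct) against log10 CFU/mL over the stated 1.0-8.0 log linear range, then invert the fit for unknowns. The Ct values are illustrative, not the study's data.

```python
# Hedged sketch of standard-curve quantification: fit Ct against
# log10 CFU/mL over the reported 1.0-8.0 log linear range, then invert
# the fit for an unknown sample. Ct values below are assumed.
import numpy as np

log_cfu = np.arange(1.0, 9.0)                                    # 1.0-8.0 log CFU/mL
ct = np.array([35.1, 31.8, 28.4, 25.1, 21.7, 18.4, 15.0, 11.7])  # illustrative Ct values

slope, intercept = np.polyfit(log_cfu, ct, 1)
r = np.corrcoef(log_cfu, ct)[0, 1]
print(f"Ct = {slope:.2f} x logCFU + {intercept:.2f}, r = {r:.4f}")

def quantify(ct_sample):
    """Invert the standard curve to estimate log10 CFU/mL."""
    return (ct_sample - intercept) / slope

# The VBNC load is read from the PMA-treated reaction, since PMA blocks
# amplification from membrane-damaged (dead) cells
print("estimated log CFU/mL:", round(quantify(27.0), 2))
```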
Procedia PDF Downloads 139
277 Advancing Agriculture through Technology: An Abstract of Research Findings
Authors: Eugene Aninagyei-Bonsu
Abstract:
Introduction: Agriculture has been a cornerstone of human civilization, ensuring food security and livelihoods for billions of people worldwide. In recent decades, rapid advancements in technology have revolutionized the agricultural sector, offering innovative solutions to enhance productivity, sustainability, and efficiency. This abstract summarizes key findings from a research study that explores the impacts of technology in modern agriculture and its implications for future food production systems. Methodologies: The research study employed a mixed-methods approach, combining quantitative data analysis with qualitative interviews and surveys to gain a comprehensive understanding of the role of technology in agriculture. Data was collected from various stakeholders, including farmers, agricultural technicians, and industry experts, to capture diverse perspectives on the adoption and utilization of agricultural technologies. The study also utilized case studies and literature reviews to contextualize the findings within the broader agricultural landscape. Major Findings: The research findings reveal that technology plays a pivotal role in transforming traditional farming practices and driving innovation in agriculture. Advanced technologies such as precision agriculture, drone technology, genetic engineering, and smart irrigation systems have significantly improved crop yields, reduced environmental impact, and optimized resource utilization. Farmers who have embraced these technologies have reported increased productivity, enhanced profitability, and improved resilience to environmental challenges. Furthermore, the study highlights the importance of accessible and affordable technology solutions for smallholder farmers in developing countries. Mobile applications, sensor technologies, and digital platforms have enabled small-scale farmers to access market information, weather forecasts, and agricultural best practices, empowering them to make informed decisions and improve their livelihoods. The research emphasizes the need for targeted policies and investments to bridge the digital divide and promote equitable technology adoption in agriculture. Conclusion: In conclusion, this research underscores the transformative potential of technology in agriculture and its critical role in advancing sustainable food production systems. The findings suggest that harnessing technology can address key challenges facing the agricultural sector, including climate change, resource scarcity, and food insecurity. By embracing innovation and leveraging technology, farmers can enhance their productivity, profitability, and resilience in a rapidly evolving global food system. Moving forward, policymakers, researchers, and industry stakeholders must collaborate to facilitate the adoption of appropriate technologies, support capacity building, and promote sustainable agricultural practices for a more resilient and food-secure future.
Keywords: technology development in modern agriculture, the influence of information technology access in agriculture, analyzing agricultural technology development, analyzing the frontier technology of agriculture IoT
Procedia PDF Downloads 35
276 Chemical and Biomolecular Detection at a Polarizable Electrical Interface
Authors: Nicholas Mavrogiannis, Francesca Crivellari, Zachary Gagnon
Abstract:
Development of low-cost, rapid, sensitive and portable biosensing systems is important for the detection and prevention of disease in developing countries, biowarfare/antiterrorism applications, environmental monitoring, point-of-care diagnostic testing and basic biological research. Currently, the most established, commercially available and widespread assays for portable point-of-care detection and disease testing are paper-based dipstick and lateral flow test strips. These paper-based devices are often small, cheap and simple to operate. The last three decades in particular have seen an emergence of these assays in diagnostic settings for detection of pregnancy, HIV/AIDS, blood glucose, influenza, urinary protein, cardiovascular disease, respiratory infections and blood chemistries. Such assays are widely available largely because they are inexpensive, lightweight, and portable, are simple to operate, and a few platforms are capable of multiplexed detection for a small number of sample targets. However, there is a critical need for sensitive, quantitative and multiplexed detection capabilities for point-of-care diagnostics and for the detection and prevention of disease in the developing world that cannot be satisfied by current state-of-the-art paper-based assays. For example, applications including the detection of cardiac and cancer biomarkers and biothreat applications require sensitive multiplexed detection of analytes in the nM and pM range, and cannot currently be satisfied by inexpensive portable platforms due to their lack of sensitivity, quantitative capabilities and often unreliable performance. In this talk, inexpensive label-free biomolecular detection at liquid interfaces using a newly discovered electrokinetic phenomenon known as fluidic dielectrophoresis (fDEP) is demonstrated. The electrokinetic approach involves exploiting the electrical mismatches between two aqueous liquid streams forced to flow side-by-side in a microfluidic T-channel. In this system, one fluid stream is engineered to have a higher conductivity relative to its neighbor, which has a higher permittivity. When a “low” frequency (< 1 MHz) alternating current (AC) electrical field is applied normal to this fluidic electrical interface, the fluid stream with high conductivity displaces into the low-conductivity stream. Conversely, when a “high” frequency (20 MHz) AC electric field is applied, the high-permittivity stream deflects across the microfluidic channel. There is, however, a critical frequency sensitive to the electrical differences between each fluid phase - the fDEP crossover frequency - between these two events, where no fluid deflection is observed and the interface remains fixed when exposed to an external field. To perform biomolecular detection, two streams flow side-by-side in a microfluidic T-channel: one fluid stream with an analyte of choice and an adjacent stream with a specific receptor to the chosen target. The two fluid streams merge, and the fDEP crossover frequency is measured at different axial positions down the resulting liquid interface.
Keywords: biodetection, fluidic dielectrophoresis, interfacial polarization, liquid interface
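As a rough illustration of how the crossover frequency tracks the electrical mismatch between the streams, the sketch below uses the Maxwell-Wagner relaxation frequency of a planar liquid-liquid interface as a stand-in; this simplification and all parameter values are assumptions, not the authors' exact model.

```python
# Rough illustration only: the Maxwell-Wagner relaxation frequency of a
# planar liquid-liquid interface, f = (s1 + s2) / (2*pi*e0*(er1 + er2)),
# used here as an assumed proxy for the fDEP crossover. Analyte binding
# perturbs the interfacial conductivity/permittivity and shifts f.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def mw_crossover(sigma1, sigma2, eps_r1, eps_r2):
    """Maxwell-Wagner relaxation frequency (Hz) of a planar interface."""
    return (sigma1 + sigma2) / (2 * math.pi * EPS0 * (eps_r1 + eps_r2))

# Illustrative values: conductive stream vs. high-permittivity stream
f_before = mw_crossover(sigma1=0.050, sigma2=0.005, eps_r1=78, eps_r2=78)
f_after = mw_crossover(sigma1=0.052, sigma2=0.005, eps_r1=78, eps_r2=78)
print(f"crossover shift on binding: {f_before:.3e} Hz -> {f_after:.3e} Hz")
```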
Procedia PDF Downloads 446
275 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment
Authors: Pedro Llanos, Diego García
Abstract:
This paper aims to expand our understanding of the effects of hypoxia training on our bodies, in order to better model its dynamics and leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots, allowing them to recognize their early hypoxia signs and symptoms, and for Scientist Astronaut Candidates (SACs), who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes physiologic responses and symptoms experienced by a SAC group before, during and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering and Mathematics (STEM) backgrounds conducted a hypobaric training session to an altitude of up to 22,000 ft (FL220), or 6,705 meters, where heart rate (HR), breathing rate (BR) and core temperature (Tc) were monitored with the use of a chest strap sensor pre and post HH exposure. A pulse oximeter registered levels of oxygen saturation (SpO2) and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms described by the SACs during the HH training session were also registered. These data allowed a preliminary predictive model of the oxygen desaturation and O2 pressure curve to be generated for each subject, consisting of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed that HR and BR had no significant differences between pre and post HH exposure in most of the SACs, while Tc measures showed slight but consistent decrements. All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real-time collection of HH symptoms recorded temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180; all participants achieved full recovery from HH symptoms within 1 minute of donning their O2 masks. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent results between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, providing evidence of SP and task focusing as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a group of heterogeneous, non-pilot individuals showed results similar to previous studies in homogeneous populations of pilots.
Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin
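A minimal sketch of the per-subject curve fitting described above, assuming a synthetic SpO2 trace: a sixth-order polynomial for the exposure phase and a lower-order polynomial for recovery.

```python
# Minimal sketch of the per-subject fits: 6th-order polynomial during
# exposure, 5th-order during recovery. The SpO2 traces are synthetic.
import numpy as np

rng = np.random.default_rng(4)
t_exp = np.linspace(0, 20, 200)  # minutes at altitude (assumed)
spo2_exp = 98 - 10 * (1 - np.exp(-t_exp / 6)) + rng.normal(0, 0.3, t_exp.size)
exposure_fit = np.polynomial.Polynomial.fit(t_exp, spo2_exp, deg=6)

t_rec = np.linspace(0, 1, 60)    # minutes after donning the O2 mask (assumed)
spo2_rec = 88 + 10 * (1 - np.exp(-t_rec / 0.2)) + rng.normal(0, 0.3, t_rec.size)
recovery_fit = np.polynomial.Polynomial.fit(t_rec, spo2_rec, deg=5)

print("predicted SpO2 at t = 10 min of exposure:", round(exposure_fit(10.0), 1))
```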
Procedia PDF Downloads 116
274 Monitoring of Formaldehyde over Punjab Pakistan Using Car Max-Doas and Satellite Observation
Authors: Waqas Ahmed Khan, Faheem Khokhaar
Abstract:
Air pollution is one of the main perpetrators of climate change. GHGs cause the melting of glaciers, changes in temperature, and heavy rainfall. Many gases, such as formaldehyde, are not direct ozone-damaging precursors like CO2 or methane, but formaldehyde (HCHO) is linked to glyoxal (CHOCHO), which has an effect on ozone. Countries around the globe have unique air quality monitoring protocols to describe local air pollution. Formaldehyde is a colorless, flammable, strong-smelling chemical that is used in building materials and to produce many household products and medical preservatives. Formaldehyde also occurs naturally in the environment. It is produced in small amounts by most living organisms as part of normal metabolic processes. Pakistan lacks large-scale monitoring facilities to measure atmospheric gases on a regular basis. Formaldehyde is formed from glyoxal and affects mountain biodiversity and livelihoods, so its monitoring is necessary in order to maintain and preserve biodiversity. Objective: The present study aims to measure atmospheric HCHO vertical column densities (VCDs) obtained from ground-based instruments and to compute HCHO data for Punjab and elevated areas (Rawalpindi and Islamabad) by satellite observation during the period 2014-2015. Methodology: To explore the spatial distribution of HCHO, various field campaigns involving international scientists were conducted using car MAX-DOAS. The major focus was on the cities along national highways and the industrial region of Punjab, Pakistan. Level 2 data products of the satellite instrument OMI, retrieved by the differential optical absorption spectroscopy (DOAS) technique, are used. The spatio-temporal distribution of HCHO column densities over the main cities and regions of Pakistan is discussed. Results: Results show high HCHO column densities exceeding the permissible limit over the main cities of Pakistan, particularly areas with rapid urbanization and enhanced economic growth. The VCD values over elevated areas of Pakistan, such as Islamabad and Rawalpindi, range from around 1.0×10¹⁶ to 34.01×10¹⁶ molecules/cm², while Punjab has values around 34.01×10¹⁶. Similarly, areas with major industrial activity showed high HCHO concentrations. Tropospheric glyoxal VCDs were found to be 4.75×10¹⁵ molecules/cm². Conclusion: The results show that the monitoring site surrounded by the Margalla Hills (Islamabad) has higher concentrations of formaldehyde. Wind data show that industrial areas and areas with high economic growth have high values, as they provide pathways for the transmission of HCHO. The results obtained from this study would help the EPA, WHO and air protection departments to monitor air quality and to further the preservation and restoration of mountain biodiversity.
Keywords: air quality, formaldehyde, MAX-DOAS, vertical column densities (VCDs), satellite instrument, climate change
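For context, a hedged sketch of the standard DOAS retrieval step behind the reported VCDs: the measured slant column density (SCD) is converted to a vertical column density with an air mass factor (AMF); all values here are illustrative assumptions.

```python
# Hedged sketch of the standard DOAS conversion from slant to vertical
# column density: VCD = (SCD - SCD_ref) / AMF. All values are assumed.
def vertical_column(scd, scd_reference, amf):
    """Convert a slant column density to a vertical column density."""
    return (scd - scd_reference) / amf

scd = 6.0e16      # measured HCHO slant column, molecules/cm^2 (illustrative)
scd_ref = 1.0e16  # residual column in the reference spectrum (illustrative)
amf = 1.5         # air mass factor from viewing geometry (illustrative)
print(f"HCHO VCD = {vertical_column(scd, scd_ref, amf):.2e} molecules/cm^2")
```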
Procedia PDF Downloads 212
273 Novel Framework for MIMO-Enhanced Robust Selection of Critical Control Factors in Auto Plastic Injection Moulding Quality Optimization
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
Apparent quality defects such as warpage, shrinkage, weld lines, etc. are a persistent phenomenon in the mass production of auto plastic appearance parts (APAP). These frequently occurring manufacturing defects must be addressed concurrently so as to achieve a final product with acceptable quality standards. Determining the significant control factors that simultaneously affect multiple quality characteristics can significantly improve the optimization results by eliminating the deviating effect of so-called ineffective outliers. Hence, a robust quantitative approach needs to be developed upon which major control factors and their levels can be effectively determined to help improve the reliability of the optimal processing parameter design. The primary objective of the current study was therefore to develop a systematic methodology for the selection of significant control factors (SCF) relevant to the multiple quality optimization of auto plastic appearance parts. An auto bumper was used as a specimen, having quality and production characteristics most similar to the APAP group. A preliminary failure modes and effects analysis (FMEA) was conducted to nominate a database of pseudo-significant control factors prior to the optimization phase. Later, CAE simulation with Moldflow analysis was implemented to investigate four prevalent plastic injection quality defects relevant to the APAP group: warpage deflection, volumetric shrinkage, sink mark and weld line. Furthermore, a step-backward elimination searching method (SESME) was developed for systematic pre-optimization selection of SCF based on hierarchical orthogonal array design and priority-based one-way analysis of variance (ANOVA). The development of the robust parameter design in the second phase was based on the DOE module of Minitab v.16 statistical software. Based on the F-test (F 0.05, 2, 14) one-way ANOVA results, it was concluded that for warpage deflection, material mixture percentage was the most significant control factor, yielding a 58.34% contribution, while for the other three quality defects, melt temperature was the most significant control factor, with 25.32%, 84.25%, and 34.57% contributions for sink mark, shrinkage and weld line strength control, respectively. Also, the results on the least significant control factors meaningfully revealed injection fill time as the least significant factor for both warpage and sink mark, with respective contributions of 1.69% and 6.12%. On the other hand, for shrinkage and weld line defects, the least significant control factors were holding pressure and mold temperature, with 0.23% and 4.05% overall contributions, respectively.
Keywords: plastic injection moulding, quality optimization, FMEA, ANOVA, SESME, APAP
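A hedged sketch of the percent-contribution calculation behind the quoted ANOVA figures: for each control factor in the orthogonal array, contribution (%) = SS_factor / SS_total × 100. The response values are illustrative, not the Moldflow outputs.

```python
# Hedged sketch of the percent-contribution arithmetic: for one factor of
# the orthogonal array, contribution (%) = SS_factor / SS_total * 100.
# The grouped responses are illustrative, not the Moldflow outputs.
import numpy as np

levels = {  # e.g., warpage responses at three levels of one control factor
    "level_1": np.array([1.92, 1.88, 1.95]),
    "level_2": np.array([1.61, 1.66, 1.58]),
    "level_3": np.array([1.33, 1.30, 1.37]),
}
all_y = np.concatenate(list(levels.values()))
grand_mean = all_y.mean()

ss_factor = sum(len(y) * (y.mean() - grand_mean) ** 2 for y in levels.values())
ss_total = np.sum((all_y - grand_mean) ** 2)
print(f"factor contribution: {100 * ss_factor / ss_total:.2f}%")
```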
Procedia PDF Downloads 348
272 Improving a Stagnant River Reach Water Quality by Combining Jet Water Flow and Ultrasonic Irradiation
Authors: A. K. Tekile, I. L. Kim, J. Y. Lee
Abstract:
Human activities put freshwater quality at risk, mainly due to the expansion of agriculture and industries, damming, diversion, and the discharge of inadequately treated wastewaters. Rapid human population growth and climate change have escalated the problem. External controlling actions on point and non-point pollution sources are the long-term solution to manage water quality. To have a holistic approach, these mechanisms should be coupled with in-water control strategies. The available in-lake or in-river methods are either costly or have some adverse effect on the ecological system, so the search for an alternative and effective solution with a reasonable balance is still ongoing. This study aimed at the physical and chemical water quality improvement of a stagnant reach of the Yeo-cheon River (Korea), which has recently shown signs of water quality problems such as scum formation and fish death. The river water quality was monitored for a duration of three months, operating only the water flow generator in the first two weeks; the ultrasonic irradiation device was then coupled to the flow unit for the remaining duration of the experiment. In addition to assessing the water quality improvement, the correlation among the parameters was analyzed to explain the contribution of the ultrasonication. Generally, the combined strategy showed localized improvement of water quality in terms of dissolved oxygen, chlorophyll-a, and dissolved reactive phosphate. At locations under limited influence of the system operation, chlorophyll-a increased greatly, but within 25 m of the operation the low initial value was maintained. The inverse correlation coefficient between dissolved oxygen and chlorophyll-a decreased from 0.51 to 0.37 when the ultrasonic irradiation unit was used with the flow, showing that ultrasonic treatment reduced the chlorophyll-a concentration and inhibited photosynthesis. The relationship between dissolved oxygen and reactive phosphate also indicated that the influence of ultrasonication on the reactive phosphate concentration was higher than that of flow. Even though flow increased turbidity by suspending sediments, ultrasonic waves canceled out the effect due to the agglomeration of suspended particles and the subsequent settling out. There was also variation of interaction in the water column, as the decrease of pH and dissolved oxygen from the surface to the bottom played a role in phosphorus release into the water column. The variation of nitrogen and dissolved organic carbon concentrations showed a mixed trend, probably due to the complex chemical reactions following the operation. Besides, the intensive rainfall and strong wind around the end of the field trial had an apparent impact on the results. The combined effect of water flow and ultrasonic irradiation was a cumulative water quality improvement, and it maintained the dissolved oxygen and chlorophyll-a requirements of the river for healthy ecological interaction. However, the overall improvement of water quality is not guaranteed, as the effectiveness of ultrasonic technology requires long-term monitoring of water quality before, during, and after treatment. Even though the short duration of the study limited the characterization of nutrient patterns, the use of ultrasound at field scale to improve water quality is promising.
Keywords: stagnant, ultrasonic irradiation, water flow, water quality
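A minimal sketch of the correlation comparison reported above, computing the Pearson correlation between dissolved oxygen and chlorophyll-a separately for the flow-only and flow-plus-ultrasound periods; the series are simulated for illustration.

```python
# Minimal sketch: Pearson correlation between dissolved oxygen and
# chlorophyll-a for the flow-only and flow-plus-ultrasound periods.
# Both series are simulated for illustration.
import numpy as np

def inverse_corr(do, chla):
    """Magnitude of the (inverse) DO vs chlorophyll-a correlation."""
    return abs(np.corrcoef(do, chla)[0, 1])

rng = np.random.default_rng(2)
do_flow = rng.normal(7.5, 0.8, 30)
chla_flow = 20 - 2.0 * do_flow + rng.normal(0, 2.5, 30)  # flow-only period
do_both = rng.normal(8.0, 0.8, 30)
chla_both = 12 - 1.0 * do_both + rng.normal(0, 2.5, 30)  # flow + ultrasound

print("flow only:", round(inverse_corr(do_flow, chla_flow), 2))
print("flow + ultrasound:", round(inverse_corr(do_both, chla_both), 2))
```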
Procedia PDF Downloads 193
271 Northern Istanbul Urban Infrastructure Projects: A Critical Account on the Environmental, Spatial, Social and Economical Impacts
Authors: Evren Aysev Denec
Abstract:
As an urban settlement dating back as early as 8000 years and the capital of the Byzantine and Ottoman empires, Istanbul has been a significant global city throughout history. The most drastic changes in the macro form of Istanbul have taken place in the last seven decades: starting in the 1950s with rapid industrialization and population growth; picking up pace after the 1980s with the efforts of integration into the global capitalist system; and reaching a climax in the 2000s with the adoption of a neoliberal urban regime. Today, the rate of urbanization, together with land speculation and real estate investment, has been growing enormously. Every inch of urban land is conceptualized as a commodity to be capitalized. This neoliberal mindset has many controversial implementations, from the privatization of public land to the urban transformation of historic neighbourhoods and the consumption of natural resources. The planning decisions concerning the city have been mainly top-down initiatives, conceptualizing historical, cultural and natural heritage as commodities to be capitalized and consumed in favour of creating rent value. One of the most crucial implementations of this neoliberal urban regime is the project of establishing a ‘new city’ around northern Istanbul, together with a number of large-scale infrastructural projects such as the Third Bosporus Bridge, a new highway system, a Third Airport project, and a secondary Bosporus project called ‘Canal Istanbul’. Urbanizing northern Istanbul is highly controversial, as this area contains major natural resources of the city - the northern forests, water supplies and wildlife - which are bound to be destroyed to a great extent following the implementations. The construction of the third bridge and the third airport began in 2013, despite environmental objections and protests. Over five hundred thousand trees are planned to be cut for the construction of the bridge and the Northern Marmara Motorway alone. Yet the real damage will be the urbanization of the forest area, irreversibly degrading the natural resources and attracting millions of additional population towards Istanbul. Furthermore, these projects lack an integrated planning scope, as the plans prepared for Istanbul are constantly subjected to alterations forced by the central government. The urban interventions mentioned above are executed despite the rulings of the Istanbul Environmental Plan, due to top-down planning decisions. Instead of an integrated action plan that prepares for the future of the city, Istanbul is governed by partial plans and projects issued by a profit-based agenda, supported by legal alterations and laws issued by the central government. This paper aims to discuss the ongoing implementations regarding northern Istanbul, claiming that they are not merely infrastructural interventions but parts of a greater neoliberal urbanization strategy. In the course of the study, firstly a brief account of the northern forests of Istanbul will be presented. Then, the projects will be discussed in detail, addressing how the current planning schemes deal with the natural heritage of the city. Lastly, concluding remarks on how the implementations could affect the future of Istanbul will be presented.
Keywords: Istanbul, urban design, urban planning, natural resources
Procedia PDF Downloads 198
270 Combat Plastic Entering in Kanpur City, Uttar Pradesh, India Marine Environment
Authors: Arvind Kumar
Abstract:
The city of Kanpur is located in the terrestrial plain area on the bank of the river Ganges and is the second largest city in the state of Uttar Pradesh. The city generates approximately 1400-1600 tons per day of MSW. Kanpur has been known as a major point- and non-point-based pollution hotspot for the river Ganges. The city has a major industrial hub, probably the largest in the state, catering to the manufacturing and recycling of plastic and other dry waste streams. There are 4 to 5 major drains flowing across the city, which receive a significant quantity of waste leakage, which subsequently adds to the Ganges flow and is carried to the Bay of Bengal. A river-to-sea flow approach has been established to account for waste leaked into urban drains, leading to the build-up of marine litter. Throughout its journey, the river accumulates plastic - macro, meso, and micro - from various sources and transports it towards the sea. The Ganges network forms the second-largest plastic-polluting catchment in the world, with over 0.12 million tonnes of plastic discharged into marine ecosystems per year, and is among 14 continental rivers into which over a quarter of global waste is discarded. 3.150 kilotons of plastic waste is generated in Kanpur, of which 10-13% is leaked into the local drains and water flow systems. With the support of Kanpur Municipal Corporation, a 1 TPD capacity MRF for drain waste management was established at Krishna Nagar, Kanpur, and a German startup, Plastic Fisher, was identified to provide a solution for capturing the drain waste and achieving its recycling in a sustainable manner with a circular economy approach. The team at Plastic Fisher conducted joint surveys and identified locations on 3 drains at Kanpur using GIS maps developed during the survey. It suggested putting floating 'boom barriers' across the drains made of a low-cost material, which reduced their cost to only 2000 INR per barrier. The project was built upon a self-sustaining financial model and includes activities where a cost-efficient model is developed and adopted for a socially self-inclusive model. The project has recommended the use of low-cost floating boom barriers for capturing waste from drains. This involves a one-time cost and has no operational cost. Manpower is engaged in fishing and capturing immobilized waste, with salaries paid by Plastic Fisher. The captured material is sun-dried and transported to the designated place, which acts as an MRF and where the shed and power connection are provided by the city municipal corporation. Material aggregation, baling, and transportation costs to end-users are borne by Plastic Fisher as well.
Keywords: Kanpur, marine environment, drain waste management, plastic fisher
Procedia PDF Downloads 71
269 Option Pricing Theory Applied to the Service Sector
Authors: Luke Miller
Abstract:
This paper develops an option pricing methodology to value strategic pricing strategies in the services sector. More specifically, this study provides a unifying taxonomy of current service sector pricing practices, frames these pricing decisions as strategic real options, demonstrates accepted option valuation techniques to assess service sector pricing decisions, and suggests future research areas where pricing decisions and real options overlap. Enhancing revenue in the service sector requires proactive decision making in a world of uncertainty. In an effort to strategically price service products, revenue enhancement necessitates a careful study of the service costs, customer base, competition, legalities, and economies shared with the market. Pricing decisions involve the quality of inputs, manpower, and best practices to maintain superior service. These decisions further hinge on identifying relevant pricing strategies and understanding how these strategies impact a firm’s value. A relatively new area of research applies option pricing theory to investments in real assets and is commonly known as real options. The real options approach is based on the premise that many corporate decisions to invest or divest in assets are simply an option, wherein the firm has the right to make an investment without any obligation to act. The decision maker therefore has more flexibility, and the value of this operating flexibility should be taken into consideration. The real options framework has already been applied to numerous areas, including manufacturing, inventory, natural resources, research and development, strategic decisions, technology, and stock valuation. Additionally, numerous surveys have identified a growing need for the real options decision framework within all areas of corporate decision-making. Despite the wide applicability of real options, no study has been carried out linking service sector pricing decisions and real options. This is surprising given that the service sector comprises 80% of US employment and gross domestic product (GDP). Identifying real options as a practical tool to value different service sector pricing strategies is believed to have a significant impact on firm decisions. This paper identifies and discusses four distinct pricing strategies available to the service sector from an options perspective: (1) cost-based profit margin, (2) increased customer base, (3) platform pricing, and (4) buffet pricing. Within each strategy lie several pricing tactics available to the service firm. These tactics can be viewed as options the decision maker has to best manage a strategic position in the market. To demonstrate the effectiveness of including flexibility in the pricing decision, a series of pricing strategies was developed and valued using a real options binomial lattice structure. The option pricing approach discussed in this study allows service firms to directly incorporate market-driven perspectives into the decision process, thus synchronizing service operations with organizational economic goals.
Keywords: option pricing theory, real options, service sector, valuation
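A hedged sketch of the binomial lattice valuation mentioned above, using a Cox-Ross-Rubinstein tree to value a deferral-style real option: the right, but not the obligation, to commit to a pricing strategy at cost K while the present value of service revenues evolves stochastically. All parameters are illustrative.

```python
# Hedged sketch: a Cox-Ross-Rubinstein lattice valuing the right, without
# obligation, to commit to a strategy at cost K while the present value V
# of service revenues evolves stochastically. Parameters are illustrative.
import math

def real_option_crr(V0, K, r, sigma, T, steps):
    """Value an American-style option to invest via a binomial lattice."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))    # up factor per step
    d = 1.0 / u                            # down factor per step
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Terminal payoffs: exercise value at each end node
    values = [max(V0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    # Roll back through the lattice, allowing early exercise at every node
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            values[j] = max(cont, V0 * u**j * d**(i - j) - K)
    return values[0]

print(round(real_option_crr(V0=100, K=95, r=0.05, sigma=0.35, T=2.0, steps=200), 2))
```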
Procedia PDF Downloads 355
268 Quantum Conductance Based Mechanical Sensors Fabricated with Closely Spaced Metallic Nanoparticle Arrays
Authors: Min Han, Di Wu, Lin Yuan, Fei Liu
Abstract:
Mechanical sensors have undergone a continuous evolution and have become an important part of many industries, ranging from manufacturing to process, chemicals, machinery, health-care, environmental monitoring, automotive, avionics, and household appliances. Concurrently, microelectronics and microfabrication technology have provided us with the means of producing mechanical microsensors characterized by high sensitivity, small size, integrated electronics, on-board calibration, and low cost. Here we report a new kind of mechanical sensor based on the quantum transport of electrons in closely spaced nanoparticle films covering a flexible polymer sheet. The nanoparticle films were fabricated by gas-phase deposition of preformed metal nanoparticles with a controlled coverage on the electrodes. To amplify the conductance of the nanoparticle array, we fabricated silver interdigital electrodes on polyethylene terephthalate (PET) by mask evaporation deposition. The gaps of the electrodes ranged from 3 to 30 μm. Metal nanoparticles were generated from a magnetron plasma gas aggregation cluster source and deposited on the interdigital electrodes. Closely spaced nanoparticle arrays with different coverages could be obtained through real-time monitoring of the conductance. In the films, Coulomb blockade and quantum tunneling/hopping dominate the electronic conduction mechanism. The basic principle of the mechanical sensors relies on the mechanical deformation of the fabricated devices, which is translated into electrical signals. Several kinds of sensing devices have been explored. As a strain sensor, the device showed high sensitivity as well as a very wide dynamic range: a gauge factor as large as 100 or more was demonstrated, which is at least one order of magnitude higher than that of conventional metal foil gauges and better than that of semiconductor-based gauges, with a workable maximum applied strain beyond 3%. They provide the potential to be a new generation of strain sensors with performance superior to that of currently existing strain sensors, including metallic strain gauges and semiconductor strain gauges. When integrated into a pressure gauge, the devices demonstrated the ability to measure tiny pressure changes as small as 20 Pa near atmospheric pressure. Quantitative vibration measurements were realized on a free-standing cantilever structure fabricated with a closely spaced nanoparticle array sensing element. Moreover, the mechanical sensor elements can be easily scaled down, which makes them feasible for MEMS and NEMS applications.
Keywords: gas phase deposition, mechanical sensors, metallic nanoparticle arrays, quantum conductance
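A minimal sketch of the figure of merit quoted above: the gauge factor GF = (ΔR/R0)/ε, computed from resistance measured before and during loading; the values are illustrative, not the reported device data.

```python
# Minimal sketch of the quoted figure of merit: GF = (dR/R0) / strain.
# The resistance values are illustrative, not the reported device data.
def gauge_factor(r0_ohm, r_strained_ohm, strain):
    """Gauge factor of a resistive strain sensor."""
    return ((r_strained_ohm - r0_ohm) / r0_ohm) / strain

# A 1% strain that doubles the film resistance gives GF = 100, versus
# roughly 2 for a conventional metal-foil gauge
print(gauge_factor(r0_ohm=1.0e6, r_strained_ohm=2.0e6, strain=0.01))
```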
Procedia PDF Downloads 274
267 The Impact of Trade on Stock Market Integration of Emerging Markets
Authors: Anna M. Pretorius
Abstract:
The emerging markets category for portfolio investment was introduced in 1986 in an attempt to promote capital market development in less developed countries. Investors traditionally diversified their portfolios by investing in different developed markets. However, high growth opportunities forced investors to consider emerging markets as well. Examples include the rapid growth of the “Asian Tigers” during the 1980s, growth in Latin America during the 1990s, and the increased interest in emerging markets during the global financial crisis. As such, portfolio flows to emerging markets have increased substantially. In 2002, 7% of all equity allocations from advanced economies went to emerging markets; this increased to 20% in 2012. The stronger links between advanced and emerging markets led to increased synchronization of asset price movements. This increased level of stock market integration for emerging markets is confirmed by various empirical studies. Against the background of increased interest in emerging market assets and the increasing level of integration of emerging markets, this paper focuses on the determinants of stock market integration of emerging market countries. Various studies have linked the level of financial market integration with specific economic variables. These variables include economic growth, local inflation, trade openness, local investment, budget surplus/deficit, market capitalisation, domestic bank credit, the domestic institutional and legal environment, and world interest rates. The aim of this study is to empirically investigate to what extent trade-related determinants have an impact on stock market integration. The panel data sample includes data for 16 emerging market countries: Brazil, Chile, China, Colombia, Czech Republic, Hungary, India, Malaysia, Pakistan, Peru, Philippines, Poland, Russian Federation, South Africa, Thailand and Turkey, for the period 1998-2011. The integration variable for each emerging stock market is calculated as the explanatory power of a multi-factor model. These factors are extracted from a large panel of global stock market returns. Trade-related explanatory variables include exports as a percentage of GDP, imports as a percentage of GDP, and total trade as a percentage of GDP. Other macroeconomic indicators - such as market capitalisation, the size of the budget deficit and the effectiveness of the regulation of the securities exchange - are included in the regressions as control variables. An initial analysis on a sample of developed stock markets could not identify any significant determinants of stock market integration. Thus, the macroeconomic variables identified in the literature are much more significant in explaining the stock market integration of emerging markets than that of developed markets. The three trade variables are all statistically significant at the 5% level. The market capitalisation variable is also significant, while the regulation variable is only marginally significant. The global financial crisis has highlighted the urgency of better understanding the link between the financial and real sectors of the economy. This paper comes to the important finding that, apart from the level of market capitalisation (a financial indicator), trade (representative of the real economy) is a significant determinant of the stock market integration of countries not yet classified as developed economies.
Keywords: emerging markets, financial market integration, panel data, trade
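A hedged sketch of the integration measure described above: extract common factors from a panel of global stock returns with principal components, regress each emerging market's returns on those factors, and take the R² as the degree of integration. The return panel is simulated for illustration.

```python
# Hedged sketch of the integration measure: PCA factors from a global
# return panel, then R^2 of each market's regression on those factors.
# The return panel is simulated for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
global_returns = rng.normal(0, 0.02, size=(520, 40))  # weeks x markets (assumed)
em_returns = 0.6 * global_returns[:, :3].mean(axis=1) + rng.normal(0, 0.015, 520)

factors = PCA(n_components=3).fit_transform(global_returns)
model = LinearRegression().fit(factors, em_returns)
integration = model.score(factors, em_returns)  # explanatory power (R^2)
print(f"stock market integration measure: {integration:.2f}")
```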
Procedia PDF Downloads 306
266 Abortion Care Education in U.S. Accreditation Commission for Midwifery Education Certified Nurse Midwifery Programs: A Call For Expansion
Authors: Maggie Hall, Haley O'Neill
Abstract:
The U.S. faces a severe shortage of abortion providers, exacerbated by the June 2022 Dobbs v. Jackson Women's Health Organization decision. Midwives, especially certified nurse midwives, are well-positioned to fill this gap in abortion care. However, a lack of clinical education and training prevents midwives from exercising their full scope of practice. National and international organizations that set obstetrics and midwifery education standards, including the International Confederation of Midwives, the American College of Obstetricians and Gynecologists, and the American Public Health Association, call for the expansion of midwifery-managed abortion care through the first trimester. In the U.S., midwifery programs are accredited based on compliance with ACME standards, and compliance is a prerequisite for the American Midwifery Certification Board (AMCB) exams. We conducted a literature review of studies regarding didactic and clinical education barriers to abortion care via CINAHL, EBSCO, and PubMed database searches. We gave preference to primary sources from the last five years; however, due to the rapid changes in abortion education and access, we also included literature from 2012-2022. We evaluated ACME-accredited programs in relation to their geography within abortion-protected or abortion-restricted states and assessed state-specific barriers to abortion care education and provision for clinical students. There are 43 ACME-accredited midwifery schools in 28 states across the U.S. Twenty schools (47%) are in the 15 states in which advanced practice clinicians can provide non-surgical abortion care, such as medication abortion and MVA procedures. Twenty-four schools (56%) are in the 16 states in which abortion care provision is restricted to licensed physicians and cannot offer in-state clinical training opportunities for midwifery students. Six schools are in the five states in which abortion is completely banned; these are geographically concentrated in the southernmost region of the U.S., including Alabama, Kentucky, Louisiana, Tennessee, and Texas. Subsequently, these programs cannot offer in-state clinical training opportunities for midwifery students. Notably, there are seven ACME programs in six states that do not restrict abortion access by gestational age, including Colorado, Connecticut, Washington, D.C., New Jersey, New Mexico, and Oregon. These programs may be uniquely positioned for midwifery involvement in abortion care beyond the first trimester. While the following states don't house ACME programs, abortion care can be provided by advanced practice clinicians in Rhode Island, Delaware, Hawaii, Maine, Maryland, Montana, New Hampshire, and Vermont, offering clinical placement and/or new ACME program development opportunities. We identify existing barriers to clinical education and training opportunities for midwifery-managed abortion care, which are both geographic and institutional in nature. We recommend the expansion and standardization of clinical education and training opportunities for midwifery-managed abortion care in ACME-accredited programs to improve access to abortion care. Midwifery programs and teaching hospitals need to expand education, training, and residency opportunities for midwifery students to strengthen access to midwife-managed abortion care. ACNM and ACME should re-evaluate accreditation criteria and the implications of ACME programs in states where students are not able to learn abortion care in clinical contexts due to state-specific abortion restrictions.
Keywords: midwifery education, abortion, abortion education, abortion access
Procedia PDF Downloads 81
265 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature within the material is not uniform before loading, yet simulations typically impose a constant temperature on the assumption that the temperature homogenizes after some holding time. Therefore, to stay close to the experiment, a realistic temperature distribution through the specimen is needed before the mechanical loading. We present here a robust algorithm that computes the temperature gradient within the specimen, thereby representing the real temperature distribution before deformation. Indeed, most numerical simulations assume a uniform temperature field, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and on the type of deformation step, such as upsetting or cogging. Upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena, such as recrystallization, can occur and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to improve the mechanical properties of the final product. The identification of the conditions for the initiation of dynamic recrystallization thus remains relevant; the temperature distribution within the sample and the strain rate both influence recrystallization initiation, and the development of a technique to predict this initiation remains challenging. In this perspective, we propose, in addition to the algorithm providing the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. The Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared with literature models.Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
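To make the pre-loading temperature-gradient idea concrete, the following is a minimal sketch of a one-dimensional explicit finite-difference heat-conduction calculation in the spirit of the algorithm described; the slab geometry, material constants, boundary conditions, and holding time are all illustrative assumptions, not the authors' actual UAMP implementation.

```python
import numpy as np

# Minimal 1D explicit finite-difference heat conduction sketch:
# estimate the surface-to-core temperature profile of a forged block
# after a given holding time, before mechanical loading is applied.
# All values below are illustrative assumptions, not the authors' data.

alpha = 1.2e-5            # thermal diffusivity of steel, m^2/s (assumed)
L = 0.5                   # half-thickness of the block, m (assumed)
nx = 51                   # number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha  # time step within the explicit stability limit

T = np.full(nx, 1200.0)   # initial internal temperature, deg C (assumed)
T_surface = 1100.0        # imposed surface temperature, deg C (assumed)
T[0] = T_surface          # x = 0 is the surface; x = L is the core

hold_time = 600.0         # holding time in seconds (assumed)
for _ in range(int(hold_time / dt)):
    Tn = T.copy()
    # interior update: T_i += Fo * (T_{i+1} - 2*T_i + T_{i-1})
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    T[0] = T_surface      # Dirichlet condition at the surface
    T[-1] = T[-2]         # symmetry (zero-flux) condition at the core

print(f"surface {T[0]:.1f} C, core {T[-1]:.1f} C after {hold_time:.0f} s")
```

In the paper itself the distribution is computed inside Abaqus via a UAMP subroutine and passed to the mechanical step; the sketch only conveys the principle of a non-uniform pre-loading temperature field.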
Procedia PDF Downloads 80264 The SHIFT of Consumer Behavior from Fast Fashion to Slow Fashion: A Review and Research Agenda
Authors: Priya Nangia, Sanchita Bansal
Abstract:
As fashion cycles become more rapid, some segments of the fashion industry have adopted increasingly unsustainable production processes to keep up with demand and enhance profit margins. The growing threat to environmental and social wellbeing posed by unethical fast fashion practices, and the need to integrate the targets of the SDGs into this industry, necessitate a shift away from the fashion industry's unsustainable nature, which can only be accomplished in the long run if consumers support sustainable fashion by purchasing it. Fast fashion is defined as low-cost, trendy apparel that takes inspiration from the catwalk or celebrity culture and rapidly transforms it into garments at high-street stores to meet consumer demand. Given the importance of identity formation to many consumers, the desire to be “fashionable” often outweighs the desire to be ethical or sustainable. This paradox exemplifies the tension between the human drive to consume and the will to do so in moderation. Previous research suggests that there is an attitude-behavior gap when it comes to determining consumer purchasing behavior, but to the best of our knowledge, no study has analysed how to encourage customers to shift from fast to slow fashion. Against this backdrop, the aim of this study is twofold: first, to identify and examine the factors that impact consumers' decisions to engage in sustainable fashion, and second, to develop a comprehensive framework for conceptualizing and encouraging researchers and practitioners to foster sustainable consumer behavior. This study used a systematic approach to collect and analyse the literature, comprising three key steps: review planning, review execution, and findings reporting. The authors identified the keywords “sustainable consumption” and “sustainable fashion” and retrieved studies from the Web of Science (WoS) (126 records) and the Scopus database (449 records). To make the study more specific, the authors refined the subject area to management, business, and economics in the second step, retrieving 265 records. In the third step, the authors removed duplicate records and manually reviewed the articles to examine their relevance to the research issue. The final 96 research articles were used to develop this study's systematic scheme. The findings indicate that societal norms, demographics, positive emotions, self-efficacy, and awareness all have an effect on customers' decisions to purchase sustainable apparel. The authors propose a framework, denoted by the acronym SHIFT, in which consumers are more likely to engage in sustainable behaviors when the message or context leverages the following factors: (S) social influence, (H) habit formation, (I) individual self, (F) feelings, emotions, and cognition, and (T) tangibility. Furthermore, the authors identify five broad challenges that bear on sustainable consumer behavior and use them to develop novel propositions. Finally, the authors discuss how the SHIFT framework can be used in practice to drive sustainable consumer behaviors. This research sought to define the boundaries of existing research while also providing new perspectives on future research, with the goal of being useful for the development and discovery of new fields of study, thereby expanding knowledge.Keywords: consumer behavior, fast fashion, sustainable consumption, sustainable fashion, systematic literature review
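The three-step screening pipeline described above (retrieve from WoS and Scopus, refine by subject area, remove duplicates before manual review) can be sketched as follows; the file names, column names, and the duplicate key are assumptions, since the abstract does not specify how records were matched.

```python
import pandas as pd

# Sketch of the review-execution step: merge Web of Science and Scopus
# exports, filter by subject area, and drop cross-database duplicates.
# File names, column names, and the duplicate key are assumed here.

wos = pd.read_csv("wos_records.csv")        # 126 records in the study
scopus = pd.read_csv("scopus_records.csv")  # 449 records in the study
records = pd.concat([wos, scopus], ignore_index=True)

# Refine to management, business, and economics subject areas.
keep = {"Management", "Business", "Economics"}
records = records[records["subject_area"].isin(keep)]

# Remove duplicates, preferring a DOI match and falling back to a
# normalized title when the DOI is missing.
records["key"] = records["doi"].fillna(
    records["title"].str.lower().str.strip()
)
screened = records.drop_duplicates(subset="key")
print(len(screened), "records retained for manual relevance review")
```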
Procedia PDF Downloads 90263 In-Situ Formation of Particle Reinforced Aluminium Matrix Composites by Laser Powder Bed Fusion of Fe₂O₃/AlSi12 Powder Mixture Using Consecutive Laser Melting+Remelting Strategy
Authors: Qimin Shi, Yi Sun, Constantinus Politis, Shoufeng Yang
Abstract:
In-situ preparation of particle-reinforced aluminium matrix composites (PRAMCs) by laser powder bed fusion (LPBF) additive manufacturing is a promising strategy to strengthen traditional Al-based alloys. The laser-driven thermite reaction can be a practical mechanism for in-situ synthesis of PRAMCs. However, introducing oxygen by adding Fe₂O₃ makes the powder mixture highly prone to porosity and Al₂O₃ film formation during LPBF, bringing challenges to producing dense Al-based materials. Therefore, this work develops a processing strategy that combines consecutive high-energy laser melting scanning and low-energy laser remelting scanning to prepare PRAMCs from a Fe₂O₃/AlSi12 powder mixture. The powder mixture consists of 5 wt% Fe₂O₃ and the remainder AlSi12 powder; the addition of 5 wt% Fe₂O₃ aims to achieve balanced strength and ductility. A high relative density (98.2 ± 0.55%) was successfully obtained by optimizing the laser melting (Emelting) and laser remelting (Eremelting) surface energy densities to Emelting = 35 J/mm² and Eremelting = 5 J/mm². Results further reveal the necessity of increasing Emelting to improve the metal liquid's spreading/wetting by breaking up the Al₂O₃ films surrounding the molten pools; however, the high-energy laser melting produced considerable porosity, including H₂-, O₂-, and keyhole-induced pores. The subsequent low-energy laser remelting could close the resulting internal pores, backfill open gaps, and smoothen solidified surfaces. As a result, the material was densified by repeating laser melting and laser remelting layer by layer. Even with two laser scans per layer, the microstructure still shows fine cellular Si networks with Al grains inside (grain size of about 370 nm) and in-situ nano-precipitates (Al₂O₃, Si, and Al-Fe(-Si) intermetallics). Finally, the fine microstructure, nano-structured dispersion strengthening, and high level of densification strengthened the in-situ PRAMCs, reaching a yield strength of 426 ± 4 MPa and a tensile strength of 473 ± 6 MPa. Furthermore, the results can be expected to provide valuable information for processing other powder mixtures with severe porosity/oxide-film formation potential, given the evidenced contribution of the laser melting/remelting strategy to densifying the material and obtaining good mechanical properties during LPBF.Keywords: densification, laser powder bed fusion, metal matrix composites, microstructures, mechanical properties
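The melting and remelting passes above are characterized by their surface energy densities. A common definition in the LPBF literature is E = P / (v · h), with laser power P, scan speed v, and hatch spacing h; the parameter values in the sketch below are assumptions chosen only to reproduce the reported 35 and 5 J/mm² levels, not the authors' actual machine settings.

```python
# Surface energy density for an LPBF scan, using the common definition
# E = P / (v * h): laser power P [W], scan speed v [mm/s], hatch spacing
# h [mm]. The parameter sets below are assumptions that merely reproduce
# the reported energy-density levels; they are not the authors' settings.

def surface_energy_density(power_w: float, speed_mm_s: float,
                           hatch_mm: float) -> float:
    """Return surface energy density in J/mm^2."""
    return power_w / (speed_mm_s * hatch_mm)

e_melt = surface_energy_density(power_w=350.0, speed_mm_s=100.0, hatch_mm=0.1)
e_remelt = surface_energy_density(power_w=100.0, speed_mm_s=200.0, hatch_mm=0.1)
print(f"melting pass:   {e_melt:.0f} J/mm^2")    # -> 35 J/mm^2
print(f"remelting pass: {e_remelt:.0f} J/mm^2")  # -> 5 J/mm^2
```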
Procedia PDF Downloads 155262 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials are difficult to separate due to their similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality-check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
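The core idea — fine-tune a generalized pre-trained detector on a small, region-specific sample and deliberately let it overfit — can be sketched with torchvision's Mask R-CNN as follows. `RegionalBuildingDataset` is a hypothetical dataset class, the epoch count and optimizer settings are assumptions, and regularization and early stopping are intentionally omitted, which is the point of the strategy; this is not the authors' actual training code.

```python
import torch
import torchvision

# Sketch of the overfitting strategy: start from a generalized
# pre-trained Mask R-CNN and fine-tune on a small regional building
# dataset with no early stopping or heavy regularization, deliberately
# letting it overfit the target area. RegionalBuildingDataset is a
# hypothetical class yielding (image, target) pairs in the torchvision
# detection format (boxes, labels, masks).

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.train()

dataset = RegionalBuildingDataset("open_source_building_vectors/")  # assumed
loader = torch.utils.data.DataLoader(
    dataset, batch_size=2, shuffle=True,
    collate_fn=lambda batch: tuple(zip(*batch)),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

for epoch in range(200):  # far past the usual validation-loss minimum
    for images, targets in loader:
        loss_dict = model(list(images), list(targets))  # per-task losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```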
Procedia PDF Downloads 169261 An Overview on Micro Irrigation-Accelerating Growth of Indian Agriculture
Authors: Rohit Lall
Abstract:
The adoption of micro irrigation (MI) technologies in India has helped in achieving higher cropping and irrigation intensity, with significant savings in resources such as labour and fertilizer and with improved crop yields. These technologies have received considerable and sustained attention from policymakers, growers, and researchers over the years for their perceived ability to contribute towards agricultural productivity, economic growth, and the well-being of the country's growers. To tap the remaining theoretical potential, the government has launched flagship programmes and central sector schemes with earmarked budgets, providing financial assistance to beneficiaries who adopt these water-saving technologies. India is an agrarian economy in which 75% of the population is engaged directly or indirectly, including skilled and semi-skilled workers and entrepreneurs; the government has therefore given the sector focused attention and financial allocations to cover the untapped potential under the 'Per Drop More Crop' component of the Pradhan Mantri Krishi Sinchayee Yojana (PMKSY). In 2004, a Task Force on Micro Irrigation was constituted to estimate the potential of these technologies in India; the Task Force Report estimated it at 69.5 million hectares, of which only 10.49 million hectares have been achieved so far. Technology collaborations with leading overseas manufacturers have proved to be a stepping stone in technology advancement and product upgradation with increased efficiencies. Joint ventures by leading MI companies have added large business volumes, which have not only accelerated momentum towards the desired area coverage but also generated opportunities for polymer manufacturers in the country. To provide products matching global standards, the Bureau of Indian Standards has constituted a sectional technical committee (FAD-17) under its Food and Agriculture Department to formulate, devise, and revise standards pertaining to MI technologies. Researchers have also contributed at large by developing in-situ analyses proving MI technologies a boon for the country's farming community through resource conservation, of which water is of paramount importance. Thus, micro irrigation technologies have proved to be a key tool for feeding the growing demand of the population's food basket, besides maintaining soil health, and have been contributing towards the doubling of farmers' income.Keywords: task force on MI, standards, per drop more crop, doubling farmers’ income
Procedia PDF Downloads 117260 Collaborative Environmental Management: A Case Study Research of Stakeholders' Collaboration in the Nigerian Oil-Producing Region
Authors: Favour Makuochukwu Orji, Yingkui Zhao
Abstract:
A myriad of environmental issues face the Nigerian industrial region, resulting from oil and gas production, mining, manufacturing, and domestic wastes. Amidst these, much effort has been directed by stakeholders in the Nigerian oil-producing regions because of the impact of these regions on the wider Nigerian economy. Research to date has suggested that collaborative environmental management could be an effective approach to managing environmental issues, but little attention has been given to the roles and practices of stakeholders in effecting a collaborative environmental management framework for the Nigerian oil-producing region. This paper produces a framework to expand and deepen knowledge relating to stakeholders' collaborative roles in managing environmental issues in the Nigerian oil-producing region. The knowledge is derived from an analysis of stakeholders' practices, studied through multiple case studies using document analysis. Selected documents of key stakeholders – Nigerian government agencies, multinational oil companies, and host communities – were analyzed. Open and selective coding was employed manually during document analysis of data collected from the offices and websites of the stakeholders. The findings showed that the stakeholders have a range of roles, practices, interests, drivers, and barriers regarding their collaborative roles in managing environmental issues. While they have interests in efficient resource use, compliance with standards, sharing of responsibilities, generation of new solutions, and shared objectives, there is evidence of major barriers, which include resource allocation, disjointed policy and regulation, ineffective monitoring, diverse socio-economic interests, lack of stakeholder commitment, and limited knowledge sharing. However, host communities hold deep concerns over the collaborative roles of stakeholders for economic interests, particularly where government agencies and multinational oil companies are involved. With these barriers and concerns, genuine stakeholder collaboration is found to be limited, and as a result, optimal environmental management practices and policies have not been successfully implemented in the Nigerian oil-producing region. A framework is produced that describes practices characterizing how collaborative environmental management might be employed to satisfy the stakeholders' interests. The framework recommends critical factors, based on the findings, which may guide collaborative environmental management in the oil-producing regions. The recommendations are designed to re-define the practices of stakeholders in managing environmental issues in the oil-producing regions, not as something wholly new, but as an approach essential for implementing a sustainable environmental policy. This research outcome may clarify areas for future research as well as contribute to industry guidance in the area of collaborative environmental management.Keywords: collaborative environmental management framework, case studies, document analysis, multinational oil companies, Nigerian oil producing regions, Nigerian government agencies, stakeholders analysis
Procedia PDF Downloads 174259 Features of Composites Application in Shipbuilding
Authors: Valerii Levshakov, Olga Fedorova
Abstract:
Specific features of ship structures made from composites – simultaneous shaping of material and structure, large sizes, complicated outlines, and tapered thickness – have given a leading role to technology that integrates results from materials science, design, and structural analysis. The main procedures of composite shipbuilding are contact molding, vacuum molding, and winding. Currently, the most in-demand composite shipbuilding technology is the manufacture of structures from fiberglass and multilayer hybrid composites by means of vacuum molding. This technology enables the manufacture of products with improved strength properties (in comparison with contact molding), reduces production time and weight, and secures better environmental conditions in the production area. Mechanized winding is applied for the manufacture of parts shaped as rotary bodies – i.e., parts of ship, oil, and other pipelines, deep-submergence vehicle hulls, bottles, reservoirs, and other structures. This procedure involves the processing of reinforcing fiberglass, carbon, and polyaramide fibers. Polyaramide fibers have a tensile strength of 5000 MPa and an elastic modulus of 130 GPa; their rigidity is comparable with that of fiberglass, but the weight of polyaramide fiber is 30% less than that of fiberglass. This enables the manufacture of different structures, including those using both fiberglass and organic composites. Organic composites are widely used for the manufacture of parts with size and weight limitations; the high price of polyaramide fiber restricts their use. A promising area of winding technology development is the manufacture of carbon fiber shafts and couplings for ships. JSC ‘Shipbuilding & Shiprepair Technology Center’ (JSC SSTC) developed a technology for dielectric uncouplers for cryogenic lines cooled by gaseous or liquid cryogenic agents (helium, nitrogen, etc.) over the temperature range 4.2-300 K and at pressures up to 30 MPa – these are used for separating components of electrophysical equipment with different electrical potentials. Dielectric uncouplers were developed, manufactured, and tested in accordance with the International Thermonuclear Experimental Reactor (ITER) technical specification. Spiral uncouplers withstand an operating voltage of 30 kV; direct-flow uncouplers, 4 kV. The application of a spiral channel instead of a rectilinear one enables an increase in breakdown potential and a reduction in uncoupler size. Ninety-five uncouplers were successfully manufactured and tested. At present, Russian manufacturers of ship composite structures have started adopting the technology of manufacturing such structures using automated prepreg laminating; this technology enables the manufacture of structures with improved operational specifications.Keywords: fiberglass, infusion, polymeric composites, winding
Procedia PDF Downloads 238258 Higher Education in India Strength, Weakness, Opportunities and Threats
Authors: Renu Satish Nair
Abstract:
The Indian higher education system is the third largest in the world, next to the United States and China. India is experiencing rapid growth in higher education in terms of student enrollment as well as the establishment of new universities, colleges, and institutes of national importance. Presently, about 22 million students are enrolled in higher education, and more than 46 thousand institutions are functioning as centers of higher education. The Indian government plays a 'command and control' role in higher education. The main governing body is the University Grants Commission, which enforces its standards, advises the government, and helps coordinate between the centre and the states. Accreditation of higher learning is overseen by 12 autonomous institutions established by the University Grants Commission. The present paper is an effort to analyze the strengths, weaknesses, opportunities, and threats (SWOT analysis) of the Indian higher education system. Higher education in India is progressing by virtue of strengths that are recognized at the global level. Several institutions of India, such as the Indian Institutes of Technology (IITs), Indian Institutes of Management (IIMs), and National Institutes of Technology (NITs), have been globally acclaimed for their standard of education. Three Indian universities – the Indian Institutes of Technology, the Indian Institutes of Management, and Jawaharlal Nehru University – were listed in the Times Higher Education list of the world's top 200 universities in 2005 and 2006. Six Indian Institutes of Technology and the Birla Institute of Technology and Science - Pilani were listed among the top 20 science and technology schools in Asia by Asiaweek. The Indian School of Business, situated in Hyderabad, was ranked number 12 in the Global MBA Rankings by the Financial Times of London in 2010, while the All India Institute of Medical Sciences has been recognized as a global leader in medical research and treatment. But at the same time, because of its vast expansion, the system bears several weaknesses. The Indian higher education system in many parts of the country is in a state of disrepair. In almost half the districts in the country, higher education enrollment is very low. Almost two-thirds of universities and 90% of colleges are rated below average on quality parameters. This can be attributed to underprepared faculty, unwieldy governance, and other obstacles to innovation and improvement that could prohibit India from meeting its national education goals. The opportunities in the Indian higher education system are wide-ranging. National institutions train their graduates to compete at the global level and make them capable of grabbing opportunities worldwide. State universities and colleges, with their limited resources, produce graduates capable enough to secure career opportunities and hold responsible positions in various government and private sectors within the country. This further creates opportunities for weaker sections of society to join the mainstream. Several factors can be defined as threats to the Indian higher education system; they are a matter of great concern and need proper attention. Some important factors are: a conservative society, particularly regarding women's education; lack of transparency; and the treatment of higher education as a means of business.Keywords: Indian higher education system, SWOT analysis, university grants commission, Indian institutes of technology
Procedia PDF Downloads 898257 Small Town Big Urban Issues the Case of Kiryat Ono, Israel
Authors: Ruth Shapira
Abstract:
Introduction: The rapid urbanization of the last century confronts planners, regulatory bodies, developers, and, most of all, the public with seemingly unsolvable conflicts regarding values, capital, and the wellbeing of the built and un-built urban space. This is reflected in the quality of urban form and life, which has seen no significant progress in the last two to three decades despite the ever-growing urban population. The objective of this paper is to analyze some of these fundamental issues through the case study of a relatively small town in the center of Israel (Kiryat Ono, 100,000 inhabitants), unfold the deep structure of qualities versus disruptors, present some remedies that we have developed to bridge over them, and humbly suggest a practice that may be generic for similar cases. Basic Methodologies: The OBJECT, the town of Kiryat Ono, shall be experimented upon in a series of four action processes: De-composition, Re-composition, the Centering process, and, finally, Controlled Structural Disintegration. Each stage will be based on facts, analysis of previous multidisciplinary interventions on various layers, and the inevitable reaction of the OBJECT, leading to conclusions based on innovative theoretical and practical methods that we have developed and that we believe are appropriate for the open-ended network, setting the rules for the contemporary urban society to cluster by. The Study: Kiryat Ono was founded 70 years ago as an agricultural settlement and rapidly turned into an urban entity. In spite of the massive intensification, the original DNA of the old small town was still deeply embedded, mostly in the quality of the public space and in the sense of clustered communities. In the past 20 years, the demand for housing has been addressed at the national level with recent master plans and urban regeneration policies, mostly encouraging individual economic initiatives. Unfortunately, due to the obsolete existing planning platform, the present urban renewal is characterized by developer pressure, a dramatic change in building scale, and widespread disintegration of the existing urban and social tissue. Our office was commissioned to conceptualize two master plans for the two contradictory processes of Kiryat Ono's future: intensification and conservation. Following a comprehensive investigation into the deep structures and qualities of the existing town, we developed a new vocabulary of conservation terms, thus redefining the sense of PLACE. The main challenge was to create master plans that would offer a regulatory basis for the accelerated and sporadic development, providing for the public good and preserving the characteristics of the PLACE, consisting of a toolbox of design guidelines able to reorganize space along the time axis in a coherent way. In Conclusion: The system of rules that we have developed can generate endless possible patterns, making sure that at each implementation fragment an event is created and a better place is revealed. It takes time and perseverance, but it seems to be the way to provide a healthy framework for the accelerated urbanization of our chaotic present.Keywords: housing, architecture, urban qualities, urban regeneration, conservation, intensification
Procedia PDF Downloads 361256 E-Waste Generation in Bangladesh: Present and Future Estimation by Material Flow Analysis Method
Authors: Rowshan Mamtaz, Shuvo Ahmed, Imran Noor, Sumaiya Rahman, Prithvi Shams, Fahmida Gulshan
Abstract:
The last few decades have witnessed a phenomenal rise in the use of electrical and electronic equipment globally in our everyday life. As these items reach the end of their lifecycle, they turn into e-waste and contribute to the waste stream. Bangladesh, in conformity with the global trend and due to its ongoing rapid growth, is also using electronics-based appliances and equipment at an increasing rate. This has caused a corresponding increase in the generation of e-waste. Bangladesh is a developing country; its overall waste management system is not yet efficient, nor is it environmentally sustainable. Most of its solid waste is disposed of in a crude way at dumping sites. The addition of e-waste, which often contains toxic heavy metals, to its waste stream has made the situation more difficult and challenging. Assessment of e-waste generation is an important step towards addressing the challenges posed by e-waste, setting targets, and identifying best practices for its management. Understanding and proper management of e-waste is a stated item of the Sustainable Development Goals (SDG) campaign, and Bangladesh is committed to fulfilling it. A better understanding and the availability of reliable baseline data on e-waste will help in preventing illegal dumping, promote recycling, and create jobs in the recycling sector, thus facilitating sustainable e-waste management. With this objective in mind, the present study has attempted to estimate the amount of e-waste and its future generation trend in Bangladesh. To achieve this, sales data on eight selected electrical and electronic products (TV, refrigerator, fan, mobile phone, computer, IT equipment, CFL (compact fluorescent lamp) bulbs, and air conditioner) have been collected from different sources. Primary and secondary data on the collection, recycling, and disposal of e-waste have also been gathered by questionnaire survey, field visits, interviews, and formal and informal meetings with stakeholders. The Material Flow Analysis (MFA) method has been applied, and mathematical models have been developed in the present study to estimate e-waste amounts and their future trends up to the year 2035 for the eight selected products. The end-of-life (EOL) method is adopted in the estimation. Model inputs are products' annual sale/import data, past and future sales data, and average life span. From the model outputs, it is estimated that the generation of e-waste in Bangladesh in 2018 was 0.40 million tons, and by 2035 the amount will be 4.62 million tons, with an average annual growth rate of 20%. Among the eight selected products, the amounts of e-waste generated from seven products are increasing, whereas only one product, the CFL bulb, shows a decreasing trend of waste generation. The average growth rate of e-waste from TV sets is the highest (28%), while those from fans and IT equipment are the lowest (11%). Field surveys conducted in the e-waste recycling sector also revealed that every year around 0.0133 million tons of e-waste enter the recycling business in Bangladesh, a figure which may increase in the near future.Keywords: Bangladesh, end of life, e-waste, material flow analysis
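The end-of-life estimate described above amounts to convolving historical sales with a product lifespan distribution, W(t) = Σ S(t − τ)·f(τ). A minimal sketch follows, assuming a Weibull lifespan distribution and an illustrative sales series; the parameters and figures are invented for illustration and are not the study's data.

```python
import numpy as np

# End-of-life (EOL) method sketch: e-waste generated in year t is past
# sales weighted by the probability that a unit is discarded after tau
# years: W(t) = sum_tau S(t - tau) * f(tau). The Weibull parameters and
# the sales series are illustrative assumptions, not the study's data.

def weibull_pdf(tau, shape=2.0, scale=10.0):
    """Weibull lifespan density f(tau), evaluated at discrete ages."""
    tau = np.asarray(tau, dtype=float)
    return (shape / scale) * (tau / scale) ** (shape - 1) \
        * np.exp(-((tau / scale) ** shape))

years = np.arange(2000, 2036)
sales = 1.0e5 * 1.12 ** (years - 2000)  # assumed 12% annual sales growth

lifespans = np.arange(1, 26)            # consider lifespans of 1-25 years
f = weibull_pdf(lifespans)
f = f / f.sum()                         # normalize the discrete density

ewaste = np.zeros_like(sales)
for i, _ in enumerate(years):
    for tau, p in zip(lifespans, f):
        j = i - tau                     # index of the sales year t - tau
        if j >= 0:
            ewaste[i] += sales[j] * p

print(f"estimated units reaching end of life in 2035: {ewaste[-1]:,.0f}")
```

Converting the unit counts to tonnage (as the study reports) would additionally require an average unit weight per product category.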
Procedia PDF Downloads 198255 Intervention To Prevent Infections And Reinfections With Intestinal Parasites In People Living With Human Immunodeficiency Virus In Some Parts Of Eastern Cape, South Africa
Authors: Ifeoma Anozie, Teka Apalata, Dominic Abaver
Abstract:
Introduction: Despite the use of antiretroviral therapy to reduce the incidence of opportunistic infections among HIV/AIDS patients, rapid episodes of re-infection after deworming are still common because pharmaceutical intervention alone does not prevent reinfection. Unsafe water, inadequate personal hygiene, and parasitic infections are widely expected to accelerate the progression of HIV infection. This is because the chronic immunosuppression of HIV infection encourages susceptibility to opportunistic (including parasitic) infections, which is linked to a CD4+ cell count of <200 cells/μl. Intestinal parasites such as G. intestinalis and Entamoeba spp. are ubiquitous protozoa that remain infectious over a long time in the environment and show resistance to standard disinfection. To control re-infection, the social factors that underpin prevention need to be controlled. This study aims at the prevention of intestinal parasites in people living with HIV/AIDS by using a treatment, hygiene education, and sanitation (THEdS) bundle approach. Methods: This study was conducted in four clinics (Ngangelizwe health centre, Tsolo gateway clinic, Idutywa health centre, and Nqamakwe health centre) across the seven districts of the Eastern Cape, South Africa. The four clinics were divided into two groups, experimental and control, for the purpose of intervention. Data were collected from March 2019 to February 2020. Six hundred participants were screened for intestinal parasitic infections. Stool samples were collected and analysed twice: before (pre-test infection screening) and after (post-test re-infection screening) the THEdS bundle intervention. The experimental clinics received the full intervention package, which included therapeutic treatment, health education on personal hygiene, and sanitation training, while the control clinics received only therapeutic treatment for those found with intestinal parasitic infections. Results: Baseline screening isolated 12 intestinal parasite species with an overall frequency of 65, Ascaris lumbricoides being the most frequent (44.6%). The intervention had a cure rate of 60%, with an odds ratio of 1.42, indicating that the intervention group was 1.42 times more likely to clear parasites than the control group. The relative risk of 1.17 signifies that parasite clearance was 1.17 times more likely with the intervention than without it. Discussion and conclusion: Infection with multiple parasites can cause health defects, especially among HIV/AIDS patients. The efficiency of some HIV vaccines in HIV/AIDS patients is affected because treatment after re-infection amplifies drug resistance, affects the efficacy of front-line drugs, and still permits transmission. In South Africa, treatment for intestinal parasites is usually offered to clinic-attending HIV/AIDS patients upon suspicion but is not mandated for patients being initiated into the antiretroviral therapy (ART) programme. The effectiveness of the THEdS bundle argues for the inclusion of mandatory, regular screening for intestinal parasitic infections among attendees of HIV/AIDS clinics.Keywords: cure rate, HIV/AIDS patients, intestinal parasites, intervention studies, reinfection rate
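For readers unfamiliar with the effect measures quoted above, the sketch below computes an odds ratio and relative risk from a 2×2 clearance table; the cell counts are invented for illustration (chosen only to give effect sizes near those reported) and are not the study's data.

```python
# Odds ratio and relative risk from a 2x2 table of clearance outcomes.
# The counts below are invented for illustration only; they are not the
# study's data, merely numbers chosen to yield similar effect sizes.

cleared_int, not_cleared_int = 60, 40  # intervention arm (assumed)
cleared_ctl, not_cleared_ctl = 51, 49  # control arm (assumed)

risk_int = cleared_int / (cleared_int + not_cleared_int)
risk_ctl = cleared_ctl / (cleared_ctl + not_cleared_ctl)
relative_risk = risk_int / risk_ctl

odds_int = cleared_int / not_cleared_int
odds_ctl = cleared_ctl / not_cleared_ctl
odds_ratio = odds_int / odds_ctl

print(f"relative risk: {relative_risk:.2f}")  # ~1.18 with these counts
print(f"odds ratio:    {odds_ratio:.2f}")     # ~1.44 with these counts
```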
Procedia PDF Downloads 76254 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes
Authors: Igor A. Krichtafovitch
Abstract:
The evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the previously existing 3 phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory solution for these phenomena. The proposed hypothesis offers a logical and plausible explanation of the evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but the accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as a natural origin and development of life, is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. Information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolution theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. Greater memory volume requires a greater number of organisms, and more intellectually advanced ones, for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with an accelerating evolutionary dynamic. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere's computational complexity reaches the critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life's creation and evolution. It logically resolves many puzzling problems of current evolutionary theory, such as speciation as a result of GM purposeful design, the evolutionary development vector as a need for growing global intelligence, punctuated equilibrium (which happens when the two conditions a) and b) above are met), the Cambrian explosion, and mass extinctions, which happen when more intelligent species should replace outdated creatures.Keywords: supercomputer, biological evolution, Darwinism, speciation
Procedia PDF Downloads 164253 Liposome Loaded Polysaccharide Based Hydrogels: Promising Delayed Release Biomaterials
Authors: J. Desbrieres, M. Popa, C. Peptu, S. Bacaita
Abstract:
Because of their favorable properties (non-toxicity, biodegradability, mucoadhesivity, etc.), polysaccharides have been studied as biomaterials and as pharmaceutical excipients in drug formulations. These formulations may be produced in a wide variety of forms, including hydrogels, hydrogel-based particles (or capsules), films, etc. In these formulations, the polysaccharide-based materials are able to provide local delivery of the loaded therapeutic agents, but the delivery can be rapid and not easily time-controllable, due in particular to the burst effect. This leads to a loss in drug efficiency and lifetime. To overcome the consequences of the burst effect, systems involving liposomes incorporated into polysaccharide hydrogels appear to be promising materials in tissue engineering, regenerative medicine, and drug delivery. Liposomes are spherical self-closed structures composed of curved lipid bilayers, which enclose part of the surrounding solvent in their structure. Their simplicity of production, biocompatibility, size and composition similar to those of cells, adjustable size for specific applications, and ability to load hydrophilic and/or hydrophobic drugs make them a revolutionary tool in nanomedicine and the biomedical domain. Drug delivery systems were developed as hydrogels containing chitosan or carboxymethylcellulose (CMC) as polysaccharides and gelatin (GEL) as a polypeptide, and phosphatidylcholine or phosphatidylcholine/cholesterol liposomes able to accurately control this delivery, without any burst effect. Hydrogels based on CMC were covalently crosslinked using glutaraldehyde, whereas chitosan-based hydrogels were doubly crosslinked (ionically, using sodium tripolyphosphate or sodium sulphate, and covalently, using glutaraldehyde). It has been proven that liposome integrity is highly protected during the crosslinking procedure for the formation of the film network. Calcein was used as a model active substance for the delivery experiments. Multilamellar vesicles (MLVs) and small unilamellar vesicles (SUVs) were prepared and compared. The liposomes are well distributed throughout the whole area of the film, and the vesicle distribution is equivalent (for both types of liposomes evaluated) on the film surface as well as deeper (100 microns) in the film matrix. An obvious decrease of the burst effect was observed in the presence of liposomes, as well as a uniform increase of calcein release that continues even at large time scales. Liposomes act as an extra barrier to calcein release. Systems containing MLVs release higher amounts of calcein than systems containing SUVs, although MLVs are more stable in the matrix and diffuse with difficulty; this difference comes from the higher quantity of calcein present within MLVs owing to their larger size. Modeling of the release kinetics curves was performed, and the release of hydrophilic drugs may be described by a multi-scale mechanism characterized by four distinct phases, each of them described by a different kinetics model (the Higuchi equation, the Korsmeyer-Peppas model, etc.). Knowledge of such models will be a very useful tool for designing new formulations for tissue engineering, regenerative medicine, and drug delivery systems.Keywords: controlled and delayed release, hydrogels, liposomes, polysaccharides
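As an illustration of the kinetics models named above, the sketch below fits the Korsmeyer-Peppas law, Mt/M∞ = k·tⁿ, to a release curve with scipy; the time points and fractional-release values are synthetic, invented only to show the fitting step, and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the Korsmeyer-Peppas model, Mt/Minf = k * t**n, to a release
# curve. The data below are synthetic, invented to illustrate the
# fitting step only. By convention the model is fitted to the portion
# of the curve with fractional release below about 0.6; n near 0.5
# indicates Fickian diffusion in a thin film, higher n values indicate
# anomalous transport.

def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)

t_hours = np.array([0.5, 1, 2, 4, 8, 12, 24])                   # assumed
release = np.array([0.08, 0.12, 0.17, 0.24, 0.33, 0.40, 0.55])  # assumed

(k, n), _ = curve_fit(korsmeyer_peppas, t_hours, release, p0=(0.1, 0.5))
print(f"k = {k:.3f}, n = {n:.2f}")
```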
Procedia PDF Downloads 226