Search results for: abnormal trading volume activity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9324

54 An Elasto-Viscoplastic Constitutive Model for Unsaturated Soils: Numerical Implementation and Validation

Authors: Maria Lazari, Lorenzo Sanavia

Abstract:

Mechanics of unsaturated soils has been an active field of research in recent decades. Efficient constitutive models that take into account the partial saturation of soil are necessary to solve a number of engineering problems, e.g., the instability of slopes and cuts due to heavy rainfall. A large number of constitutive models can now be found in the literature that consider fundamental issues associated with unsaturated soil behaviour, such as the volume change and shear strength response to suction or saturation changes. Partially saturated soils may either expand or collapse upon wetting depending on the stress level, and a soil may even experience a reversal in volumetric behaviour during wetting. Shear strength also changes dramatically with the degree of saturation, a related engineering problem being slope failures caused by rainfall. Several state-of-the-art reviews of the topic have been published over the last years, usually providing a thorough discussion of the stress state, the advantages and disadvantages of specific constitutive models, and the latest developments in unsaturated soil modelling. However, only a few studies have focused on the coupling between partial saturation states and time effects on the behaviour of geomaterials. Rate dependency is experimentally observed in the mechanical response of granular materials, and a viscoplastic constitutive model is capable of reproducing creep and relaxation processes. Therefore, in this work an elasto-viscoplastic constitutive model for unsaturated soils is proposed and validated against experimental data. The model is an extension of an existing elastoplastic strain-hardening constitutive model capable of capturing the behaviour of variably saturated soils, based on energy-conjugated stress variables in the framework of superposed continua. The purpose was to develop a model able to deal with possible mechanical instabilities within a consistent energy framework. The model shares the conceptual structure of the elastoplastic laws proposed for bonded geomaterials subject to weathering or diagenesis and can model several kinds of instabilities induced by the loss of hydraulic bonding contributions. The novelty of the proposed formulation is the incorporation of density-dependent stiffness and hardening coefficients, which allows the pycnotropy behaviour of granular materials to be modelled with a single set of material constants. The model has been implemented in the commercial FE platform PLAXIS, widely used in Europe for advanced geotechnical design. The algorithmic strategies adopted for the stress-point algorithm had to be revised to take into account the different approach adopted by the PLAXIS developers in the solution of the discrete non-linear equilibrium equations. An extensive comparison of the model with experimental data reported by different authors is presented to validate it and illustrate its capabilities. After the validation, the effectiveness of the viscoplastic model is demonstrated by numerical simulations of a partially saturated slope failure at the laboratory scale, and the effect of viscosity and degree of saturation on slope stability is discussed.
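
To make the rate-dependent ingredient concrete, the sketch below implements a generic Perzyna-type overstress law in one dimension and shows how it reproduces stress relaxation. It is purely illustrative: the Perzyna form, the parameter values, and the 1D setting are all assumptions, not the authors' multiaxial formulation.

```python
import numpy as np

# Illustrative Perzyna-type overstress viscoplasticity in 1D (assumed, not the
# authors' formulation): eps_vp_rate = gamma * <(f/f0)>^N * sign(sigma),
# with f = |sigma| - sigma_y the overstress beyond the yield limit.
E, sigma_y = 50e6, 100e3            # elastic modulus [Pa], yield stress [Pa] (assumed)
gamma, N, f0 = 1e-6, 1.0, sigma_y   # fluidity, rate exponent, normalizer (assumed)

def relax(sigma0, t_end=3600.0, dt=1.0):
    """Stress relaxation at fixed total strain: sigma decays toward sigma_y."""
    sigma, out = sigma0, []
    for t in np.arange(0.0, t_end, dt):
        f = abs(sigma) - sigma_y                          # overstress
        eps_vp_rate = gamma * max(f / f0, 0.0)**N * np.sign(sigma)
        sigma -= E * eps_vp_rate * dt                     # total strain held constant
        out.append((t, sigma))
    return out

history = relax(sigma0=150e3)
print(f"stress after 1 h: {history[-1][1]/1e3:.1f} kPa")  # relaxes toward 100 kPa
```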

Keywords: PLAXIS software, slope, unsaturated soils, viscoplasticity

Procedia PDF Downloads 225
53 Awareness Creation of Benefits of Antitrypsin-Free Nutraceutical Biopowder for Increasing Human Serum Albumin Synthesis as Possible Adjunct for Management of MDRTB or MDRTB-HIV Patients

Authors: Vincent Oghenekevbe Olughor, Olusoji Mayowa Ige

Abstract:

Except for preexisting liver disease and malnutrition, there are no predilections for low serum albumin (SA) levels in humans. At normal reference levels (4.0-6.0 g/dl), SA is a universal marker for mortality and morbidity risk assessment, where depletion by 1.0 g/dl increases mortality risk by 137% and morbidity by 89%. It has 40 known functions contributing significantly to the sustenance of human life. A depletion in SA to <2.2 g/dl, in most clinical settings worldwide, leads to loss of the oncotic pressure of blood, causing the clinical manifestation of bipedal oedema, during which patients remain conscious. SA also contributes significantly to buffering blood to a life-sustaining pH of 7.35-7.45. A drop in blood pH to <6.9 leads to instant coma and death, which can occur if SA continues to deplete after the manifestation of bipedal oedema. In an intervention study conducted in 2014, following the discovery that “SA is depleted during malaria fever”, a nutraceutical formulated for use as a treatment adjunct to prevent SA depletion to <2.4 g/dl during malaria was found satisfactory after efficacy testing. There are five known types of malaria, caused by apicomplexan parasites of the genus Plasmodium; the most lethal is that caused by Plasmodium falciparum, causing malignant tertian malaria, in which the fever occurring every 48 hours coincides with the dumping of malaria toxins (hemozoin) into the blood, causing contamination: blood must remain sterile. Other apicomplexan parasites, Toxoplasma and Cryptosporidium, are opportunistic infections of HIV. Separate studies showed SA depletion in MDRTB (multidrug-resistant TB) and MDRTB-HIV patients by the same mechanism discovered with malaria, and such depletion will be further complicated whenever apicomplexan parasitic infections co-exist. Both the apicomplexan parasites and the TB pathogen belong to the group of obligate parasites, which replicate only inside their host, and most of them can over-consume host nutrients during parasitaemia. In MDRTB patients, the body repeatedly attempts to prevent SA from depleting to critical levels in the presence of adequate nutrients; in MDRTB-HIV patients it does so only for a while. These groups of patients will, therefore, benefit from the nutraceutical already tested in malaria patients. The nutraceutical bio-powder was formulated (to BP 1988 specification) from twelve nature-based food-grade nutrients containing all the nutrients dedicated to ensuring improved synthesis of albumin by the liver. The nutraceutical was administered daily for 38±2 days to 23 children in a prospective phase-2 clinical trial, and its impact on body weight and core blood parameters was documented at the start and end of the efficacy-testing period. Sixteen children who did not experience malaria-induced SA depletion had a significant SA increase; seven children who experienced malaria-induced SA depletion had an insignificant SA decrease. The packed cell volume percentage (PCV%), a measure of the oxygen-carrying capacity of blood and the amount of nutrients the body can absorb, increased in both groups. The total serum proteins (SA + globulins) increased or decreased within the normal range. In conclusion, MDRTB and MDRTB-HIV patients will benefit from a variant of this nutraceutical when used as a treatment adjunct.

Keywords: antitrypsin-free Nutraceutical, apicomplexan parasites, no predilections for low serum albumin, toxoplasmosis

Procedia PDF Downloads 289
52 Best Practices and Recommendations for CFD Simulation of Hydraulic Spool Valves

Authors: Jérémy Philippe, Lucien Baldas, Batoul Attar, Jean-Charles Mare

Abstract:

The proposed communication deals with the research and development of a rotary direct-drive servo valve for aerospace applications. A key challenge of the project is to downsize the electromagnetic torque motor by reducing the torque required to drive the rotary spool. It is intended to optimize the spool and sleeve geometries by combining a Computational Fluid Dynamics (CFD) approach with commercial optimization software. The present communication addresses an important phase of the project, which consists firstly of gaining confidence in the simulation results. It is well known that the force needed to pilot a sliding spool valve comes from several physical effects: hydraulic forces, friction, and the inertia/mass of the moving assembly. Among them, the flow force is usually a major contributor to the steady-state (or root mean square) driving torque. In recent decades, CFD has gradually become a standard simulation tool for studying fluid-structure interactions. However, in the particular case of high-pressure valve design, the authors have found that the calculated overall hydraulic force depends on the parameterization and options used to build and run the CFD model. To address this issue, the authors selected the standard case of the linear spool valve, which is addressed in detail in numerous scientific references (analytical models, experiments, CFD simulations). The first CFD simulations run by the authors showed that the evolution of the equivalent discharge coefficient vs. Reynolds number at the metering orifice corresponds well to the values predicted by the classical analytical models. Conversely, the simulated flow force was found to be quite different from the value calculated analytically. This drove the authors to investigate in detail the influence of the studied domain and the settings of the CFD simulation. It was first shown that the flow recirculates in the inlet and outlet channels if their length is not sufficient relative to their hydraulic diameter. The dead volume on the uncontrolled orifice side also plays a significant role. These examples highlight the influence of the geometry of the fluid domain considered. The second action was to investigate the influence of the type of mesh, the turbulence models and near-wall approaches, and the numerical solver and discretization scheme order. Two approaches were used to determine the overall hydraulic force acting on the moving spool. First, the force was deduced from the momentum balance on a control domain delimited by the valve inlet and outlet and the spool walls. Second, the overall hydraulic force was calculated from the integral of the pressure and shear forces acting at the boundaries of the fluid domain. This underlined the significant contribution of the viscous forces acting on the spool between the inlet and outlet orifices, which are generally not considered in the literature. It also emphasized the influence of the choices made in the implementation of the CFD calculation and the analysis of its results. With the step-by-step process adopted to increase confidence in the CFD simulations, the authors propose a set of best practices and recommendations for the efficient use of CFD to design high-pressure spool valves.
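
For orientation, the classical analytical flow-force estimate that the discharge-coefficient comparison refers to reduces to a one-line momentum balance at the metering orifice. The sketch below is illustrative only; the geometry and fluid properties are assumed, not those of the studied valve.

```python
import math

# Back-of-the-envelope steady flow force on a spool metering land, using the
# classical jet-angle model (illustrative values, not the paper's geometry).
rho   = 850.0               # hydraulic oil density [kg/m^3]
dp    = 200e5               # pressure drop across the metering orifice [Pa]
Cd    = 0.7                 # discharge coefficient (typical turbulent value)
w     = 0.015               # area gradient: orifice width [m] (assumed)
x     = 0.5e-3              # spool opening [m] (assumed)
theta = math.radians(69.0)  # classical jet angle for a sharp-edged orifice

A = w * x                                # metering area
Q = Cd * A * math.sqrt(2.0 * dp / rho)   # orifice flow
v = Q / (Cd * A)                         # mean jet velocity
F = rho * Q * v * math.cos(theta)        # axial momentum flux = steady flow force
# Equivalent closed form: F = 2 * Cd * A * dp * cos(theta)
print(f"Q = {Q*6e4:.1f} L/min, steady flow force = {F:.1f} N")
```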

Keywords: computational fluid dynamics, hydraulic forces, servovalve, rotary servovalve

Procedia PDF Downloads 45
51 Wear Resistance in Dry and Lubricated Conditions of Hard-anodized EN AW-4006 Aluminum Alloy

Authors: C. Soffritti, A. Fortini, E. Baroni, M. Merlin, G. L. Garagnani

Abstract:

Aluminum alloys are widely used in many engineering applications due to advantages such as high electrical and thermal conductivity, low density, high strength-to-weight ratio, and good corrosion resistance. However, their low hardness and poor tribological properties still limit their use in industrial fields requiring sliding contacts. Hard anodizing is one of the most common solutions for overcoming the insufficient wear resistance of aluminum alloys. In this work, the tribological behavior of hard-anodized AW-4006 aluminum alloys in dry and lubricated conditions was evaluated. Three different hard-anodizing treatments were selected: a conventional one (HA) and two innovative golden hard-anodizing treatments (named G and GP, respectively), which involve sealing the porosity of the anodic aluminum oxides (AAO) with silver ions at different temperatures. Before the wear tests, all AAO layers were characterized by scanning electron microscopy (VPSEM/EDS), X-ray diffractometry, roughness (Ra and Rz), microhardness (HV0.01), nanoindentation, and scratch tests. Wear tests were carried out according to the ASTM G99-17 standard using a ball-on-disc tribometer. The tests were performed in triplicate under a 2 Hz constant-frequency oscillatory motion, a maximum linear speed of 0.1 m/s, normal loads of 5, 10, and 15 N, and a sliding distance of 200 m. A 100Cr6 steel ball 10 mm in diameter was used as the counterpart material. All tests were conducted at room temperature, in dry and lubricated conditions. Considering recent regulations on environmental hazards, four bio-lubricants were considered after assessing their chemical composition (in terms of unsaturation number, UN) and viscosity: olive, peanut, sunflower, and soybean oils. The friction coefficient was recorded by the tribometer. The wear rate of the anodized surfaces was evaluated by measuring the cross-section area of the wear track with a non-contact 3D profilometer. Each area value, obtained as an average of four measurements of cross-section areas along the track, was used to determine the wear volume. The worn surfaces were analyzed by VPSEM/EDS. Finally, in agreement with the DoE methodology, a statistical analysis was carried out to identify the factors with the greatest influence on the friction coefficients and wear rates. In all conditions, results show that the friction coefficient increased with increasing normal load. In the wear tests under dry sliding conditions, irrespective of the type of anodizing treatment, metal transfer between the mating materials was observed over the anodic aluminum oxides. During sliding at higher loads, the detachment of the metallic film also caused the delamination of some regions of the wear track. In the wear tests under lubricated conditions, the natural oils with high percentages of oleic acid (i.e., olive and peanut oils) maintained high friction coefficients and low wear rates. Irrespective of the type of oil, small microcracks were visible over the AAO layers. Based on the statistical analysis, the type of anodizing treatment and the magnitude of the applied load were the main factors influencing the friction coefficient and wear rate values. Nevertheless, an interaction between bio-lubricants and load magnitude could occur during the tests.
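
A minimal sketch of the wear-rate post-processing described above, with assumed numbers standing in for the profilometer data: the wear volume is the mean track cross-section times the track circumference, and an Archard-style specific wear rate normalizes by load and sliding distance.

```python
import math

# Illustrative post-processing of a ball-on-disc test (values assumed, not
# measured data from the paper).
areas_mm2 = [0.0021, 0.0019, 0.0023, 0.0020]  # 4 profilometer cross-sections [mm^2]
track_radius_mm = 8.0                          # wear track radius (assumed)
load_N, sliding_m = 10.0, 200.0                # from the stated test conditions

mean_area = sum(areas_mm2) / len(areas_mm2)
wear_volume_mm3 = mean_area * 2.0 * math.pi * track_radius_mm
specific_wear_rate = wear_volume_mm3 / (load_N * sliding_m)  # [mm^3/(N m)]
print(f"V = {wear_volume_mm3:.4f} mm^3, k = {specific_wear_rate:.2e} mm^3/(N m)")
```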

Keywords: hard anodizing treatment, silver ions, bio-lubricants, sliding wear, statistical analysis

Procedia PDF Downloads 151
50 Big Data Applications for the Transport Sector

Authors: Antonella Falanga, Armando Cartenì

Abstract:

Today, an unprecedented amount of data coming from several sources, including mobile devices, sensors, tracking systems, and online platforms, characterizes our lives. The term “big data” refers not only to the quantity of data but also to the variety and speed of data generation. These data hold valuable insights that, when extracted and analyzed, facilitate informed decision-making. The 4Vs of big data - velocity, volume, variety, and value - highlight essential aspects: the rapid generation, vast quantities, diverse sources, and potential added value of these kinds of data. This surge of information has revolutionized many sectors: business, with improved decision-making processes; healthcare, with clinical record analysis and medical research; education, with enhanced teaching methodologies; agriculture, with optimized crop management; finance, with risk assessment and fraud detection; media and entertainment, with personalized content recommendations; emergency management, with real-time response during crises and events; and mobility, with urban planning and the design/management of public and private transport services. Big data's pervasive impact enhances societal aspects, elevating quality of life, service efficiency, and problem-solving capacities. However, during this transformative era, new challenges arise, including data quality, privacy, data security, cybersecurity, interoperability, the need for advanced infrastructures, and staff training. Within the transportation sector (the one investigated in this research), applications span the planning, design, and management of systems and mobility services. Among the most common big data applications within the transport sector are real-time traffic monitoring, bus/freight vehicle route optimization, vehicle maintenance, road safety, and autonomous and connected vehicle applications. Benefits include reductions in travel times, road accidents, and pollutant emissions. Within this context, proper transport demand estimation is crucial for sustainable transportation planning. Evaluating the impact of sustainable mobility policies starts with a quantitative analysis of travel demand, and achieving transportation decarbonization goals hinges on precise estimates of demand for individual transport modes. Emerging technologies, offering substantial big data at lower costs than traditional methods, play a pivotal role in this context. Starting from these considerations, this study explores the usefulness of big data for transport demand estimation. This research focuses on leveraging (big) data collected during the COVID-19 pandemic to estimate the evolution of mobility demand in Italy. Estimation results reveal, in the post-COVID-19 era, more than 96 million national daily trips, about 2.6 trips per capita, with a mobile population of more than 37.6 million Italian travelers per day. Overall, this research allows us to conclude that big data enhance rational decision-making for mobility demand estimation, which is imperative for adeptly planning and allocating investments in transportation infrastructures and services.
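
A quick arithmetic check of the headline figures (values taken from the abstract; note that the trip rate is computed here over the mobile population, which appears to be how the ~2.6 figure arises):

```python
# Consistency check of the reported post-COVID-19 figures (numbers from the
# abstract; "per capita" is taken over the mobile population, i.e. the people
# actually travelling on a given day).
daily_trips       = 96e6    # national daily trips
mobile_population = 37.6e6  # Italian travellers per day

trips_per_traveller = daily_trips / mobile_population
print(f"{trips_per_traveller:.2f} trips per mobile person per day")  # ~2.55, consistent with ~2.6
```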

Keywords: big data, cloud computing, decision-making, mobility demand, transportation

Procedia PDF Downloads 65
49 A Simple Chemical Approach to Regenerating Strength of Thermally Recycled Glass Fibre

Authors: Sairah Bashir, Liu Yang, John Liggat, James Thomason

Abstract:

Glass fibre is currently used as reinforcement in over 90% of all fibre-reinforced composites produced. The high rigidity and chemical resistance of these composites are required for optimum performance but unfortunately result in poor recyclability; when such materials are no longer fit for purpose, they are frequently deposited in landfill sites. Recycling technologies, for example thermal treatment, can be employed to address this issue; temperatures typically between 450 and 600 °C are required to degrade the rigid polymeric matrix and subsequently extract the fibrous reinforcement. However, due to the severe thermal conditions used in the recycling procedure, the glass fibres become too weak for reprocessing in second-life composite materials. In addition, more stringent legislation is being put in place regarding the disposal of composite waste, so it is becoming increasingly important to develop long-term recycling solutions for such materials. In particular, the development of a cost-effective method to regenerate the strength of thermally recycled glass fibres will have a positive environmental effect, as a reduced volume of composite material will be destined for landfill. This research study has demonstrated the positive impact of sodium hydroxide (NaOH) and potassium hydroxide (KOH) solutions, prepared at relatively mild temperatures and at concentrations of 1.5 M and above, on the strength of heat-treated glass fibres. Alkaline treatments can therefore potentially be applied to glass fibres recycled from composite waste to allow their reuse in second-life materials. The strength recovery process is being optimised by varying reaction parameters such as the molarity of the alkaline solution and the treatment time. It is believed that deep V-shaped surface flaws are common on severely damaged fibre surfaces and are effectively removed, forming smooth U-shaped structures, following alkaline treatment. Although such surface flaws were long believed to be present on glass fibres, they had not previously been observed; in this investigation they have been detected through analytical techniques such as AFM (atomic force microscopy) and SEM (scanning electron microscopy). Reaction conditions such as the molarity of the alkaline solution affect the degree of etching of the glass fibre surface, and therefore the extent to which fibre strength is recovered. A novel method of determining the etching rate of glass fibres after alkaline treatment has been developed, and the data acquired can be correlated with strength. By varying reaction conditions such as alkaline solution temperature and molarity, the activation energy of the glass etching process and the reaction order can be calculated, respectively. The promising results obtained from the NaOH and KOH treatments have opened an exciting route to strength regeneration of thermally recycled glass fibres, and the optimisation of the alkaline treatment process is being continued in order to produce recycled fibres with properties that match original glass fibre products. The reuse of such glass filaments indicates that closed-loop recycling of glass fibre reinforced composite (GFRC) waste can be achieved. In fact, the development of a closed-loop recycling process for GFRC waste is already underway in this research study.
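
The kinetic analysis mentioned above amounts to an Arrhenius fit of the etching rate against temperature. A minimal sketch with placeholder etch rates (assumed values, not the study's measurements):

```python
import numpy as np

# Fit an Arrhenius law ln(k) = ln(A) - Ea/(R*T) to etching rates measured at
# several bath temperatures. Rates and temperatures below are illustrative.
R = 8.314                                   # gas constant [J/(mol K)]
T = np.array([313.0, 323.0, 333.0, 343.0])  # bath temperatures [K] (assumed)
k = np.array([0.8, 1.7, 3.4, 6.3])          # etch rates [nm/min] (assumed)

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R                             # activation energy [J/mol]
print(f"Ea = {Ea/1000:.1f} kJ/mol, A = {np.exp(intercept):.3g} nm/min")
# The reaction order follows analogously from a log-log fit of rate vs. molarity.
```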

Keywords: glass fibers, glass strengthening, glass structure and properties, surface reactions and corrosion

Procedia PDF Downloads 257
48 Runoff Estimates of Rapidly Urbanizing Indian Cities: An Integrated Modeling Approach

Authors: Rupesh S. Gundewar, Kanchan C. Khare

Abstract:

Runoff from urban areas comes mostly from manmade structures, with a few natural contributors. The manmade structures are buildings, roads, and other paved areas, whereas the natural contributors include groundwater and overland flows. Runoff alleviation is provided by both manmade and natural storage. Manmade storage consists of storage tanks or other structures such as soakaways or soak pits, which are more common in western and European countries. Natural storage includes catchment slope, infiltration, catchment length, channel rerouting, drainage density, and depression storage. A literature survey on manmade and natural storage/inflow has quantified the percentage contribution of each. Sanders et al. reported that a vegetation canopy reduces runoff by 7% to 12%. Nassif et al. reported that catchment slope has an impact of 16% on rainfall runoff for bare standard soil and 24% for grassed soil. Infiltration, being dependent on the pervious/impervious ratio, is catchment specific, but the literature reports a 15% to 30% loss of rainfall runoff across various catchment study areas. Catchment length and channel rerouting also play a considerable role in reducing rainfall runoff. Ground infiltration inflow adds to the runoff where the groundwater table is very shallow and the soil saturates even in a lower-intensity storm; this inflow together with surface inflow contributes about 2% of the total runoff volume. Considering these various contributing factors, the literature survey indicates that an integrated modelling approach is needed. Traditional storm water network models can predict runoff to a fair degree of accuracy provided no interactions with receiving waters (river, sea, canal, etc.), ground infiltration, or treatment works are assumed. When such interactions are significant, it becomes difficult to reproduce the actual flood extent using the traditional discrete modelling approach, and as a result the true flooding situation is rarely captured accurately. Since the development of spatially distributed hydrologic models, predictions have become more accurate, at the cost of requiring more accurate spatial information. The integrated approach provides a greater understanding of the performance of the entire catchment. It makes it possible to identify the sources of flow in the system, understand how flow is conveyed, and assess its impact on the receiving body. It also identifies important pain points, hydraulic controls, and the sources of flooding, which could not easily be understood with a discrete modelling approach. This also enables decision-makers to identify solutions that can be spread throughout the catchment rather than concentrated at the single point where the problem manifests. It can thus be concluded from the literature survey that the representation of urban details can be a key differentiator in successfully understanding flooding issues. The intent of this study is to accurately predict the runoff from impermeable areas in an urban area in India. A representative area for which data were available has been selected, and predictions have been made and corroborated with actual measured data.
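
For scale, the traditional discrete baseline that such integrated models improve upon can be as simple as the rational method, Q = C·i·A. The sketch below is generic and illustrative; the rational method and all catchment values are assumptions for illustration, not the study's model:

```python
# Rational-method peak runoff estimate, Q = C * i * A (illustrative values).
C = 0.75          # runoff coefficient for a dense urban catchment (assumed)
i_mm_per_h = 50.0 # rainfall intensity (assumed)
A = 2.5e6         # catchment area [m^2] (assumed, 2.5 km^2)

i = i_mm_per_h / 1000.0 / 3600.0   # convert to [m/s]
Q = C * i * A                      # peak discharge [m^3/s]
print(f"rational-method peak runoff ~ {Q:.1f} m^3/s")
```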

Keywords: runoff, urbanization, impermeable response, flooding

Procedia PDF Downloads 250
47 Exploring Closed-Loop Business Systems Which Eliminates Solid Waste in the Textile and Fashion Industry: A Systematic Literature Review Covering the Developments Occurred in the Last Decade

Authors: Bukra Kalayci, Geraldine Brennan

Abstract:

Introduction: Over the last decade, a proliferation of literature on the textile and fashion business in the context of sustainable production and consumption has emerged. However, the economic and environmental benefits of solid waste recovery have not been comprehensively investigated, so end-of-life and end-of-use textile waste management remains a gap. Solid textile waste reuse and recycling principles of the circular economy need to be developed to close the disposal stage of the textile supply chain. Environmental problems associated with the over-production and over-consumption of textile products are growing: together with a growing population and fast-fashion culture, the share of solid textile waste in municipal waste is increasing. Focusing on the post-consumer textile waste literature, this research explores the opportunities, obstacles, and enablers or success factors associated with closed-loop textile business systems. Methodology: A systematic literature review was conducted to identify best practices and gaps in the existing body of knowledge related to closed-loop post-consumer textile waste initiatives over the last decade. Selected keywords, namely ‘cradle-to-cradle’, ‘circular* economy*’, ‘closed-loop*’, ‘end-of-life*’, ‘reverse* logistic*’, ‘take-back*’, ‘remanufacture*’ and ‘upcycle*’, combined (AND) with ‘fashion*’, ‘garment*’, ‘textile*’, ‘apparel*’ and ‘clothing*’, were used, and the time frame of the review was set from 2005 to 2017. To obtain broad coverage, the Web of Knowledge and Science Direct databases were used, and peer-reviewed journal articles were chosen. The keyword search identified 299 papers, which were further refined to 54 relevant papers that form the basis of the in-depth thematic analysis. Preliminary findings: A key finding was that the existing literature is predominantly conceptual rather than applied or empirical work. Moreover, the enablers or success factors, obstacles, and opportunities for implementing closed-loop systems in the textile industry were not clearly articulated, and the following considerations were largely overlooked in the literature. While the circular economy envisages multiple cycles of discarded products, components, or materials, most research to date has tended to focus on a single cycle. Thus, calculations of the environmental and economic benefits of closed-loop systems are limited to one cycle, which does not adequately explore the feasibility or potential benefits of multiple cycles. Additionally, the time textile products spend between the point of sale and end-of-use/end-of-life return is a crucial factor. Despite past efforts to study closed-loop textile systems, a clear gap in the literature is the lack of an evaluation framework that enables manufacturers to clarify the reusability potential of textile products through indicators related to: quality, design, lifetime, length of time between manufacture and product return, volume of collected disposed products, material properties, and brand segment considerations (e.g., fast fashion versus luxury brands).

Keywords: circular fashion, closed loop business, product service systems, solid textile waste elimination

Procedia PDF Downloads 204
46 Investigation of Different Electrolyte Salts Effect on ZnO/MWCNT Anode Capacity in LIBs

Authors: Şeyma Dombaycıoğlu, Hilal Köse, Ali Osman Aydın, Hatem Akbulut

Abstract:

Rechargeable lithium-ion batteries (LIBs) have been considered one of the most attractive energy storage choices for laptop computers, electric vehicles, and cellular phones owing to their high energy and power density. Compared with conventional carbonaceous materials, transition metal oxides (TMOs) have attracted great interest and stand out among versatile novel anode materials due to their high theoretical specific capacity, wide availability, and good safety performance. ZnO, as an anode material for LIBs, has a high theoretical capacity of 978 mAh g-1, much higher than that of the conventional graphite anode (∼370 mAh g-1). However, several major problems still hinder the practical use of ZnO powders as an anode material for LIBs: poor cyclability, resulting from the severe volume expansion and contraction during the alloying-dealloying cycles with Li+ ions and the associated charge transfer process, and the pulverization and agglomeration of individual particles, which drastically reduce the total entrance/exit sites available for Li+ ions. Therefore, a great deal of effort has been devoted to overcoming these problems, and many methods have been developed. In most of these methods, it is claimed that carbon nanotubes (CNTs) will radically improve the performance of batteries, because their unique structure may especially enhance the kinetic properties of the electrodes and result in an extremely high specific charge compared with the theoretical limits of graphitic carbon. Owing to the outstanding properties of CNTs, a MWCNT buckypaper substrate is considered here as a buffer material to prevent mechanical disintegration of the anode material during battery operation. As the bridge connecting the positive and negative electrodes, the electrolyte plays a critical role in the overall electrochemical performance of the cell, including rate, capacity, durability, and safety. Commercial electrolytes for Li-ion batteries normally consist of certain lithium salts in mixed organic linear and cyclic carbonate solvents. LiPF6 is most commonly used for commercial fabrication owing to its remarkable features, including high solubility, good ionic conductivity, a high dissociation constant, and satisfactory electrochemical stability. Besides LiPF6, LiBF4 is well known as a conducting salt for LIBs; it shows better temperature stability in organic carbonate-based solutions and less moisture sensitivity than LiPF6. In this work, free-standing zinc oxide (ZnO) and multiwalled carbon nanotube (MWCNT) nanocomposite materials were prepared by a sol-gel technique, giving a high-capacity anode material for lithium-ion batteries. Electrolyte solutions (containing 1 m Li+) were prepared with different Li salts in a glove box. For this purpose, LiPF6 and LiBF4, as well as mixtures of these salts, were dissolved in EC:DMC solvents (1:1, w/w). CR2016 cells were assembled using these electrolyte solutions, the ZnO/MWCNT buckypaper nanocomposites as working electrodes, metallic lithium as the counter electrode, and polypropylene (PP) as the separator. To investigate the effect of the different Li salts on the electrochemical performance of the ZnO/MWCNT nanocomposite anode material, electrochemical tests were performed at room temperature.

Keywords: anode, electrolyte, Li-ion battery, ZnO/MWCNT

Procedia PDF Downloads 231
45 Degradation of Diclofenac in Water Using FeO-Based Catalytic Ozonation in a Modified Flotation Cell

Authors: Miguel A. Figueroa, José A. Lara-Ramos, Miguel A. Mueses

Abstract:

Pharmaceutical residues are a class of emerging contaminants of anthropogenic origin that are present in a myriad of waters with which human beings interact daily, and they are starting to affect the ecosystem directly. Conventional wastewater treatment systems are not capable of degrading these pharmaceutical effluents because their designs cannot handle the intermediate products and biological effects occurring during treatment. That is why it is necessary to hybridize conventional wastewater systems with non-conventional processes. In the specific case of an ozonation process, its efficiency highly depends on a perfect dispersion of ozone, long interaction times between the gas and liquid phases, and the size of the ozone bubbles formed throughout the reaction system. To improve these parameters, the use of a modified flotation cell has recently been proposed as a reactive system; flotation cells are used at an industrial level to keep particles in suspension and spread gas bubbles through the reactor volume at a high rate. The objective of the present work is the development of a mathematical model that can closely predict the kinetic rates of the reactions taking place in the flotation cell at the experimental scale, by identifying proper reaction mechanisms that take into account the modified chemical and hydrodynamic factors in the FeO-catalyzed ozonation of diclofenac aqueous solutions in a flotation cell. The methodology comprises three steps. First, an experimental phase, in which a modified flotation cell reactor is used to analyze the effects of ozone concentration and catalyst loading on the degradation of diclofenac aqueous solutions; performance is evaluated through an ozone-utilization index, which relates the amount of ozone supplied to the system per milligram of degraded pollutant. Next, a theoretical phase, in which the reaction mechanisms taking place during the experiments are identified and proposed, detailing the multiple direct and indirect reactions the system goes through. Finally, a kinetic model is obtained that mathematically represents the reaction mechanisms, with adjustable parameters that can be fitted to the experimental results to give the model proper physical meaning. The expected outcome is a robust reaction rate law that can simulate the improved mineralization of diclofenac in water using the modified flotation cell reactor. By means of this methodology, the following results were obtained: a robust reaction pathway mechanism showcasing the intermediates, free radicals, and products of the reaction; optimal values of the reaction rate constants, with simulated Hatta numbers lower than 3 for the modeled system; degradation percentages of 100%; and a TOC (total organic carbon) removal of 69.9%, requiring an optimal FeO catalyst loading of only 0.3 g/L. These results showed that a flotation cell can be used as a reactor in ozonation, catalytic ozonation, and photocatalytic ozonation processes, since it produces high reaction rate constants and reduces mass transfer limitations (avoiding Ha > 3) by producing microbubbles and maintaining a good catalyst distribution.
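
A minimal sketch of the two performance measures described above, using placeholder concentrations rather than the study's data: a pseudo-first-order fit of the diclofenac decay and the ozone-utilization index (ozone supplied per milligram of pollutant degraded):

```python
import numpy as np

# Illustrative kinetic post-processing (all concentrations assumed).
t      = np.array([0.0, 5.0, 10.0, 20.0, 30.0])  # time [min]
C      = np.array([30.0, 14.0, 6.8, 1.5, 0.3])   # diclofenac [mg/L] (assumed)
o3_fed = 120.0                                   # total O3 supplied [mg/L equivalent]

# Pseudo-first-order fit: ln(C/C0) = -k_obs * t
k_obs = -np.polyfit(t, np.log(C / C[0]), 1)[0]
degraded = C[0] - C[-1]
utilization_index = o3_fed / degraded            # mg O3 per mg pollutant degraded
print(f"k_obs = {k_obs:.3f} 1/min, ozone utilization = {utilization_index:.1f} mg O3/mg")
```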

Keywords: advanced oxidation technologies, iron oxide, emergent contaminants, AOTS intensification

Procedia PDF Downloads 113
44 Big Data Applications for Transportation Planning

Authors: Antonella Falanga, Armando Cartenì

Abstract:

"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning

Procedia PDF Downloads 61
43 Development of DEMO-FNS Hybrid Facility and Its Integration in Russian Nuclear Fuel Cycle

Authors: Yury S. Shpanskiy, Boris V. Kuteev

Abstract:

Development of a fusion-fission hybrid facility based on the superconducting conventional tokamak DEMO-FNS has been underway in Russia since 2013. The main design goal is to demonstrate the technical feasibility and outline the prospects of industrial hybrid technologies providing the production of neutrons, fuel nuclides, tritium, high-temperature heat, and electricity, as well as subcritical transmutation, in fusion-fission hybrid systems. The facility should operate in a steady-state mode at a fusion power of 40 MW with fission reactions at 400 MW. Major tokamak parameters are the following: major radius R=3.2 m, minor radius a=1.0 m, elongation 2.1, triangularity 0.5. The design provides a neutron wall loading of ~0.2 MW/m² and a lifetime neutron fluence of ~2 MWa/m², with the surface area of the active cores and tritium breeding blanket being ~100 m². Core plasma modelling showed that the neutron yield of ~10¹⁹ n/s is maximal when the tritium/deuterium density ratio is 1.5-2.3. The design of the electromagnetic system (EMS) defined its basic parameters, accounting for the strength and stability of the coils, and identified the most problematic nodes in the toroidal field coils and the central solenoid. The EMS generates the toroidal, poloidal, and correcting magnetic fields necessary for plasma shaping and confinement inside the vacuum vessel. The EMS consists of eighteen superconducting toroidal field coils, eight poloidal field coils, five sections of a central solenoid, correction coils, and in-vessel coils for vertical plasma control. Supporting structures, the thermal shield, and the cryostat maintain its operation. The EMS operates with pulse durations of up to 5000 hours at plasma currents of up to 5 MA. The vacuum vessel (VV) is an all-welded two-layer toroidal shell placed inside the EMS. The free space between the vessel shells is filled with water and boron steel plates, which form the neutron protection of the EMS. The VV volume is 265 m³, and its mass with manifolds is 1800 tons. The nuclear blanket of the DEMO-FNS facility was designed to provide the functions of minor actinide transmutation, tritium production, and enrichment of spent nuclear fuel. Vertical overloading of the subcritical active cores with minor actinides was chosen as the prospective option. Analysis of the device neutronics and the thermal-hydraulic characteristics of the hybrid blanket has been performed for the system, covering transmutation of minor actinides, production of tritium, and enrichment of spent nuclear fuel. A study of the role of FNS facilities in the Russian closed nuclear fuel cycle was performed. It showed that during ~100 years of operation, three FNS facilities with a fission power of 3 GW, each controlled by a 40 MW fusion neutron source, can burn 98 tons of minor actinides, and 198 tons of Pu-239 can be produced for the startup loading of 20 fast reactors. Instead of Pu-239, up to 25 kg of tritium per year may be produced for the startup of fusion reactors by using blocks with lithium orthosilicate instead of fissile breeder blankets.

Keywords: fusion-fission hybrid system, conventional tokamak, superconducting electromagnetic system, two-layer vacuum vessel, subcritical active cores, nuclear fuel cycle

Procedia PDF Downloads 147
42 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette

Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida

Abstract:

Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine, and flavoring agents). However, caution is still needed when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. On another note, low actuation power is beneficial in aerosol-generating devices since it results in reduced emission of toxic chemicals. For e-cigarettes, low heating powers can be considered powers below 10 W, against the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, comprehensive studies of the PSDs and velocities of e-cigarettes under standard testing conditions at relatively low heating powers are still lacking. The present study aims to measure the particle number count and size distribution of the undiluted aerosol of a recent fourth-generation e-cigarette at low powers, within 6.5 W, using a real-time particle counter (time-of-flight method). Also, the temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets are examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, applications of PDA to e-cigarette aerosol measurement are rarely reported. In the present study, preliminary particle-count results for undiluted aerosols measured by the time-of-flight method showed that an increase of heating power from 3.5 W to 6.5 W resulted in enhanced asymmetry of the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation, and coagulation processes acting on the aerosol at higher heating power. A novel mathematical expression combining exponential, Gaussian, and polynomial (EGP) distributions was proposed and successfully describes the asymmetric PSD. The count median aerodynamic diameter and the geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, while the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical decay of the centerline streamwise mean velocity of the aerosol jet, along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure, and a discussion of the results will be provided. The particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity, and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
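
For reference, the two size statistics quoted above follow directly from a particle ensemble. The sketch below recovers them from a synthetic log-normal sample; the EGP expression proposed in the paper is a refinement of this baseline for the asymmetric distributions seen at higher power:

```python
import numpy as np

# Recover the count median aerodynamic diameter (CMAD) and geometric standard
# deviation (GSD) from a synthetic log-normal particle sample (not measured data;
# the target values are chosen inside the reported ranges).
rng = np.random.default_rng(0)
cmad_true, gsd_true = 0.70, 1.38     # within the 0.67-0.73 um / 1.32-1.43 ranges
d = rng.lognormal(mean=np.log(cmad_true), sigma=np.log(gsd_true), size=100_000)

cmad = np.median(d)                              # count median diameter [um]
gsd  = np.exp(np.std(np.log(d)))                 # geometric standard deviation
print(f"CMAD = {cmad:.2f} um, GSD = {gsd:.2f}")  # recovers ~0.70 and ~1.38
```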

Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry

Procedia PDF Downloads 49
41 Aeroelastic Stability Analysis in Turbomachinery Using Reduced Order Aeroelastic Model Tool

Authors: Chandra Shekhar Prasad, Ludek Pesek Prasad

Abstract:

Present-day aero-engine fan blades, turboprop propellers, and gas or steam turbine low-pressure blades are getting bigger and lighter and thus becoming more flexible. Therefore, flutter, forced blade response, and vibration-related failure of high-aspect-ratio blades are of main concern for designers and need to be addressed properly in order to achieve a successful component design. At the preliminary design stage, a large number of design iterations are needed to achieve a flutter-free, safe design. Most of the numerical methods used for aeroelastic analysis are field-based methods such as the finite difference method, finite element method, finite volume method, or coupled schemes. These numerical schemes solve the coupled fluid-flow/structural equations based on the full Navier-Stokes (NS) equations together with the equations of structural mechanics. Such schemes provide very accurate results if modeled properly; however, they are computationally very expensive and need large computing resources along with considerable personal expertise. Therefore, they are not the first choice for aeroelastic analysis during the preliminary design phase. A reduced order aeroelastic model (ROAM) with acceptable accuracy and fast execution is more in demand at this stage. Similar ROAMs are being used by other researchers for aeroelastic and forced response analysis of turbomachinery. In the present paper, a new medium-fidelity ROAM is developed and implemented in a numerical tool to simulate aeroelastic stability phenomena in turbomachinery as well as flexible wings. A hybrid flow solver based on a viscous-inviscid coupled 3D panel method (PM) and a 3D discrete vortex particle method (DVM) is developed, with viscous parameters estimated using a boundary layer (BL) approach. This method can simulate flow separation and is a good compromise between accuracy and speed compared to CFD. In the second phase of the research work, the flow solver (PM) will be coupled with a reduced-order nonlinear beam element method (BEM) based FEM structural solver (with multibody capabilities) to perform complete aeroelastic simulations of steam turbine bladed disks, propellers, fan blades, aircraft wings, etc. A partitioned coupling approach is used for the fluid-structure interaction (FSI), as sketched below. The numerical results are compared with experimental data for different test cases; for the blade cascade test case, the experimental data are obtained from in-house lab experiments at IT CAS. Furthermore, the results from the new aeroelastic model will be compared with classical CFD-CSD based aeroelastic models. The proposed methodology for the aeroelastic stability analysis of gas or steam turbine blades, propellers, or fan blades will provide researchers and engineers with a fast, cost-effective, and efficient tool for aeroelastic (classical flutter) analysis of different designs at the preliminary design stage, where large numbers of design iterations are required in a short time frame.
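
A conceptual sketch of such a partitioned (staggered) FSI exchange is given below. Both solvers are stubbed with placeholder relations, so only the coupling pattern is shown, not the actual PM/DVM aerodynamics or the beam FEM:

```python
# Staggered partitioned FSI loop: fluid and structural solvers exchange loads
# and displacements each step. Solver internals are placeholders (assumed).
def fluid_solve(displacements):
    """Stub for the panel-method/vortex-particle solver: returns blade loads."""
    return [0.9 * u + 1.0 for u in displacements]   # placeholder aerodynamics

def structure_solve(loads):
    """Stub for the nonlinear beam FEM: returns updated displacements."""
    return [0.01 * f for f in loads]                # placeholder compliance

u = [0.0] * 10                     # initial blade displacements
for step in range(100):            # time marching
    f = fluid_solve(u)             # 1) aerodynamic loads on the current shape
    u_new = structure_solve(f)     # 2) structural response to those loads
    # 3) under-relaxation stabilizes the loosely coupled exchange
    u = [0.5 * a + 0.5 * b for a, b in zip(u, u_new)]
print(f"converged tip displacement (placeholder units): {u[-1]:.4f}")
```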

Keywords: aeroelasticity, beam element method (BEM), discrete vortex particle method (DVM), classical flutter, fluid-structure interaction (FSI), panel method, reduced order aeroelastic model (ROAM), turbomachinery, viscous-inviscid coupling

Procedia PDF Downloads 268
40 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys

Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo

Abstract:

Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for deriving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work perfectly in some situations, such as fractured zones. Cutting-edge technologies have emerged to solve and optimize these issues. Internet of Things (IoT), Edge Computing, and Cloud Computing technologies (ECt and CCt, respectively) are among the newest and most widely used artificial intelligence-related methods employed for geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and mechanical-geological data collection on rocks, covering quantities such as temperature, movement, pressure, or stress levels. Assessment of structural integrity, especially for cap rocks within hydrocarbon systems, and of rock mass behavior, in support of activities such as enhanced oil recovery (EOR) and underground gas storage (UGS) or to improve safety risk management (SRM) and potential hazard identification (PHI), are further benefits of IoT technologies. ECt can process, aggregate, and analyze the data collected by IoT immediately, on a real-time scale, providing detailed insights into the behavior of rocks in various situations (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. Therefore, this state-of-the-art and useful technology can support autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in the mining and geotechnics industries). Besides, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments; it must be mentioned that this feature is very important for environmental goals. More often than not, rock mechanical studies draw on different kinds of data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing a great volume and variety of information, which can be very useful in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical information, especially in fractured zones within vast areas. It is also a suitable medium for sharing extensive information on rock mechanics, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have empowered real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. Therefore, by employing IoT, CCt, and ECt, underground operations have experienced a significant boost, allowing timely and informed actions based on real-time data insights. The successful implementation of IoT, CCt, and ECt has led to safer operations, optimized processes, and environmentally conscious approaches in underground geological endeavors.

Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations

Procedia PDF Downloads 64
39 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of the mean signals extracted from ICA components corresponding to 15 well-known networks to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. The estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance or the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out of the time series. We applied an independent component analysis (ICA) with the GIFT toolbox, using the Infomax approach with 21 components. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), in the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best network for discriminating between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
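
A compact sketch of the train/rank/retrain pipeline described above. The study used R; this Python/scikit-learn version runs on synthetic data of the same 37×15 shape, so every number it prints is a placeholder:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the 37-subject x 15-network matrix of mean ICA signals
rng = np.random.default_rng(42)
X = rng.normal(size=(37, 15))
y = np.array([0] * 19 + [1] * 18)          # 19 controls, 18 early-MS patients

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75,
                                          stratify=y, random_state=0)

# Classifiers trained on all 15 features
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("all-feature test accuracy:", rf.score(X_te, y_te), svm.score(X_te, y_te))

# Feature ranking: Gini importance for RF; RFE for the SVM (RFE needs a linear
# importance signal, hence a linear-kernel SVM for the ranking step)
rank_rf = np.argsort(rf.feature_importances_)[::-1]
rank_svm = np.argsort(RFE(SVC(kernel="linear"),
                          n_features_to_select=1).fit(X_tr, y_tr).ranking_)

# Retrain on the single top-ranked network and re-evaluate on the test set
top = [rank_rf[0]]
rf1 = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr[:, top], y_tr)
print("top network:", top[0], "single-feature accuracy:", rf1.score(X_te[:, top], y_te))
```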

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 241
38 Introducing Transport Engineering through Blended Learning Initiatives

Authors: Kasun P. Wijayaratna, Lauren Gardner, Taha Hossein Rashidi

Abstract:

Undergraduate students entering university over the last 2 to 3 years were generally born in the mid-1990s. This generation of students has been exposed to the internet, and to a desire for and dependency on technology, since childhood. Brains develop based on environmental influences, and technology has wired this generation of students to be attuned to sophisticated and complex visual imagery, indicating that visual forms of learning may be more effective than the traditional lecture or discussion formats. Furthermore, post-millennials' perspectives on careers are not focused solely on stability and income but are strongly driven by interest, entrepreneurship, and innovation. Accordingly, it is important for educators to acknowledge the generational shift and tailor the delivery of learning material to meet the expectations of the students and the needs of industry. In the context of transport engineering, effectively teaching undergraduate students the basic principles of transport planning, traffic engineering, and highway design is fundamental to the progression of the profession from both practice and research perspectives. Recent developments in technology have transformed the discipline as practitioners and researchers move away from the traditional “pen and paper” approach to methods involving the use of computer programs and simulation. Further, enhanced accessibility of technology for students has changed the way they understand and learn the material being delivered at tertiary education institutions. As a consequence, blended learning approaches, which aim to integrate face-to-face teaching with flexible self-paced learning resources, have become prevalent to provide scalable education that satisfies the expectations of students. This research study involved the development of a series of ‘Blended Learning’ initiatives implemented within an introductory transport planning and geometric design course, CVEN2401: Sustainable Transport and Highway Engineering, taught at the University of New South Wales, Australia. CVEN2401 was modified by conducting interactive polling exercises during lectures, including weekly online quizzes, offering a series of supplementary learning videos, and implementing a realistic design project that students needed to complete using modelling software that is widely used in practice. These activities and resources were aimed at improving the learning environment for a large class size in excess of 450 students and ensuring that practical, industry-valued skills were introduced. The case study compared the 2016 and 2017 student cohorts based on their performance across assessment tasks as well as their reception of the material, revealed through student feedback surveys. The initiatives were well received, with a number of students commenting on the ability to complete self-paced learning and an appreciation of the exposure to a realistic design project. From an educator’s perspective, blending the course made it feasible to interact and engage with students. Personalised learning opportunities were made available whilst delivering a considerable volume of complex content essential for all undergraduate Civil and Environmental Engineering students. Overall, this case study highlights the value of blended learning initiatives, especially in the context of large class size university courses.

Keywords: blended learning, highway design, teaching, transport planning

Procedia PDF Downloads 149
37 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World

Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber

Abstract:

Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to become flatter. This is attributed to rapid globalization and the interdependence of humanity, which have engendered a tremendous inflow of human migration towards urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step in understanding urban space lies in a useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show the applicability of the methodology in a developing world setting. Categorization to plan sustainable urban spaces should encompass buildings and their surroundings. However, the state of the art is mostly dominated by classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability, and are slow to compute in real time. Our proposed method is divided into two steps: categorization and automation. We categorize the urban area in terms of informal and formal spaces, taking the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh was visually annotated and categorized by an expert, and consequently a map was drawn. The categorization is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google’s DeepLabV3+ model was used. The model uses the atrous convolution operation to analyze different layers of texture and shape, which allows us to enlarge the field of view of the filters to incorporate larger context. Images encompassing 70% of the urban space were used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% Mean Intersection over Union (mIoU). In this paper, we propose a pragmatic categorization method that is readily applicable for automatic use in both developing and developed world contexts. The method can be augmented for real-time socio-economic comparative analysis among cities. It can be an essential tool for policy makers to plan future sustainable urban spaces.
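
The two reported metrics are standard in semantic segmentation; the sketch below shows how pixel accuracy and mIoU can be computed from a confusion matrix, using a toy 4-class label map in place of the study's 16 sub-categories.

```python
# Illustrative computation of the two reported metrics (pixel accuracy and
# mean Intersection-over-Union) from predicted and ground-truth label maps.
import numpy as np

def segmentation_metrics(pred, truth, n_classes):
    """pred, truth: integer label arrays of equal shape."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (truth.ravel(), pred.ravel()), 1)  # confusion matrix
    pixel_acc = np.trace(cm) / cm.sum()
    inter = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm)
    iou = inter / np.maximum(union, 1)               # per-class IoU
    return pixel_acc, iou.mean()

# Toy 4-class example standing in for the study's sub-categories
rng = np.random.default_rng(1)
truth = rng.integers(0, 4, size=(64, 64))
pred = np.where(rng.random((64, 64)) < 0.75,
                truth, rng.integers(0, 4, size=(64, 64)))
acc, miou = segmentation_metrics(pred, truth, n_classes=4)
print(f"pixel accuracy={acc:.2f}, mIoU={miou:.2f}")
```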

Keywords: semantic segmentation, urban environment, deep learning, urban building, classification

Procedia PDF Downloads 192
36 Empowering Women Entrepreneurs in Rural India through Developing Online Communities of Purpose Using Social Technologies

Authors: Jayanta Basak, Somprakash Bandyopadhyay, Parama Bhaumik, Siuli Roy

Abstract:

To solve the life- and livelihood-related problems of socially and economically backward rural women in India, several Women's Self-Help Groups (WSHG) have been formed in Indian villages. WSHGs are micro-communities (with 10 to 15 members) within a village community. WSHGs have been conceived not just to promote savings and provide credit, but also to act as a vehicle of change through the creation of women micro-entrepreneurs at the village level. However, in spite of the huge investment and the number of people involved in the whole process, success is still limited. Most of these entrepreneurial activities happen in small household workspaces where sales are limited to inconsistent and unpredictable local markets. As a result, these entrepreneurs are perennially trapped in a vicious cycle of low risk-taking ability, low investment capacity, low productivity, weak market linkages, and low revenue. Market separation, including customer-producer separation, is one of the key problems in this domain. Researchers suggest that there are four types of market separation: (i) spatial, (ii) financial, (iii) temporal, and (iv) informational, which in turn impact the nature of markets and marketing. In this context, a large group of intermediaries (the 'middlemen') plays an important role in effectively reducing the factors that separate markets by utilizing the resources of rural entrepreneurs and their products, and thus accelerates market development. The rural entrepreneurs are heavily dependent on these middlemen for the marketing of their products, and these middlemen exploit rural entrepreneurs by creating a huge informational separation between the rural producers and end-consumers in the market, thus hiding the profit margins. The objective of this study is to develop transparent, online communities of purpose among rural and urban entrepreneurs using internet and Web 2.0 technologies in order to decrease market separation and improve mutual awareness of available and potential products and market demands. Communities of purpose are groups of people who have an ability to influence, can share knowledge and learn from others, and are committed to achieving a common purpose. In this study, a cluster of SHG women located in the village of Kandi in West Bengal, India was studied closely for six months. These women are primarily engaged in producing garments, soft toys, fabric painting on clothes, etc. They were equipped with internet-enabled smartphones on which they could use chat applications in the local language and common social networking websites like Facebook, Instagram, etc. A few handicraft experts and micro-entrepreneurs from the city (the 'seed') were included in their mobile messaging app group, enabling the creation of a 'community of purpose' in order to share thoughts and ideas on product designs, market trends, and practices, and thus decrease the rural-urban market separation. After six months of regular group interaction in the mobile messaging app among these rural-urban community members, it was observed that the SHG women are now empowered to share their product images and design ideas, and to showcase and promote their products in the global marketplace using common social networking websites, through which they can also enhance and augment their community of purpose.

Keywords: communities of purpose, market separation, self-help group, social technologies

Procedia PDF Downloads 257
35 The Optimization of Topical Antineoplastic Therapy Using Controlled Release Systems Based on Amino-functionalized Mesoporous Silica

Authors: Lacramioara Ochiuz, Aurelia Vasile, Iulian Stoleriu, Cristina Ghiciuc, Maria Ignat

Abstract:

Topical administration of chemotherapeutic agents (e.g., carmustine, bexarotene, mechlorethamine, etc.) in the local treatment of cutaneous T-cell lymphoma (CTCL) is accompanied by multiple side effects, such as contact hypersensitivity, pruritus, skin atrophy, or even secondary malignancies. A known method of reducing the side effects of anticancer agents is the development of modified drug release systems based on drug encapsulation in biocompatible nanoporous inorganic matrices, such as mesoporous MCM-41 silica. Mesoporous MCM-41 silica is characterized by a large specific surface area, high pore volume, uniform porosity, stable dispersion in aqueous medium, excellent biocompatibility, in vivo biodegradability, and the capacity to be functionalized with different organic groups. Therefore, MCM-41 is an attractive candidate for a wide range of biomedical applications, such as controlled drug release, bone regeneration, and immobilization of proteins, enzymes, etc. The main advantage of this material lies in its ability to host a large amount of the active substance in a uniform pore system with size adjustable in a mesoscopic range. Silanol groups allow controlled surface functionalization, leading to control of drug loading and release. This study shows (i) the amino-grafting optimization of the mesoporous MCM-41 silica matrix by means of co-condensation during synthesis and by post-synthesis grafting using APTES (3-aminopropyltriethoxysilane); (ii) loading of the therapeutic agent (carmustine) to obtain modified drug release systems; (iii) determination of the profile of in vitro carmustine release from these systems; and (iv) assessment of carmustine release kinetics by fitting to four mathematical models. The obtained powders were characterized in terms of structure, texture, and morphology, as well as by thermogravimetric analysis. The concentration of the therapeutic agent in the dissolution medium was determined by an HPLC method. In vitro dissolution tests were performed using an Enhancer cell over a 12-hour interval. Analysis of carmustine release kinetics from the mesoporous systems was made by fitting to the zero-order model, first-order model, Higuchi model, and Korsmeyer-Peppas model, respectively. Results showed that both types of highly ordered mesoporous silica (amino-grafted by the co-condensation process or post-synthesis) are thermally stable in aqueous medium. Regarding the degree and efficiency of loading with the therapeutic agent, an increase of around 10% was noticed when the co-condensation method was applied. This result shows that direct co-condensation leads to an even distribution of amino groups on the pore walls, while in the case of post-synthesis grafting many amino groups are concentrated near the pore openings and/or on the external surface. In vitro dissolution tests showed an extended carmustine release (more than 86% m/m) both from systems based on silica functionalized directly by co-condensation and from those functionalized after synthesis. Assessment of carmustine release kinetics revealed release through diffusion from all studied systems, as a result of fitting to the Higuchi model. The results of this study proved that amino-functionalized mesoporous silica may be used as a matrix for optimizing anti-cancer topical therapy by loading carmustine and developing prolonged-release systems.
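
The four kinetic models named above have simple closed forms, so the fitting step can be illustrated directly; the sketch below fits all four to a placeholder dissolution profile with scipy and ranks them by R². The time points and release percentages are invented, not the study's measurements.

```python
# Minimal sketch of fitting the four named release models to a dissolution
# profile; data points are placeholders, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 6, 8, 10, 12])      # hours
q = np.array([18, 27, 39, 55, 67, 76, 82, 86])  # cumulative % released (illustrative)

models = {
    "zero-order":       lambda t, k: k * t,
    "first-order":      lambda t, k: 100 * (1 - np.exp(-k * t)),
    "Higuchi":          lambda t, k: k * np.sqrt(t),
    "Korsmeyer-Peppas": lambda t, k, n: k * t**n,
}

for name, f in models.items():
    p0 = [1.0] if f.__code__.co_argcount == 2 else [1.0, 0.5]
    popt, _ = curve_fit(f, t, q, p0=p0, maxfev=10000)
    ss_res = np.sum((q - f(t, *popt)) ** 2)
    r2 = 1 - ss_res / np.sum((q - q.mean()) ** 2)
    print(f"{name}: params={np.round(popt, 3)}, R^2={r2:.3f}")
```

A best fit by the Higuchi model (or a Korsmeyer-Peppas exponent n below about 0.45) is conventionally read as diffusion-controlled release, which is the interpretation the abstract reports.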

Keywords: carmustine, silica, controlled release

Procedia PDF Downloads 264
34 GIS-Based Flash Flood Runoff Simulation Model of Upper Teesta River Basin Using ASTER DEM and Meteorological Data

Authors: Abhisek Chakrabarty, Subhraprakash Mandal

Abstract:

Flash flood is one of the catastrophic natural hazards in the mountainous region of India. The flood in the Mandakini River in Kedarnath (14-17th June, 2013) is a classic example of a flash flood that devastated Uttarakhand, killing thousands of people. The disaster was an integrated effect of high-intensity rainfall, the sudden breach of Chorabari Lake, and very steep topography. Every year in the Himalayan Region, flash floods occur due to intense rainfall over a short period of time, cloud bursts, glacial lake outbursts, and collapses of artificial check dams that cause high river flows. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta Watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8,600 sq. km. It originates in the Pauhunri massif (7,127 m). The total length of the mountain section of the river amounts to 182 km. The Teesta is characterized by a complex hydrological regime: the river is fed not only by precipitation, but also by melting glaciers and snow, as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for an excess rainfall by estimating the streamflow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Moreover, rainfall time-series data were collected from the Indian Meteorological Department and processed in order to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover, and geological data were also collected. The watershed was clipped from the entire area, streamlines were generated for the Teesta watershed, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using the ERDAS IMAGINE 9.0 and ArcGIS 10.0 software. Different hydraulic models for detecting flash flood probability were analyzed using the HEC-RAS, Flow-2D, and HEC-HMS software, which were of great importance in achieving the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation model shows outbursts of lakes and check dams, individually or in combination with run-off, causing severe damage to the downstream settlements. The model output shows that 313 sq. km of area was found to be most vulnerable to flash floods, including Melli, Jourthang, Chungthang, and Lachung, and 655 sq. km moderately vulnerable, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar, and Thangu Valley. The model was validated by inserting the rainfall data of a flood event that took place in August 1968, and 78% of the actual flooded area was reflected in the output of the model. Lastly, preventive and curative measures were suggested to reduce the losses from a probable flash flood event.
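
The core of the direct unit hydrograph step is a discrete convolution of excess rainfall with the unit-hydrograph ordinates; the sketch below illustrates this with invented values, not ones calibrated to the Teesta basin.

```python
# Sketch of the direct-runoff step: convolving excess rainfall with a unit
# hydrograph to estimate streamflow at the watershed outlet (illustrative values).
import numpy as np

excess_rain = np.array([10.0, 40.0, 25.0])          # mm of excess rainfall per time step
unit_hydro = np.array([0.1, 0.3, 0.35, 0.15, 0.1])  # outlet response (m^3/s per mm)

direct_runoff = np.convolve(excess_rain, unit_hydro)  # discrete convolution
for step, q in enumerate(direct_runoff, start=1):
    print(f"t={step}: Q={q:.1f} m^3/s")
```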

Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin

Procedia PDF Downloads 318
33 The Use of Non-Parametric Bootstrap in Computing Microbial Risk Assessment from Consumption of Lettuce Irrigated with Water Contaminated by Sanitary Sewage in Infulene Valley

Authors: Mario Tauzene Afonso Matangue, Ivan Andres Sanchez Ortiz

Abstract:

The Metropolitan area of Maputo (Mozambique's capital city) is located in a semi-arid zone (800 mm annual rainfall) with 1,101,170 inhabitants. On the west side are the flatlands of Infulene, where the Mulauze River flows towards the Indian Ocean, receiving at this site the storm water contaminated with sanitary sewage from Maputo, transported through a concrete open channel. In Infulene, local communities grow salad crops such as tomato, onion, garlic, lettuce, and cabbage, which are then commercialized and consumed in several markets in Maputo City. Lettuce is the salad crop most consumed in daily meals, generally in fast foods, breakfasts, lunches, and dinners. However, the risk of infection by several pathogens due to the consumption of lettuce, assessed using Quantitative Microbial Risk Assessment (QMRA) tools, is still unknown, since there are few studies or publications concerning this matter in Mozambique. This work is aimed at determining the annual risk arising from the consumption of lettuce grown in Infulene valley, in Maputo, using QMRA tools. The exposure model was constructed upon the volume of contaminated water remaining on the lettuce leaves, the empirical relations between the number of pathogens and the indicator microorganism (E. coli), the consumption of lettuce (g), and the reduction of pathogens (days). The reference pathogens were Vibrio cholerae, Cryptosporidium, norovirus, and Ascaris. The water quality samples (E. coli) were collected in the storm water channel from January 2016 to December 2018, comprising 65 samples, and the urban lettuce consumption data were collected through an inquiry in the Maputo metropolis covering 350 persons. A non-parametric bootstrap was performed, involving 10,000 iterations over the collected dataset, namely water quality (E. coli) and lettuce consumption. The dose-response models were: exponential for Cryptosporidium, Kummer confluent hypergeometric function (1F1) for Vibrio and Ascaris, and Gaussian hypergeometric function (2F1(a,b;c;z)) for norovirus. The annual infection risk estimates were performed using the R 3.6.0 (R Core Team) software by Monte Carlo (Latin hypercube) sampling involving 10,000 iterations. The annual infection risk values, expressed by the median and the 95th percentile, per person per year (pppy), arising from the consumption of lettuce are as follows: Vibrio cholerae (1.00, 1.00), Cryptosporidium (3.91x10⁻³, 9.72x10⁻³), norovirus (5.22x10⁻¹, 9.99x10⁻¹), and Ascaris (2.59x10⁻¹, 9.65x10⁻¹). Thus, the consumption of the lettuce would result in greater risks than the tolerable levels (< 10⁻³ pppy or 10⁻⁶ DALY) for all pathogens, and Vibrio cholerae is the most virulent pathogen according to the hit-single models, followed by Ascaris lumbricoides and norovirus. The sensitivity analysis carried out in this work pointed out that, in the whole QMRA, the most important input variable was the reduction of pathogens between harvest and consumption (Spearman rank value of 0.69), followed by water quality (Spearman rank value of 0.69). The decision-makers (the Mozambique Government) must strengthen prevention measures related to pathogen reduction in lettuce (i.e., washing) and engage in wastewater treatment engineering.
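
A compressed sketch of the bootstrap-plus-Monte-Carlo chain for one pathogen (Cryptosporidium, exponential dose-response) is given below; the E. coli distribution, consumption data, pathogen-to-indicator ratio, die-off term, and dose-response parameter r are all placeholder assumptions, not the study's fitted values.

```python
# Condensed, hypothetical QMRA chain for one pathogen: non-parametric bootstrap
# of the observed inputs, exponential dose-response, and annualised risk.
import numpy as np

rng = np.random.default_rng(42)
n_iter = 10_000

ecoli = rng.lognormal(mean=10, sigma=1.5, size=65)   # stand-in for 65 water samples (CFU/100 mL)
consumption = rng.normal(50, 15, size=350).clip(5)   # stand-in for 350 intake records (g/day)

# Non-parametric bootstrap: resample the observed data with replacement
ec_boot = rng.choice(ecoli, size=n_iter, replace=True)
cons_boot = rng.choice(consumption, size=n_iter, replace=True)

vol_on_leaves = 0.108   # mL of water retained per g of lettuce (literature-style value)
ratio = 1e-5            # assumed pathogen : E. coli ratio
decay = 10 ** (-0.5 * 2)  # assumed 2-day die-off between harvest and consumption

dose = ec_boot / 100 * vol_on_leaves * cons_boot * ratio * decay
p_daily = 1 - np.exp(-0.0042 * dose)      # exponential model, assumed r = 0.0042
p_annual = 1 - (1 - p_daily) ** 365       # annualised over daily exposures

print("median:", np.median(p_annual), "95th pct:", np.percentile(p_annual, 95))
```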

Keywords: annual infections risk, lettuce, non-parametric bootstrapping, quantitative microbial risk assessment tools

Procedia PDF Downloads 122
32 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis

Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon

Abstract:

Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user use for commercial health monitoring. The cheapest and most portable methods currently available are paper-based: lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers, these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible, liquid-liquid interfaces are common in microfluidic devices and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt. We design this aqueous electrical interface, which becomes the biosensing “substrate,” to be intelligent: it “moves” only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance the detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas containing the relevant analyte. These approaches typically involve many preparation and rinsing steps and are susceptible to surface fouling. Our microfluidic device continuously flows and renews the “substrate,” and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.
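
The frequency scale on which such a miscible interface polarizes is commonly estimated from Maxwell-Wagner relaxation; assuming that relation applies here and using invented stream properties (not the authors' conditions), a back-of-envelope estimate looks like this:

```python
# Back-of-envelope sketch: Maxwell-Wagner relaxation frequency for two
# contiguous conducting dielectrics, f_MW = (s1 + s2) / (2*pi*(e1 + e2)).
# Stream conductivities below are assumed values for two aqueous streams.
import math

EPS0 = 8.854e-12             # vacuum permittivity, F/m
sigma1, sigma2 = 0.01, 0.10  # stream conductivities, S/m (assumed)
eps1 = eps2 = 78 * EPS0      # relative permittivity ~78 for water

f_mw = (sigma1 + sigma2) / (2 * math.pi * (eps1 + eps2))
print(f"Maxwell-Wagner relaxation frequency ~ {f_mw / 1e6:.1f} MHz")
```

For these assumed values the estimate lands in the MHz range, consistent with the frequency-dependent interfacial tilt being probed with AC fields rather than DC.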

Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles

Procedia PDF Downloads 388
31 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping the magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends highly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions, as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from a Probability Distribution Function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven Deep Learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training as well as real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
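
As a hypothetical illustration of the architecture class named above, the sketch below builds a single-level 3D U-Net in PyTorch that regresses a voxelwise iron map from a multi-echo input volume; the channel counts, input size, and loss are assumptions, not the authors' configuration.

```python
# Minimal, hypothetical 3D U-Net sketch: one encoder/decoder level with a skip
# connection, mapping multi-echo MRI channels to a voxelwise iron-map regression.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_ch=4, base=16):
        super().__init__()
        self.enc = conv_block(in_ch, base)
        self.down = nn.MaxPool3d(2)
        self.mid = conv_block(base, base * 2)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, 1, 1)  # regression output: iron map

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return self.head(d)

net = TinyUNet3D()
mri = torch.randn(1, 4, 64, 64, 64)  # synthetic multi-echo volume
loss = nn.functional.mse_loss(net(mri), torch.randn(1, 1, 64, 64, 64))
loss.backward()                      # would train against synthetic iron maps
```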

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 138
30 Structured-Ness and Contextual Retrieval Underlie Language Comprehension

Authors: Yao-Ying Lai, Maria Pinango, Ashwini Deo

Abstract:

While grammatical devices are essential to language processing, how comprehension utilizes cognitive mechanisms is less emphasized. This study addresses the issue by probing the complement coercion phenomenon: an entity-denoting complement following verbs like begin and finish receives an eventive interpretation. For example, (1) “The queen began the book” receives an agentive reading like (2) “The queen began [reading/writing/etc.…] the book.” Such sentences engender additional processing cost in real-time comprehension. The traditional account attributes this cost to an operation that coerces the entity-denoting complement to an event, assuming that these verbs require eventive complements. On closer examination, however, examples like “Chapter 1 began the book” undermine this assumption. An alternative, the Structured Individual (SI) hypothesis, proposes that the complement following aspectual verbs (AspV; e.g. begin, finish) is conceptualized as a structured individual, construed as an axis along various dimensions (e.g. spatial, eventive, temporal, informational). The composition of an animate subject and an AspV, as in (1), engenders an ambiguity between an agentive reading along the eventive dimension, like (2), and a constitutive reading along the informational/spatial dimension, like (3) “[The story of the queen] began the book,” in which the subject is interpreted as a subpart of the complement denotation. Comprehenders need to resolve the ambiguity by searching contextual information, resulting in additional cost. To evaluate the SI hypothesis, a questionnaire was employed. Method: Target AspV sentences such as “Shakespeare began the volume.” were preceded by one of the following types of context sentence: (A) Agentive-biasing, in which an event was mentioned (…writers often read…); (C) Constitutive-biasing, in which a constitutive meaning was hinted at (Larry owns collections of Renaissance literature.); (N) Neutral context, which allowed both interpretations. Thirty-nine native speakers of English were asked to (i) rate each context-target sentence pair on a 1-5 scale (5 = fully understandable), and (ii) choose possible interpretations for the target sentence given the context. The SI hypothesis predicts that comprehension is harder for the Neutral condition than for the biasing conditions, because no contextual information is provided to resolve the ambiguity. Also, comprehenders should obtain the specific interpretation corresponding to the context type. Results: the (A) Agentive-biasing and (C) Constitutive-biasing conditions were rated higher than the (N) Neutral condition (p < .001), while all conditions were within the acceptable range (> 3.5 on the 1-5 scale). This suggests that, when relevant contextual information is lacking, semantic ambiguity decreases comprehensibility. The interpretation task shows that the participants selected the biased agentive/constitutive reading for conditions (A) and (C), respectively. For the Neutral condition, the agentive and constitutive readings were chosen equally often. Conclusion: these findings support the SI hypothesis: the meaning of AspV sentences is conceptualized as a parthood relation involving structured individuals. We argue that semantic representation makes reference to spatial structured-ness (an abstracted axis). To obtain an appropriate interpretation, comprehenders utilize contextual information to enrich the conceptual representation of the sentence in question.
This study connects semantic structure to human conceptual structure, and provides a processing model that incorporates contextual retrieval.

Keywords: ambiguity resolution, contextual retrieval, spatial structured-ness, structured individual

Procedia PDF Downloads 333
29 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
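
A minimal sketch of the first two components, graph construction and probabilistic sampling, is given below using networkx and mocked transactions; the edge rule (linking transactions in consecutive blocks to encode the temporal relationship) and the 10% retention probability are illustrative assumptions, and a real pipeline would read transactions from an Ethereum node or public dataset.

```python
# Illustrative sketch: build a transaction graph with temporal edges, then
# draw a probabilistic sample of it. Transactions here are mocked.
import random
import networkx as nx

random.seed(0)
txs = [{"hash": f"0x{i:04x}", "block": i // 10, "value": random.random()}
       for i in range(1000)]

# Each transaction is a node; edges link transactions in consecutive blocks.
G = nx.DiGraph()
by_block = {}
for tx in txs:
    G.add_node(tx["hash"], block=tx["block"], value=tx["value"])
    by_block.setdefault(tx["block"], []).append(tx["hash"])
for b in sorted(by_block)[:-1]:
    for u in by_block[b]:
        for v in by_block[b + 1]:
            G.add_edge(u, v)

# Probabilistic sampling: keep each transaction with probability p and take
# the induced subgraph, trading exactness for a large cut in compute cost.
p = 0.1
kept = [n for n in G if random.random() < p]
sample = G.subgraph(kept)
print(G.number_of_nodes(), "->", sample.number_of_nodes(), "nodes")
```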

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 78
28 Application of Electrical Resistivity Surveys in Constraining Causes of Highway Pavement Failure along Ajaokuta-Anyigba Road, North Central Nigeria

Authors: Moroof O. Oloruntola, Sunday Oladele, Daniel O. Obasaju, Victor O. Ojekunle, Olateju O. Bayewu, Ganiyu O. Mosuro

Abstract:

Integrated geophysical methods involving Vertical Electrical Sounding (VES) and 2D resistivity surveys were deployed to gain insight into the influence of the two varying rock types (mica-schist and granite gneiss) underlying the road alignment on the incessant highway failure along Ajaokuta-Anyigba, North-central Nigeria. The highway serves as a link road to the single largest cement factory in Africa (Dangote Cement Factory) and two major ceramic industries from the capital (Abuja) via Lokoja. A 2D electrical resistivity survey (dipole-dipole array) and VES (Schlumberger array) were employed. Twenty-two (22) 2D profiles were occupied: twenty (20) were conducted about 1 m away from the unstable section underlain by mica-schist, each with a profile length of approximately 100 m, and two (2) were conducted about 1 m away from the stable section, each with a profile length of 100 m, owing to barriers caused by the drainage system and the granite gneiss outcropping at the flanks of the road. A spacing of 2 m was used for good image resolution of the near-surface. On each 2D profile, 1-3 VES were conducted; thus, forty-eight (48) soundings were acquired. Partial curve matching and the WinResist software were used to obtain the apparent and true resistivity values from the 1D survey, while the DiprofWin software was used for processing the 2D survey. Two lithologic sections exposed by abandoned river channels adjacent to two profiles, as well as knowledge of the geology of the area, helped to constrain the VES and 2D processing and interpretation. Generally, the resistivity values obtained reflect the parent rock type, degree of weathering, moisture content, and competence of the tested area. Resistivity values of < 100, 100-950, 1000-2000, and > 2500 ohm-m were interpreted as clay, weathered layer, partly weathered layer, and fresh basement, respectively. The VES results and 2D resistivity structures along the unstable segment showed similar lithologic characteristics and sequences, dominated by a clayey substratum over the depth range of 0-42.2 m. The clayey substratum is a product of intensive weathering of the parent rock (mica-schist) and constitutes weak foundation soils, causing highway failure. This failure is further exacerbated by the many heavy-duty trucks which ply the section round the clock, due to the proximity of two major ceramic industries in the state, and by the lack of a drainage system. The two profiles on the stable section show 2D structures that are remarkably different from those of the unstable section, with very thin topsoils, a higher-resistivity weathered substratum (indicating the presence of coarse fragments from the parent rock), and a shallow depth to the basement (1.0-7.1 m). Also, the presence of drainage and the lower volume of heavy-duty trucks contribute to the pavement stability of this section of the highway. The resistivity surveys effectively delineated two contrasting soil profiles of the subbase/subgrade that reflect variation in the mineralogy of the underlying parent rocks.
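
The interpretation rule quoted above maps cleanly to code; this small sketch bins apparent resistivities into the study's four lithology classes, leaving the unassigned gaps (950-1000 and 2000-2500 ohm-m) to contextual judgement. Sample values are illustrative.

```python
# Bin apparent resistivities (ohm-m) into the four lithology classes
# reported in the study; values falling in the unassigned gaps are flagged.
def classify_resistivity(rho):
    if rho < 100:
        return "clay"
    if 100 <= rho <= 950:
        return "weathered layer"
    if 1000 <= rho <= 2000:
        return "partly weathered layer"
    if rho > 2500:
        return "fresh basement"
    return "transitional (interpret in geological context)"

for rho in [45, 320, 1500, 3000, 2200]:
    print(rho, "->", classify_resistivity(rho))
```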

Keywords: clay, geophysical methods, pavement, resistivity

Procedia PDF Downloads 168
27 Electronic Raman Scattering Calibration for Quantitative Surface-Enhanced Raman Spectroscopy and Improved Biostatistical Analysis

Authors: Wonil Nam, Xiang Ren, Inyoung Kim, Masoud Agah, Wei Zhou

Abstract:

Despite its ultrasensitive detection capability, surface-enhanced Raman spectroscopy (SERS) faces challenges as a quantitative biochemical analysis tool due to the significant dependence of local field intensity in hotspots on nanoscale geometric variations of plasmonic nanostructures. Therefore, despite enormous progress in the plasmonic nanoengineering of high-performance SERS devices, it is still challenging to quantitatively correlate the measured SERS signals with the actual molecule concentrations at hotspots. A significant effort has been devoted to developing SERS calibration methods by introducing internal standards, achieved by placing Raman tags at plasmonic hotspots. Raman tags undergo similar SERS enhancement at the same hotspots, and ratiometric SERS signals for analytes of interest can be generated with reduced dependence on geometrical variations. However, using Raman tags still faces challenges for real-world applications, including spatial competition between the analyte and tags in hotspots, spectral interference, and laser-induced degradation/desorption due to plasmon-enhanced photochemical/photothermal effects. We show that electronic Raman scattering (ERS) signals from metallic nanostructures at hotspots can serve as the internal calibration standard to enable quantitative SERS analysis and improve biostatistical analysis. We perform SERS with Au-SiO₂ multilayered metal-insulator-metal nano-laminated plasmonic nanostructures. Since the ERS signal is proportional to the volume density of electron-hole occupation in hotspots, the ERS signals increase exponentially as the wavenumber approaches zero. With a long-pass filter, generally used in backscattered SERS configurations, to chop the ERS background continuum, we can observe an ERS pseudo-peak, IERS. Both ERS and SERS processes experience the |E|⁴ local enhancements during the excitation and inelastic scattering transitions. We calibrated the ISERS of 10 μM Rhodamine 6G in solution by IERS. The results show that ERS calibration generates a new analytical value, ISERS/IERS, insensitive to variations across different hotspots, which can thus quantitatively reflect the molecular concentration information. Given the calibration capability of ERS signals, we performed label-free SERS analysis of living biological systems using four different breast normal and cancer cell lines cultured on nano-laminated SERS devices. 2D Raman mapping over 100 μm × 100 μm, containing several cells, was conducted. The SERS spectra were subsequently analyzed by multivariate analysis using partial least squares discriminant analysis. Remarkably, after ERS calibration, MCF-10A and MCF-7 cells are further separated, while the two triple-negative breast cancer cell lines (MDA-MB-231 and HCC-1806) are more overlapped, in good agreement with the well-known cancer categorization regarding the degree of malignancy. To assess the strength of ERS calibration, we further carried out a drug efficacy study using MDA-MB-231 and different concentrations of the anti-cancer drug paclitaxel (PTX). After ERS calibration, we can more clearly segregate the control/low-dosage groups (0 and 1.5 nM), the middle-dosage group (5 nM), and the group treated with the half-maximal inhibitory concentration (IC50, 15 nM). Therefore, we envision that ERS-calibrated SERS can find crucial opportunities in label-free molecular profiling of complicated biological systems.
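
The calibration itself reduces to a ratio of two intensities taken from the same spectrum; the sketch below illustrates it on a mock spectrum, with the ERS and analyte wavenumber windows chosen arbitrarily rather than taken from the paper.

```python
# Schematic calibration step: extract the ERS pseudo-peak intensity and report
# the ratiometric quantity I_SERS / I_ERS per spectrum (placeholder windows/data).
import numpy as np

def calibrated_intensity(wavenumbers, spectrum,
                         ers_win=(200, 300), peak_win=(1500, 1530)):
    """Return I_SERS / I_ERS for one background-subtracted spectrum."""
    ers_mask = (wavenumbers >= ers_win[0]) & (wavenumbers <= ers_win[1])
    pk_mask = (wavenumbers >= peak_win[0]) & (wavenumbers <= peak_win[1])
    i_ers = spectrum[ers_mask].mean()   # ERS pseudo-peak near the filter cutoff
    i_sers = spectrum[pk_mask].max()    # analyte Raman peak of interest
    return i_sers / i_ers

wn = np.linspace(150, 1800, 1651)
# Mock spectrum: decaying ERS continuum plus one molecular peak at 1510 cm^-1
spec = 50 * np.exp(-wn / 400) + 8 * np.exp(-((wn - 1510) / 6) ** 2)
print(f"I_SERS/I_ERS = {calibrated_intensity(wn, spec):.3f}")
```

Because both intensities scale with the same |E|⁴ local enhancement, the ratio cancels hotspot-to-hotspot variation, which is what makes the calibrated value concentration-sensitive rather than geometry-sensitive.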

Keywords: cancer cell drug efficacy, plasmonics, surface-enhanced Raman spectroscopy (SERS), SERS calibration

Procedia PDF Downloads 138
26 The Plight of the Rohingyas: Design Guidelines to Accommodate Displaced People in Bangladesh

Authors: Nazia Roushan, Maria Kipti

Abstract:

The sensitive issue of a large-scale influx of Rohingya refugees into Bangladesh arose again in August 2017. Incited by ethnic and religious conflict, the Rohingyas, an ethnic group concentrated in the western state of Rakhine in Myanmar, have been fleeing to what is now Bangladesh from as early as the late 1700s in four main exoduses. This long-standing persecution has recently escalated, and accommodating the latest wave of the exodus has been especially challenging due to the sheer volume of a million refugees concentrated in camps in two small administrative units (upazilas) in the south-east of the country: the host area. This drastic change in the host area’s social fabric is putting a great strain on the country’s economic, demographic, and environmental stability and security. Although Bangladesh’s long-term experience with disaster management has enabled it to respond rapidly to the crisis, the government is failing to cope with this enormous problem and has taken insufficient steps towards improving living conditions to inhibit the inflow of more refugees. On top of that, the absence of a comprehensive national refugee policy and the density of the camp structures are constraining the upgrading of the shelters to international standards. As of December 2016, the combined number of internally displaced persons (IDPs) due to conflict and violence (stock) and new displacements due to disasters (flow) in Bangladesh had exceeded 1 million. These numbers have increased dramatically in the last few months. Moreover, by 2050, Bangladesh will have as many as 25 million climate refugees from its coastal districts alone. To enhance the resilience of the vulnerable, it is crucial to methodically factor further interventions between Disaster Risk Reduction for Resilience (DRR) and the concept of Building Back Better (BBB) in the rehabilitation-reconstruction period. Considering these points, this paper provides a palette of options for design guidelines related to living spaces and infrastructure for refugees. This will encourage the development of national standards for refugee camps and of national and local level rehabilitation-reconstruction practices. Unhygienic living conditions, vulnerability, and a general lack of control over life are pervasive throughout the camps. This paper, therefore, proposes site-specific strategic and physical planning and design for refugee shelters in Bangladesh that will lead to sustainable living environments through the following: a) a site survey of two existing registered and one unregistered makeshift refugee camp to document and study their physical conditions; b) questionnaires and semi-structured focus group discussions carried out among the refugees and stakeholders to understand their lived experiences and needs; and c) combining the findings with international minimum standards for shelter and settlement from the International Federation of Red Cross and Red Crescent Societies (IFRC), Médecins Sans Frontières (MSF), and the United Nations High Commissioner for Refugees (UNHCR). The proposals include temporary shelter solutions that strike a balance between lived spaces and regimented, repetitive plans using readily available and cheap materials, erosion control and slope stabilization strategies, and, most importantly, coping mechanisms for the refugees to become self-reliant and resilient.

Keywords: architecture, Bangladesh, refugee camp, resilience, Rohingya

Procedia PDF Downloads 237
25 Reverse Logistics Network Optimization for E-Commerce

Authors: Albert W. K. Tan

Abstract:

This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate the key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research endeavor is to develop a comprehensive framework for advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research are: (i) identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains; (ii) formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios; and (iii) proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operation modes: (1) collection, (2) sorting and testing, (3) processing, and (4) storage. Two key factors must be considered in reverse logistics network design. (I) Centralized vs. decentralized processing: centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies. In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items; moreover, it aids in determining the most appropriate reverse channel for handling returns. By contrast, a decentralized system is more suitable when products are returned directly from consumers to retailers; in this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. (II) In-house vs. third-party logistics providers: the decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse; in contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions.
The model considers the key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors employed mixed LP to find optimal solutions that minimize costs while meeting organizational objectives.
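
A toy instance of such a model is sketched below with the PuLP library: binary variables decide which of two hypothetical return facilities to open, continuous variables route returns from three collection points, and the objective trades fixed opening costs against transport costs. All figures are invented for illustration.

```python
# Toy mixed (integer) LP for a reverse logistics network: facility opening
# plus return routing, minimizing fixed and transport costs. Illustrative data.
import pulp

sources = ["c1", "c2", "c3"]                # collection points
facilities = ["f1", "f2"]
returns = {"c1": 120, "c2": 80, "c3": 150}  # units returned per period
capacity = {"f1": 250, "f2": 200}
fixed = {"f1": 900, "f2": 600}              # facility opening cost
ship = {("c1", "f1"): 2, ("c1", "f2"): 4, ("c2", "f1"): 3,
        ("c2", "f2"): 1, ("c3", "f1"): 2, ("c3", "f2"): 3}

prob = pulp.LpProblem("reverse_logistics", pulp.LpMinimize)
open_f = pulp.LpVariable.dicts("open", facilities, cat="Binary")
flow = pulp.LpVariable.dicts("flow", list(ship.keys()), lowBound=0)

# Objective: fixed opening costs plus per-unit transport costs
prob += (pulp.lpSum(fixed[f] * open_f[f] for f in facilities)
         + pulp.lpSum(ship[k] * flow[k] for k in ship))
for s in sources:                           # every return must be processed
    prob += pulp.lpSum(flow[(s, f)] for f in facilities) == returns[s]
for f in facilities:                        # capacity applies only if open
    prob += pulp.lpSum(flow[(s, f)] for s in sources) <= capacity[f] * open_f[f]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({f: int(open_f[f].value()) for f in facilities}, pulp.value(prob.objective))
```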

Keywords: reverse logistics, supply chain management, optimization, e-commerce

Procedia PDF Downloads 41