Search results for: dry high speed

13202 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and the downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall function, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were nearly identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function. Thus, for the other simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity-inlet condition for the flow inlet boundary, and a pressure-outlet condition for the boundaries in contact with air provide the best possible results; regarding the wall function, the standard wall function is chosen, and the standard k-ε turbulence model agrees most consistently with the experimental results. As the jet approaches the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results are in good agreement with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
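
For reference, a minimal sketch of the standard k-ε transport equations in their commonly published form, with the usual Launder-Spalding constants; the exact formulation and coefficients used in the paper are not stated in the abstract, so this is the conventional textbook form rather than the authors' implementation:

```latex
\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_i)}{\partial x_i}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right]
  + P_k - \rho\varepsilon
\qquad
\frac{\partial(\rho\varepsilon)}{\partial t} + \frac{\partial(\rho\varepsilon u_i)}{\partial x_i}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}

\mu_t = \rho C_\mu \frac{k^2}{\varepsilon},\qquad
C_\mu = 0.09,\; C_{1\varepsilon} = 1.44,\; C_{2\varepsilon} = 1.92,\; \sigma_k = 1.0,\; \sigma_\varepsilon = 1.3
```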

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 68
13200 Through Additive Manufacturing: A New Perspective for the Mass Production of Made in Italy Products

Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola

Abstract:

The recent evolution of innovation processes and of the intrinsic tendencies of the product development process leads to new considerations on the design flow. The instability and complexity that characterize contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of Additive Manufacturing, but also of IoT and AI technologies, continuously puts us in front of new paradigms regarding design as a social activity. The totality of these technologies, from the point of view of application, describes a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of some provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. The contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, and the focus then shifts to specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could engulf many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory that is described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact has a signature that defines all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The result that is envisaged indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as a valid integrated tool in close relationship with the design culture.

Keywords: decision making, design heuristics, product design, product design process, design paradigms

Procedia PDF Downloads 104
13200 An A-Star Approach for the Quickest Path Problem with Time Windows

Authors: Christofas Stergianos, Jason Atkin, Herve Morvan

Abstract:

As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft in an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g., runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which considers the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW involves searching for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time that is needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
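
A minimal sketch of the idea of A-star search driven by a time-based heuristic with reserved time windows on edges; this is a simplified illustration, not the authors' QPPTW implementation, and the graph, travel times and reserved intervals below are hypothetical:

```python
import heapq

def a_star_time(graph, time_lb, start, goal, start_time, blocked=None):
    """Quickest-path search with a time-based A*-style heuristic.

    graph:    {node: [(neighbour, taxi_time_seconds), ...]}
    time_lb:  {node: admissible lower bound on remaining taxi time to goal}
    blocked:  {(u, v): [(t_from, t_to), ...]} intervals during which an edge
              is reserved by previously routed aircraft (time windows)
    Returns (arrival_time, path) or None if the goal is unreachable.
    """
    blocked = blocked or {}
    open_set = [(start_time + time_lb[start], start_time, start, [start])]
    best_arrival = {}
    while open_set:
        _, t, node, path = heapq.heappop(open_set)
        if node == goal:
            return t, path
        if best_arrival.get(node, float("inf")) <= t:
            continue
        best_arrival[node] = t
        for nxt, taxi_time in graph.get(node, []):
            depart = t
            # simplified holding: wait until the edge's reserved windows have passed
            for t_from, t_to in sorted(blocked.get((node, nxt), [])):
                if depart < t_to and depart + taxi_time > t_from:
                    depart = t_to
            arrive = depart + taxi_time
            if arrive < best_arrival.get(nxt, float("inf")):
                heapq.heappush(open_set, (arrive + time_lb[nxt], arrive, nxt, path + [nxt]))
    return None

# toy taxiway graph: stand S -> A -> B -> runway R, with a slower bypass S -> C -> R
graph = {"S": [("A", 30), ("C", 60)], "A": [("B", 30)], "B": [("R", 30)],
         "C": [("R", 90)], "R": []}
time_lb = {"S": 90, "A": 60, "B": 30, "C": 90, "R": 0}   # free-flow lower bounds
blocked = {("A", "B"): [(20, 200)]}                      # edge reserved by another aircraft
print(a_star_time(graph, time_lb, "S", "R", start_time=0, blocked=blocked))
```

With the blocked segment A-B, the search reroutes via the slower bypass and arrives at t = 150 s instead of waiting; the heuristic only prunes the search when it never overestimates the remaining time.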

Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling

Procedia PDF Downloads 217
13199 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method

Authors: Defne Uz

Abstract:

The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads, and to bending moments depending on the operating conditions and maneuvers of the helicopter. In order to ensure that maximum stress levels do not exceed a certain fatigue limit of the material and to prevent damage, a numerical analysis approach can be utilized through the Finite Element Method. Therefore, in this paper, fatigue analysis of the horizontal tail model is studied numerically to predict the high-cycle and low-cycle fatigue life related to the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical element identification of the main load-carrying structural components of the model with rivet holes is performed as a post-process, since critical regions with high stress values are used as an input for the fatigue life calculation. Once the maximum stress is obtained at the critical element, together with the related mean and alternating components, it is compared with the endurance limit by applying the Soderberg approach. The constant life straight line provides the limit for several combinations of mean and alternating stresses. The life calculation based on the S-N (stress versus number of cycles) curve is also applied with fully reversed loading to determine the number of cycles corresponding to the oscillatory stress with zero mean. The results determine the appropriateness of the design of the model for its fatigue strength and the number of cycles that the model can withstand at the calculated stress. The effect of correctly determining the critical rivet holes is investigated by analyzing stresses at different structural parts in the model. In the case of a low life prediction, alternative design solutions are developed, and flight hours can be estimated for the fatigue-safe operation of the model.
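
A minimal sketch of the two criteria mentioned above, the Soderberg constant-life line and a Basquin-type S-N relation, with purely illustrative material values (the numbers below are assumptions, not data from the paper):

```python
def soderberg_safety_factor(sigma_a, sigma_m, s_e, s_y):
    """Soderberg criterion sigma_a/S_e + sigma_m/S_y = 1/n, solved for the safety factor n."""
    return 1.0 / (sigma_a / s_e + sigma_m / s_y)

def basquin_cycles_to_failure(sigma_a, sigma_f_prime, b):
    """Basquin form of the S-N curve, sigma_a = sigma_f' * (2N)^b, solved for N (fully reversed loading)."""
    return 0.5 * (sigma_a / sigma_f_prime) ** (1.0 / b)

# hypothetical aluminium-alloy values, in MPa, for illustration only
n = soderberg_safety_factor(sigma_a=60.0, sigma_m=40.0, s_e=130.0, s_y=300.0)
cycles = basquin_cycles_to_failure(sigma_a=60.0, sigma_f_prime=600.0, b=-0.09)
print(f"Soderberg safety factor: {n:.2f}, cycles to failure: {cycles:.3e}")
```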

Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration

Procedia PDF Downloads 135
13198 To Examine Perceptions and Associations of Shock Food Labelling and to Assess the Impact on Consumer Behaviour: A Quasi-Experimental Approach

Authors: Amy Heaps, Amy Burns, Una McMahon-Beattie

Abstract:

Shock and fear tactics have been used to encourage consumer behaviour change within the UK regarding lifestyle choices such as smoking and alcohol abuse, yet such measures have not been applied to food labels to encourage healthier purchasing decisions. Obesity levels are continuing to rise within the UK, despite efforts made by government and charitable bodies to encourage consumer behavioural changes that would have a positive influence on fat, salt, and sugar intake. We know that taking extreme measures to shock consumers into behavioural changes has worked previously; for example, the anti-smoking television adverts and new standardised cigarette and tobacco packaging have reduced the number of UK adults who smoke or encouraged those who are currently trying to quit. The USA has also introduced new front-of-pack labelling, which is clear, easy to read, and includes concise health warnings on products high in fat, salt, or sugar. This model has been successful, with consumers reducing purchases of products carrying these warning labels. Therefore, investigating whether shock labels would have an impact on UK consumer behaviour and purchasing decisions would help to fill the gap within this research field. This study aims to develop an understanding of consumers' initial responses to shock advertising, with an interest in the perceived long-term impact of shock advertising on consumer food purchasing decisions, behaviour, and attitudes, and achieves this through a mixed methodological approach with a sample of 25 participants aged between 22 and 60. Within this research, mock shock labels were developed, including a graphic image, a health warning, and get-help information. These labels were made for products (available within the UK) with large market shares that were high in either fat, salt, or sugar. The results of online focus groups and mouse-tracking experiments helped to develop an understanding of consumers' initial responses to shock advertising and of the perceived long-term impact of shock advertising on consumer food purchasing decisions, behaviour, and attitudes. Preliminary results have shown that consumers believe that the use of graphic images, combined with a health warning, would encourage consumer behaviour change and influence their purchasing decisions regarding products that are high in fat, salt and sugar. Preliminary main findings show that graphic mock shock labels may have an impact on consumer behaviour and purchasing decisions, which will, in turn, encourage healthier lifestyles. Focus group results show that 72% of participants indicated that these shock labels would have an impact on their purchasing decisions. During the mouse-tracking trials, this increased to 80% of participants, showing that more exposure to shock labels may have a bigger impact on potential consumer behaviour and purchasing decision change. In conclusion, preliminary results indicate that graphic shock labels will impact consumer purchasing decisions. The findings allow for a deeper understanding of initial emotional responses to these graphic labels. However, more research is needed to test the longevity of the effect of these labels on consumer purchasing decisions, and this research exercise is demonstrably the foundation for future detailed work.

Keywords: consumer behavior, decision making, labelling legislation, purchasing decisions, shock advertising, shock labelling

Procedia PDF Downloads 56
13197 Price Gouging in Time of Covid-19 Pandemic: When National Competition Agencies are Weak Institutions that Exacerbate the Effects of Exploitative Economic Behaviour

Authors: Cesar Leines

Abstract:

The social effects of the pandemic are significant and diverse, and most of them have widened the gap of economic inequality. Without a doubt, each country faces difficulties associated with the strengths and weaknesses of its own institutions in addressing these causes and consequences. Around the world, pricing practices that have no connection to production costs have been used extensively in numerous markets beyond those relating to the supply of essential goods and services. Although it is not unlawful to adjust pricing in light of the increased demand for certain products, shortages, and disruption of supply chains, illegitimate pricing practices may arise; these tend to transfer wealth from consumers to producers, eroding the purchasing power of the former and making people worse off. High prices with no objective justification indicate a poor state of the competitive process in any market, and the impact of the underlying competition issues leading to inefficiency is increased when national competition agencies are weak and ineffective in enforcing competition law and policy. It has been observed that in countries where competition authorities are perceived as weak or ineffective, price increases across a wide range of products and services were more significant during the pandemic than in countries where the perceived effectiveness of the competition agency is high. When a perception is created of a highly effective competition authority, one whose enforcement of competition law and non-enforcement activities fulfil its substantive function of protecting competition as the means to create efficient markets, the price rises observed in markets under its jurisdiction are low. A case study focused on the effectiveness of the national competition agency in Mexico (COFECE) points to institutional weakness as one of the causes leading to excessive pricing. Many factors contribute to its low effectiveness and have, in turn, led to a very significant price hike, potentiated by the pandemic. This paper contributes to the discussion of these factors and proposes different steps that would help COFECE, or any other competition agency, to increase its perceived effectiveness for the benefit of consumers.

Keywords: agency effectiveness, competition, institutional weakness, price gouging

Procedia PDF Downloads 165
13196 Effect of Roasting Temperature on the Proximate, Mineral and Antinutrient Content of Pigeon Pea (Cajanus cajan) Ready-to-Eat Snack

Authors: Olaide Ruth Aderibigbe, Oluwatoyin Oluwole

Abstract:

Pigeon pea is one of the minor leguminous plants; though underutilised, it is used traditionally by farmers to alleviate hunger and malnutrition. Pigeon pea is cultivated in Nigeria by subsistence farmers. It is rich in protein and minerals; however, its utilisation as food is only common among the poor and rural populace who cannot afford expensive sources of protein. One of the factors contributing to its limited use is the high antinutrient content, which makes it indigestible, especially when eaten by children. The development of value-added products that can reduce the antinutrient content and make the nutrients more bioavailable will increase the utilisation of the crop and contribute to the reduction of malnutrition. This research, therefore, determined the effects of different roasting temperatures (130 °C, 140 °C, and 150 °C) on the proximate, mineral and antinutrient components of a pigeon pea snack. The brown variety of pigeon pea seeds was purchased from a local market (Otto) in Lagos, Nigeria. The seeds were cleaned, washed, and soaked in 50 ml of water containing sugar and salt (4:1) for 15 minutes, and thereafter roasted at 130 °C, 140 °C, and 150 °C in an electric oven for 10 minutes. Proximate, mineral, phytate, tannin and alkaloid content analyses were carried out in triplicate following standard procedures. The results of the three replicates were pooled and expressed as mean±standard deviation; a one-way analysis of variance (ANOVA) and the Least Significant Difference (LSD) test were carried out. The roasting temperatures significantly (P<0.05) affected the protein, ash, fibre and carbohydrate content of the snack. The ready-to-eat snack prepared by roasting at 150 °C had significantly the highest protein content (23.42±0.47%) compared with the ones roasted at 130 °C and 140 °C (18.38±1.25% and 20.63±0.45%, respectively). The same trend was observed for the ash content (3.91±0.11 for 150 °C, 2.36±0.15 for 140 °C and 2.26±0.25 for 130 °C), while the fibre and carbohydrate contents were highest at the roasting temperature of 130 °C. Iron, zinc, and calcium were not significantly (P<0.05) affected by the different roasting temperatures. Antinutrients decreased with increasing temperature. Phytate levels recorded were 0.02±0.00, 0.06±0.00, and 0.07±0.00 mg/g; tannin levels were 0.50±0.00, 0.57±0.00, and 0.68±0.00 mg/g, while alkaloid levels were 0.51±0.01, 0.78±0.01, and 0.82±0.01 mg/g for 150 °C, 140 °C, and 130 °C, respectively. These results show that roasting at high temperature (150 °C) can be utilised as a processing technique for increasing the protein and decreasing the antinutrient content of pigeon pea.
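
A minimal sketch of the one-way ANOVA step described above, using SciPy; the triplicate values below are hypothetical illustrations chosen only to be roughly consistent with the reported protein means, not the study's raw data:

```python
from scipy import stats

# hypothetical triplicate protein measurements (%) at the three roasting temperatures
protein_130 = [17.2, 18.4, 19.5]
protein_140 = [20.2, 20.6, 21.1]
protein_150 = [22.9, 23.5, 23.9]

f_stat, p_value = stats.f_oneway(protein_130, protein_140, protein_150)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> roasting temperature effect is significant
```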

Keywords: antinutrients, pigeon pea, protein, roasting, underutilised species

Procedia PDF Downloads 121
13195 Spatial Distribution of Virus-Transmitting Aphids of Plants in Al Bahah Province, Saudi Arabia

Authors: Sabir Hussain, Muhammad Naeem, Yousif Aldryhim, Susan E. Halbert, Qingjun Wu

Abstract:

Plant viruses annually cause severe economic losses in crop production, and globally, different aphid species are responsible for the transmission of such viruses. Additionally, aphids are also serious pests of trees and agricultural crops. Al Bahah Province, Kingdom of Saudi Arabia (KSA), hosts a high number of native and introduced plant species and has a temperate climate that provides ample habitats for aphids. In this study, we surveyed virus-transmitting aphids in the Province to highlight their spatial distributions and hot spot areas for targeted control strategies. During our fifteen-month survey in Al Bahah Province, three hundred and seventy samples of aphids were collected using both beating sheets and yellow water pan traps. Consequently, fifty-four aphid species representing 30 genera belonging to four families were recorded from Al Bahah Province. Alarmingly, 35 of the recorded aphid species are virus-transmitting species. The most common virus-transmitting aphid species, based on the number of collected samples, were Macrosiphum euphorbiae (Thomas, 1878), Brachycaudus rumexicolens (Patch, 1917), Uroleucon sonchi (Linnaeus, 1767), Brachycaudus helichrysi (Kaltenbach, 1843), and Myzus persicae (Sulzer, 1776). The numbers of samples for the aforementioned species were 66, 24, 23, 22, and 20, respectively. The widest ranges of plant hosts were found for M. euphorbiae (39 plant species), B. helichrysi (12 plant species), M. persicae (12 plant species), B. rumexicolens (10 plant species), and U. sonchi (9 plant species). On the basis of aphid abundance, the hottest spot areas were found in the Al-Baha, Al Mekhwah and Biljarashi cities of the province. This study indicates that Al Bahah Province has relatively rich aphid diversity due to the relatively high plant diversity in favorable climatic conditions. ArcGIS tools can help biologists implement targeted control strategies against these pests in integrated pest management, ultimately saving money and time.

Keywords: Al Bahah province, aphid-virus interaction, biodiversity, geographic information system

Procedia PDF Downloads 170
13194 Enhancing the Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to a Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both the group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence, while obtaining a higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage imply delayed costs associated with them. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained thanks to a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals, who take decisions affecting one another. An effective coordination among these decision-makers is critical. Finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology which aims to speed up the process of pushing up the mission's concept maturity level. This push is obtained thanks to a guided negotiation space exploration, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via Artificial Intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process so as to reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs and guarantees the effectiveness of the selected mission concept thanks to its robustness to valuable changeability. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
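
A minimal sketch of the underlying mechanism, combining linear multi-attribute utilities per stakeholder with a Nash-product social-welfare measure to pick a balanced alternative; the design alternatives, attribute ranges and stakeholder weights below are hypothetical, and the full methodology additionally uses game theory and an evolutionary search over a much larger negotiation space:

```python
# hypothetical CubeSat design alternatives: (name, instrument mass [kg], data return [MB/day], cost [k-EUR])
alternatives = [
    ("A", 2.0, 10.0, 150.0),
    ("B", 4.0, 26.0, 180.0),
    ("C", 6.0, 30.0, 250.0),
]

def normalise(x, lo, hi, maximise=True):
    u = (x - lo) / (hi - lo)
    return u if maximise else 1.0 - u

# assumed linear multi-attribute utilities; weights express each stakeholder's priorities
STAKEHOLDERS = {
    "systems engineer":  (0.6, 0.1, 0.3),   # cares mostly about low mass
    "science team":      (0.1, 0.7, 0.2),   # cares mostly about data return
    "programme manager": (0.2, 0.2, 0.6),   # cares mostly about low cost
}

def utility(weights, alt):
    _, mass, data, cost = alt
    attrs = (normalise(mass, 2.0, 6.0, maximise=False),
             normalise(data, 10.0, 30.0, maximise=True),
             normalise(cost, 150.0, 250.0, maximise=False))
    return sum(w * a for w, a in zip(weights, attrs))

def nash_welfare(alt):
    # Nash product as the group social-welfare measure: rewards balanced, compromise solutions
    welfare = 1.0
    for w in STAKEHOLDERS.values():
        welfare *= max(utility(w, alt), 1e-9)
    return welfare

best = max(alternatives, key=nash_welfare)
print("balanced alternative:", best[0],
      {name: round(utility(w, best), 2) for name, w in STAKEHOLDERS.items()})
```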

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 118
13193 Heteroatom-Doped Binary Metal Oxide Modified Carbon as a Bifunctional Electrocatalyst for All-Vanadium Redox Flow Batteries

Authors: Anteneh Wodaje Bayeh, Daniel Manaye Kabtamu, Chen-Hao Wang

Abstract:

As one of the most promising electrochemical energy storage systems, vanadium redox flow batteries (VRFBs) have received increasing attention owing to their attractive features for large-scale storage applications. However, their high production cost and relatively low energy efficiency still limit their feasibility. For practical implementation, it is of great interest to improve their efficiency and reduce their cost. One of the key components of VRFBs that can greatly influence the efficiency and final cost is the electrode, which provides the reaction sites for the redox couples (VO²⁺/VO₂⁺ and V²⁺/V³⁺). Carbon-based materials are considered to be the most feasible electrode materials in the VRFB because of their excellent potential in terms of operating range, good permeability, large surface area, and reasonable cost. However, owing to their limited electrochemical activity and reversibility and poor wettability due to their hydrophobic properties, the performance of cells employing carbon-based electrodes has remained limited. To address these challenges, we synthesized a heteroatom-doped bimetallic oxide grown on the surface of carbon through a one-step approach. When applied to VRFBs, the prepared electrode exhibits a significant electrocatalytic effect toward the VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox reactions compared with that of pristine carbon. It is found that the presence of heteroatoms on the metal oxide promotes the absorption of vanadium ions. The controlled morphology of the bimetallic metal oxide also exposes more active sites for the redox reactions of vanadium ions. Hence, the prepared electrode displays the best electrochemical performance, with energy and voltage efficiencies of 74.8% and 78.9%, respectively, which is much higher than the 59.8% and 63.2% obtained from the pristine carbon at high current density. Moreover, the electrode exhibits durability and stability in an acidic electrolyte during long-term operation for 1000 cycles at the higher current density.
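
For reference, the standard efficiency definitions commonly used for flow batteries are sketched below; the abstract does not state the exact definitions used, so this is the conventional form:

```latex
CE = \frac{\int I_{\mathrm{discharge}}\,dt}{\int I_{\mathrm{charge}}\,dt},
\qquad
VE = \frac{\bar{V}_{\mathrm{discharge}}}{\bar{V}_{\mathrm{charge}}},
\qquad
EE = CE \times VE
```

Assuming these standard relations, the reported EE = 74.8% and VE = 78.9% would imply a coulombic efficiency of roughly 0.748/0.789, i.e. about 95%.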

Keywords: VRFB, VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox couples, graphite felt, heteroatom-doping

Procedia PDF Downloads 81
13192 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique and has many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impacts of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment. Therefore, particle damping can be applied in extreme temperature environments, where most conventional dampers would fail. It has been shown experimentally in many papers that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behaviors of the granular particles in each divided area of the damper container are the same, the contact force of the primary system with all particles can be considered to be equal to the product of the number of divided areas of the damper and the contact force of the primary system with the granular materials per divided area. This simplification makes it possible to considerably reduce the calculation time. The validity of this calculation method was investigated, and the calculated results were compared with the experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level. It is also shown that the particle size and the particle material influence the damper performance.
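
A minimal sketch of the spring-dashpot contact model and of the scaling simplification described above; the stiffness, damping and overlap values are illustrative assumptions, not the paper's parameters:

```python
def contact_force(delta, delta_dot, k, c):
    """Linear spring-dashpot (Kelvin-Voigt) normal contact force.

    delta:     overlap between particle and cavity wall [m], > 0 when in contact
    delta_dot: rate of change of the overlap [m/s]
    k:         contact stiffness [N/m]
    c:         damping coefficient [N*s/m]
    """
    if delta <= 0.0:
        return 0.0
    return k * delta + c * delta_dot

def total_wall_force(delta, delta_dot, k, c, n_divisions):
    # simplification from the abstract: total force on the primary system approximated as
    # (number of divided areas) x (contact force per divided area)
    return n_divisions * contact_force(delta, delta_dot, k, c)

# illustrative values only
print(total_wall_force(delta=1e-5, delta_dot=0.02, k=1e5, c=5.0, n_divisions=8))
```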

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 447
13191 Purification and Characterization of a Novel Extracellular Chitinase from Bacillus licheniformis LHH100

Authors: Laribi-Habchi Hasiba, Bouanane-Darenfed Amel, Drouiche Nadjib, Pausse André, Mameri Nabil

Abstract:

Chitin, a linear 1,4-linked N-acetyl-D-glucosamine (GlcNAc) polysaccharide, is the major structural component of fungal cell walls, insect exoskeletons and shells of crustaceans. It is one of the most abundant naturally occurring polysaccharides and has attracted tremendous attention in the fields of agriculture, pharmacology and biotechnology. Each year, a vast amount of chitin waste is released from the aquatic food industry, where crustaceans (prawn, crab, shrimp and lobster) constitute one of the main agricultural products. This creates a serious environmental problem. This linear polymer can be hydrolyzed by bases, acids or enzymes such as chitinases. In this context, an extracellular chitinase (ChiA-65) was produced and purified from the newly isolated strain LHH100. Pure protein was obtained after heat treatment and ammonium sulphate precipitation followed by Sephacryl S-200 chromatography. Based on matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF/MS) analysis, the purified enzyme is a monomer with a molecular mass of 65,195.13 Da. The sequence of the 27 N-terminal residues of the mature ChiA-65 showed high homology with family-18 chitinases. Optimal activity was achieved at pH 4 and 75 °C. Among the inhibitors and metals tested, p-chloromercuribenzoic acid, N-ethylmaleimide, Hg2+ and Hg+ completely inhibited enzyme activity. Chitinase activity was high on colloidal chitin, glycol chitin, glycol chitosan, chitotriose and chitooligosaccharide. Chitinase activity towards synthetic substrates, in the order of p-NP-(GlcNAc)n (n = 2-4), was p-NP-(GlcNAc)2 > p-NP-(GlcNAc)4 > p-NP-(GlcNAc)3. Our results suggest that ChiA-65 preferentially hydrolyzed the second glycosidic link from the non-reducing end of (GlcNAc)n. ChiA-65 obeyed Michaelis-Menten kinetics, the Km and kcat values being 0.385 mg colloidal chitin/ml and 5000 s−1, respectively. ChiA-65 exhibited remarkable biochemical properties, suggesting that this enzyme is suitable for the bioconversion of chitin waste.
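
The Michaelis-Menten rate law referred to above, written in its standard form with the reported kinetic constants (the total enzyme concentration [E]₀ is not given in the abstract):

```latex
v = \frac{k_{\mathrm{cat}}\,[E]_0\,[S]}{K_m + [S]},
\qquad
K_m = 0.385~\mathrm{mg\ colloidal\ chitin\,ml^{-1}},\quad
k_{\mathrm{cat}} = 5000~\mathrm{s^{-1}}
```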

Keywords: Bacillus licheniformis LHH100, characterization, extracellular chitinase, purification

Procedia PDF Downloads 427
13190 Comparison of Two Clinical Cases of Plasma Cell Neoplasm by Using the Method of Capillary Electrophoresis

Authors: Kai Pai Huang

Abstract:

Background: There are several types of plasma cell neoplasms found in our lab, including multiple myeloma, plasmacytoma, lymphoplasmacytic lymphoma, and monoclonal gammopathy of undetermined significance (MGUS). Here, we compare two cases using the method of capillary electrophoresis. Method: Serum is prepared, and electrophoresis is performed at alkaline pH in a capillary using the Sebia® Capillary 2. Albumin and globulins are detected by the detector located at the cathode end of the capillary, and the signals are transformed into peaks. Serum was treated with beta-mercaptoethanol, which reduces polymerized immunoglobulin to monomeric immunoglobulin, to clarify whether two M-proteins are secreted from the same plasma cell clone in the bone marrow. Result: Case 1: A 78-year-old female presenting with dysuria, oliguria and leg edema for several months. Laboratory data showed proteinuria, leukocytosis, and high serum IgA and lambda light chain. A renal biopsy found amyloid fibrils in the glomerular mesangial area. Serum protein electrophoresis shows a major monoclonal peak in the β region and a minor small peak in the γ region, and the immunotyping studies for serum showed two IgA/λ types. Case 2: A 55-year-old male presenting with abdominal distension and low back pain for more than one month. Laboratory data showed T12 and T8 compression fractures and high serum IgM and kappa light chain. Bone marrow aspiration showed that the cells from the bone marrow are B cells with monotypic kappa chain expression. Bone marrow biopsy found this to be lymphoplasmacytic lymphoma (Waldenstrom macroglobulinemia). Serum protein electrophoresis shows a monoclonal peak in the β region, and the immunotyping studies for serum showed an IgM/κ type. Conclusion: Plasma cell neoplasms can be diagnosed by many examinations. Among them, capillary electrophoresis allows a lab to separate several types of gammopathy, and the quantification of a monoclonal peak can be used to evaluate the patient's prognosis or treatment.

Keywords: plasma cell neoplasm, capillary electrophoresis, serum protein electrophoresis, immunotyping

Procedia PDF Downloads 133
13189 Transitional Separation Bubble over a Rounded Backward Facing Step Due to a Temporally Applied Very High Adverse Pressure Gradient Followed by a Slow Adverse Pressure Gradient Applied at Inlet of the Profile

Authors: Saikat Datta

Abstract:

Incompressible laminar time-varying flow is investigated, experimentally and through numerical simulation, over a rounded backward-facing step for a triangular piston motion at the inlet of a straight channel with very high acceleration followed by a slow deceleration. The backward-facing step is an important test case, as it embodies important flow characteristics such as the separation point, reattachment length, and recirculation of the flow. A sliding piston imparts two successive triangular velocities at the inlet: constant acceleration from rest, 0 ≤ t ≤ t0, and constant deceleration to rest, t0 ≤ t …

Keywords: laminar boundary layer separation, rounded backward facing step, separation bubble, unsteady separation, unsteady vortex flows

Procedia PDF Downloads 58
13188 Preliminary Experience in Multiple Green Health Hospital Construction

Authors: Ming-Jyh Chen, Wen-Ming Huang, Yi-Chu Liu, Li-Hui Yang

Abstract:

Introduction: Social responsibility is the key to sustainable organizational development. Under the Green Health Hospital Declaration signed by our superintendent, we have launched comprehensive energy conservation management in medical services, in the community, and in the staff's daily life. To carry out environment-friendly promotion with robust strategies, we are building up a low-carbon medical system and community through the promotion of smart, green public construction, as well as intensified energy conservation education and communication. Purpose/Methods: With the support of the board and the superintendent, we established an energy management team, commencing with an environment-friendly system, management, education, and the ISO 50001 energy management system; we have improved energy performance and energy efficiency and continue to do so. Results: In 2021, we achieved multiple goals. The energy management system efficiently controls diesel, natural gas, and electricity usage. About 5% of the consumption was saved compared with the figures from 2018 to 2021. Our hospital develops intelligent services and promotes various paperless electronic operations to provide people with a vibrant and environmentally friendly lifestyle. The goal is to save 68.6% on printing and photocopying by reducing paper use by 35.15 million sheets yearly. We strengthen the concept of environmental protection classification among colleagues. In the past two years, the amount of resource recycling has reached more than 650 tons, and the resource recycling rate has reached 70%. The annual growth rate of waste recycling is about 28 metric tons. Conclusions: To build a green medical system with "high efficacy, high value, low carbon, low reliance", energy stewardship, economic prosperity, and social responsibility are our principles in formulating energy conservation management strategies, converting limited resources to efficient usage, developing clean energy, and continuing towards sustainable energy.

Keywords: energy efficiency, environmental education, green hospital, sustainable development

Procedia PDF Downloads 66
13187 Prioritization of Sub-Watersheds in a Semi-Arid Region: A Case Study of Shevgaon and Pathardi Tahsils in Maharashtra

Authors: Dadasaheb R. Jawre, Maya G. Unde

Abstract:

Prioritization of sub-watersheds plays an important role in watershed management. It identifies which sub-watersheds require treatment for the green growth of the region and for the conservation of the sub-watersheds. A number of factors, such as the topography of the region, climatic characteristics like rainfall and runoff, land-use/land-cover, and social factors, are related to the development of a watershed for agricultural and domestic purposes in the region. The present research focuses on how morphometric parameters, in association with GIS analysis, help in ranking the sub-watersheds for further development with the help of suggested watershed structures. Shevgaon and Pathardi tahsils are drought-prone tahsils of Ahmednagar district in Maharashtra. These tahsils fall within the semi-arid region. Sub-watershed prioritization is necessary for the proper planning and management of natural resources for the sustainable development of the study area. Low rainfall and increasing population pressure on land and water resources lead to water scarcity in the region. Hence, the Shevgaon and Pathardi tahsils were selected for sub-watershed prioritization. Seven sub-watersheds were selected for the present research paper. In the morphometric analysis, linear aspects, areal aspects and relief aspects are considered for the prioritization. The largest sub-watershed is Erdha, located at Karanji in Pathardi tahsil, with an area of 145.06 km2, and the smallest sub-watershed is Erandgaon, located in Shevgaon tahsil, with an area of 40.143 km2. For all seven sub-watersheds, seven morphometric parameters were considered for calculating the compound parameter values. Finally, the compound parameter values are grouped into three classes: high priority (below 4.0), moderate priority (4.0 to 5.0) and low priority (above 5.0). According to the compound values, the Erandgaon, Chapadgaon and Tarak sub-watersheds fall in the high priority group, the Erdha and Domeshwar sub-watersheds fall in the moderate priority group, and the Chandani and Kasichi sub-watersheds fall in the low priority group. Both tahsils fall in a drought-prone area; after implementing the watershed structures, the overall development of the region will take place.
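
A minimal sketch of the compound-parameter ranking procedure commonly used in this kind of morphometric prioritization (rank each sub-watershed for every parameter, average the ranks, then classify by the thresholds given above); the parameter values and the choice of which parameters are ranked in increasing or decreasing order are hypothetical placeholders, not the study's data:

```python
import numpy as np

names = ["Erdha", "Domeshwar", "Erandgaon", "Chapadgaon", "Tarak", "Chandani", "Kasichi"]
rng = np.random.default_rng(0)
values = rng.random((7, 7))                 # rows: sub-watersheds, columns: seven morphometric parameters
# assumed for illustration: True if a higher value means higher erosion priority, False otherwise
higher_is_priority = np.array([True, True, False, True, False, True, True])

ranks = np.empty_like(values)
for j in range(values.shape[1]):
    order = np.argsort(-values[:, j]) if higher_is_priority[j] else np.argsort(values[:, j])
    ranks[order, j] = np.arange(1, len(names) + 1)   # rank 1 = highest priority for that parameter

compound = ranks.mean(axis=1)                        # compound parameter value per sub-watershed
for n, c in sorted(zip(names, compound), key=lambda t: t[1]):
    group = "high" if c < 4.0 else ("moderate" if c <= 5.0 else "low")
    print(f"{n}: compound value {c:.2f} -> {group} priority")
```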

Keywords: sub-watersheds, GIS and remote sensing, morphometric analysis, compound parameter value, prioritization

Procedia PDF Downloads 139
13186 Effects of Particle Size Distribution of Binders on the Performance of Slag-Limestone Ternary Cement

Authors: Zhuomin Zou, Thijs Van Landeghem, Elke Gruyaert

Abstract:

Using supplementary cementitious materials, such as blast-furnace slag and limestone, to replace cement clinker is a promising method to reduce the carbon emissions from cement production. To efficiently use slag and limestone, it is necessary to carefully select the particle size distribution (PSD) of the binders. This study investigated the effects of the PSD of binders on the performance of slag-limestone ternary cement. The Portland cement (PC) was prepared by grinding 95% clinker + 5% gypsum. Based on the PSD parameters of the binders, three types of ternary cements with a similar overall PSD were designed, i.e., NO.1 fine slag, medium PC, and coarse limestone; NO.2 fine limestone, medium PC, and coarse slag; NO.3. fine PC, medium slag, and coarse limestone. The binder contents in the ternary cements were (a) 50 % PC, 40 % slag, and 10 % limestone (called high cement group) or (b) 35 % PC, 55 % slag, and 10 % limestone (called low cement group). The pure PC and binary cement with 50% slag and 50% PC prepared with the same binders as the ternary cement were considered as reference cements. All these cements were used to investigate the mortar performance in terms of workability, strength at 2, 7, 28, and 90 days, carbonation resistance, and non-steady state chloride migration resistance at 28 and 56 days. Results show that blending medium PC with fine slag could exhibit comparable performance to blending fine PC with medium/coarse slag in binary cement. For the three ternary cements in the high cement group, ternary cement with fine limestone (NO.2) shows the lowest strength, carbonation, and chloride migration performance. Ternary cements with fine slag (NO.1) and with fine PC (NO.3) show the highest flexural strength at early and late ages, respectively. In addition, compared with ternary cement with fine PC (NO.3), ternary cement with fine slag (NO.1) has a similar carbonation resistance and a better chloride migration resistance. For the low cement group, three ternary cements have a similar flexural and compressive strength before 7 days. After 28 days, ternary cement with fine limestone (NO.2) shows the highest flexural strength while fine PC (NO.3) has the highest compressive strength. In addition, ternary cement with fine slag (NO.1) shows a better chloride migration resistance but a lower carbonation resistance compared with the other two ternary cements. Moreover, the durability performance of ternary cement with fine PC (NO.3) is better than that of fine limestone (NO.2).

Keywords: limestone, particle size distribution, slag, ternary cement

Procedia PDF Downloads 115
13185 Exact Phase Diagram of High-Tc Superconductors

Authors: Abid Boudiar

Abstract:

We propose a simple model to obtain an exact expression of Tc/Tc,max for the temperature-doping phase diagram of superconducting cuprates. We show that our model predicts most phase-diagram scenarios. We found the exact special doping points p(opt) and p(qcp) and an accurate Eg,max. Some other properties, such as the stripe length of 100.1 Å and the energy gap in the cuprate chains of 6 meV, can also be calculated exactly. Another interesting consequence of this simple picture is the new magic numbers and the ability to express everything using a (Tc, p) diagram via the golden ratio.
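
For context, the empirical parabolic relation widely used to describe the hole-doping dependence of Tc in cuprates is shown below; this is the conventional fit, not the exact expression proposed in the paper:

```latex
\frac{T_c(p)}{T_{c,\max}} = 1 - 82.6\,(p - 0.16)^2
```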

Keywords: superconducting cuprates, phase, pseudogap, hole doping, strips, golden ratio, soliton

Procedia PDF Downloads 460
13184 Reduction of Chlordecone Rates in Bioelectrochemical Systems from Mangrove Swamp Water and Sediment in the Absence of a Redox Mediator

Authors: Malory Beaujolais

Abstract:

Chlordecone is an organochlorine pesticide with a bishomocubane structure, which leads to high stability in organic matter. A microbial fuel cell is a type of electrochemical system that can convert organic matter into electricity thanks to electroactive bacteria. This technique has been used with mangrove swamp water and sediment from Martinique in an attempt to reduce chlordecone rates. These experiments made it possible to characterize the behavior of the electroactive biofilm formed at the cathode, without an added redox mediator. The designed bioelectrochemical system seems to provide the necessary conditions for chlordecone degradation.

Keywords: bioelectrochemistry, bioremediation, chlordecone, mangrove swamp

Procedia PDF Downloads 25
13183 Bioinformatic Design of a Non-toxic Modified Adjuvant from the Native A1 Structure of Cholera Toxin with Membrane Synthetic Peptide of Naegleria fowleri

Authors: Frida Carrillo Morales, Maria Maricela Carrasco Yépez, Saúl Rojas Hernández

Abstract:

Naegleria fowleri is the causative agent of primary amebic meningoencephalitis, an acute and fulminant disease that affects humans. It has been reported that, despite the existence of therapeutic options against this disease, its mortality rate is 97%. Therefore, there is a need for vaccines that confer protection against this disease and for adjuvants that enhance the immune response. In this regard, our work group obtained a peptide, designed from the membrane protein MP2CL5 of Naegleria fowleri and called Smp145, that was shown to be immunogenic; however, it would be of great importance to enhance its immunological response by co-administering it with a non-toxic adjuvant. Therefore, the objective of this work was to carry out the bioinformatic design of a peptide of the Naegleria fowleri membrane protein MP2CL5 conjugated with a non-toxic adjuvant modified from the native A1 structure of Cholera Toxin. Different bioinformatics tools were used to obtain a model with a modification in amino acid 61 of the A1 subunit of CT (CTA1), to which the Smp145 peptide was added; both molecules were joined with a 13-glycine linker. Regarding the results obtained, the modification in CTA1 bound to the peptide produces a reduction in the toxicity of the molecule in in silico experiments; likewise, the predicted binding of Smp145 to the B-cell receptor suggests that the molecule is directed specifically to the BCR, decreasing its native enzymatic activity. The stereochemical evaluation showed that the generated model has a high number of adequately predicted residues. The ERRAT test evaluated the confidence with which regions that exceed the error values can be rejected; the generated model obtained a high score, which indicates that the model has a good structural resolution. Therefore, the conjugated peptide designed in this work will allow us to proceed with its chemical synthesis and subsequently to use it in the mouse model of protection against meningitis caused by N. fowleri.

Keywords: immunology, vaccines, pathogens, infectious disease

Procedia PDF Downloads 72
13182 The Recommended Summary Plan for Emergency Care and Treatment (ReSPECT) Process: An Audit of Its Utilisation on a UK Tertiary Specialist Intensive Care Unit

Authors: Gokulan Vethanayakam, Daniel Aston

Abstract:

Introduction: The ReSPECT process supports healthcare professionals when making patient-centered decisions in the event of an emergency. It has been widely adopted by the NHS in England and allows patients to express thoughts and wishes about treatments and outcomes that they consider acceptable. It includes (but is not limited to) cardiopulmonary resuscitation decisions. ReSPECT conversations should ideally occur prior to ICU admission and should be documented in the eight sections of the nationally-standardised ReSPECT form. This audit evaluated the use of ReSPECT on a busy cardiothoracic ICU in an NHS Trust where established policies advocating its use exist. Methods: This audit was a retrospective review of ReSPECT forms for a sample of high-risk patients admitted to ICU at the Royal Papworth Hospital between January 2021 and March 2022. Patients all received one of the following interventions: Veno-Venous Extra-Corporeal Membrane Oxygenation (VV-ECMO) for severe respiratory failure (retrieved via the national ECMO service); cardiac or pulmonary transplantation-related surgical procedures (including organ transplants and Ventricular Assist Device (VAD) implantation); or elective non-transplant cardiac surgery. The quality of documentation on ReSPECT forms was evaluated using national standards and a graded ranking tool devised by the authors which was used to assess narrative aspects of the forms. Quality was ranked as A (excellent) to D (poor). Results: Of 230 patients (74 VV-ECMO, 104 transplant, 52 elective non-transplant surgery), 43 (18.7%) had a ReSPECT form and only one (0.43%) patient had a ReSPECT form completed prior to ICU admission. Of the 43 forms completed, 38 (88.4%) were completed due to the commencement of End of Life (EoL) care. No non-transplant surgical patients included in the audit had a ReSPECT form. There was documentation of balance of care (section 4a), CPR status (section 4c), capacity assessment (section 5), and patient involvement in completing the form (section 6a) on all 43 forms. Of the 34 patients assessed as lacking capacity to make decisions, only 22 (64.7%) had reasons documented. Other sections were variably completed; 29 (67.4%) forms had relevant background information included to a good standard (section 2a). Clinical guidance for the patient (section 4b) was given in 25 (58.1%), of which 11 stated the rationale that underpinned it. Seven forms (16.3%) contained information in an inappropriate section. In a comparison of ReSPECT forms completed ahead of an EoL trigger with those completed when EoL care began, there was a higher number of entries in section 3 (considering patient’s values/fears) that were assessed at grades A-B in the former group (p = 0.014), suggesting higher quality. Similarly, forms from the transplant group contained higher quality information in section 3 than those from the VV-ECMO group (p = 0.0005). Conclusions: Utilisation of the ReSPECT process in high-risk patients is yet to be well-adopted in this trust. Teams who meet patients before hospital admission for transplant or high-risk surgery should be encouraged to engage with the ReSPECT process at this point in the patient's journey. VV-ECMO retrieval teams should consider ReSPECT conversations with patients’ relatives at the time of retrieval.

Keywords: audit, critical care, end of life, ICU, ReSPECT, resuscitation

Procedia PDF Downloads 57
13181 In-Plume H₂O, CO₂, H₂S and SO₂ in the Fumarolic Field of La Fossa Cone (Vulcano Island, Aeolian Archipelago)

Authors: Cinzia Federico, Gaetano Giudice, Salvatore Inguaggiato, Marco Liuzzo, Maria Pedone, Fabio Vita, Christoph Kern, Leonardo La Pica, Giovannella Pecoraino, Lorenzo Calderone, Vincenzo Francofonte

Abstract:

The periods of increased fumarolic activity at La Fossa volcano have been characterized, since the early 1980s, by changes in the gas chemistry and in the output rate of the fumaroles. Except for the direct measurements of the steam output from fumaroles performed from 1983 to 1995, the mass output of the single gas species has been measured, with various methods, only sporadically or for short periods. Since 2008, a scanning DOAS system has been operating in the Palizzi area for the remote measurement of the in-plume SO₂ flux. On these grounds, the need for a cross-comparison of different methods for the in situ measurement of the output rates of different gas species is envisaged. In 2015, two field campaigns were carried out, aimed at: 1. The mapping of the concentrations of CO₂, H₂S and SO₂ in the fumarolic plume at 1 m from the surface, using specific open-path tunable diode lasers (GasFinder, Boreal Europe Ltd.) and an Active DOAS for SO₂, respectively; these measurements, coupled with simultaneous ultrasonic wind-speed and meteorological data, were processed to obtain the dispersion map and the output rate of the single species over the whole fumarolic field; 2. The mapping of the concentrations of CO₂, H₂S, SO₂ and H₂O in the fumarolic plume at 0.5 m from the soil, using an integrated system including IR spectrometers and specific electrochemical sensors; this provided the concentration ratios of the analysed gas species and their distribution in the fumarolic field; 3. The in-fumarole sampling of vapour and measurement of the steam output, to validate the remote measurements. The dispersion map of CO₂, obtained from the tunable laser measurements, shows a maximum CO₂ concentration at 1 m from the soil of 1000 ppmv along the rim and 1800 ppmv on the inner slopes. As observed, the largest contribution derives from a wide fumarole on the inner slope, despite its present outlet temperature of 230°C, almost 200°C lower than those measured at the rim fumaroles. Actually, fumaroles on the inner slopes are among those emitting the largest amount of magmatic vapour and, during the 1989-1991 crisis, reached a temperature of 690°C. The estimated CO₂ and H₂S fluxes are 400 t/d and 4.4 t/d, respectively. The coeval SO₂ flux, measured by the scanning DOAS system, is 9±1 t/d. The steam output, recomputed from the CO₂ flux measurements, is about 2000 t/d. The various direct and remote methods (as described at points 1-3) have produced coherent results, which encourage the use of daily and automatic DOAS SO₂ data, coupled with periodic in-plume measurements of different acidic gases, to obtain the total mass rates.
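
A sketch of the molar-ratio scaling commonly used to recompute a steam (H₂O) flux from a measured CO₂ flux; the abstract does not state the exact formula applied, so this is the usual relation given for context:

```latex
\Phi_{\mathrm{H_2O}} = \Phi_{\mathrm{CO_2}}\cdot\frac{X_{\mathrm{H_2O}}}{X_{\mathrm{CO_2}}}\cdot\frac{M_{\mathrm{H_2O}}}{M_{\mathrm{CO_2}}}
```

Assuming this relation, the reported values of about 2000 t/d (H₂O) and 400 t/d (CO₂) would correspond to a molar H₂O/CO₂ ratio of roughly (2000/400) × (44/18) ≈ 12 in the plume.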

Keywords: DOAS, fumaroles, plume, tunable laser

Procedia PDF Downloads 385
13180 Multiscale Process Modeling of Ceramic Matrix Composites

Authors: Marianna Maiaru, Gregory M. Odegard, Josh Kemppainen, Ivan Gallegos, Michael Olaya

Abstract:

Ceramic matrix composites (CMCs) are typically used in applications that require long-term mechanical integrity at elevated temperatures. CMCs are usually fabricated using a polymer precursor that is initially polymerized in situ with fiber reinforcement, followed by a series of cycles of pyrolysis to transform the polymer matrix into a rigid glass or ceramic. The pyrolysis step typically generates volatile gasses, which creates porosity within the polymer matrix phase of the composite. Subsequent cycles of monomer infusion, polymerization, and pyrolysis are often used to reduce the porosity and thus increase the durability of the composite. Because of the significant expense of such iterative processing cycles, new generations of CMCs with improved durability and manufacturability are difficult and expensive to develop using standard Edisonian approaches. The goal of this research is to develop a computational process-modeling-based approach that can be used to design the next generation of CMC materials with optimized material and processing parameters for maximum strength and efficient manufacturing. The process modeling incorporates computational modeling tools, including molecular dynamics (MD), to simulate the material at multiple length scales. Results from MD simulation are used to inform the continuum-level models to link molecular-level characteristics (material structure, temperature) to bulk-level performance (strength, residual stresses). Processing parameters are optimized such that process-induced residual stresses are minimized and laminate strength is maximized. The multiscale process modeling method developed with this research can play a key role in the development of future CMCs for high-temperature and high-strength applications. By combining multiscale computational tools and process modeling, new manufacturing parameters can be established for optimal fabrication and performance of CMCs for a wide range of applications.

Keywords: digital engineering, finite elements, manufacturing, molecular dynamics

Procedia PDF Downloads 86
13179 Study of the Transport of Colloidal ²²⁶Ra in a Mining Context Using a Multi-Disciplinary Approach

Authors: Marine Reymond, Michael Descostes, Marie Muguet, Clemence Besancon, Martine Leermakers, Catherine Beaucaire, Sophie Billon, Patricia Patrier

Abstract:

²²⁶Ra is one of the radionuclides resulting from the disintegration of ²³⁸U. Due to its half-life (1600 y) and its high specific activity (3.7 × 10¹⁰ Bq/g), ²²⁶Ra is found at the ultra-trace level in the natural environment (usually below 1 Bq/L, i.e. 10⁻¹³ mol/L). Because of its decay into ²²²Rn, a radioactive gas with a shorter half-life (3.8 days) that is difficult to control and dangerous for humans when inhaled, ²²⁶Ra is subject to dedicated monitoring in surface waters, especially in the context of uranium mining. In natural waters, radionuclides occur in dissolved, colloidal, or particulate forms. Due to the size of colloids, generally ranging between 1 nm and 1 µm, and their high specific surface areas, the colloidal fraction can be involved in the transport of trace elements, including radionuclides, in the environment. The colloidal fraction is not always easy to determine, and few existing studies focus on ²²⁶Ra. In the present study, a complete multidisciplinary approach is proposed to assess the colloidal transport of ²²⁶Ra. It includes water sampling by conventional filtration (0.2 µm) and by the Diffusive Gradients in Thin Films (DGT) technique, which measures the dissolved fraction (<10 nm), from which the colloidal fraction can be estimated. Suspended matter in these waters was also sampled and characterized mineralogically by X-ray diffraction, infrared spectroscopy, and scanning electron microscopy. All of these data, acquired at a rehabilitated former uranium mine, were used to build a geochemical model with the geochemical calculation code PhreeqC, describing the colloidal transport of ²²⁶Ra as accurately as possible. Colloidal transport of ²²⁶Ra was found, for some of the sampling points, to account for up to 95% of the total ²²⁶Ra measured in the water. Mineralogical characterization and the associated geochemical modelling highlight the role of barite, a barium sulfate mineral well known to trap ²²⁶Ra in its structure. Barite was shown to be responsible for the colloidal ²²⁶Ra fraction despite the presence of kaolinite and ferrihydrite, which are also known to retain ²²⁶Ra by sorption.
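
The colloidal fraction estimate described above amounts to a simple difference between the 0.2 µm filtered concentration and the DGT-labile (<10 nm) concentration. The sketch below illustrates this arithmetic; the station names and activities are hypothetical placeholders, not measured data.

```python
# Minimal sketch (hypothetical activities): colloidal 226Ra fraction as the
# difference between the <0.2 um filtered concentration and the DGT-labile
# (<10 nm, "truly dissolved") concentration.
samples = {
    # station: (filtered_0p2um_Bq_per_L, dgt_labile_Bq_per_L)  -- placeholders
    "P1": (0.80, 0.05),
    "P2": (0.30, 0.21),
    "P3": (0.12, 0.11),
}

for station, (filtered, dgt) in samples.items():
    colloidal = max(filtered - dgt, 0.0)           # Bq/L carried by colloids
    fraction = 100.0 * colloidal / filtered        # % of the 0.2 um fraction
    print(f"{station}: colloidal 226Ra ~ {fraction:.0f}% of the filtered 226Ra")
```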

Keywords: colloids, mining context, radium, transport

Procedia PDF Downloads 142
13178 Topographic Characteristics Derived from UAV Images to Detect Ephemeral Gully Channels

Authors: Recep Gundogan, Turgay Dindaroglu, Hikmet Gunal, Mustafa Ulukavak, Ron Bingner

Abstract:

A majority of the total soil loss in agricultural areas can be attributed to ephemeral gullies caused by heavy rains in conventionally tilled fields; however, ephemeral gully erosion is often ignored in conventional soil erosion assessments. Ephemeral gullies are easily filled in by normal tillage operations, which makes capturing existing ephemeral gullies in croplands difficult. This study was carried out to determine topographic features, including slope, aspect, the compound topographic index (CTI), and the initiation points of gully channels, using images obtained from an unmanned aerial vehicle (UAV). The study area was located in the Topçu stream watershed in the eastern Mediterranean region, where intense rainfall events occur over very short time periods. The slope varied between 0.7 and 99.5%, and the average slope was 24.7%. A multi-propeller hexacopter was used as the carrier platform, and images were obtained with the RGB camera mounted on the UAV. The digital terrain models (DTMs) of the Topçu stream micro-catchment produced using UAV images and manual field Global Positioning System (GPS) measurements were compared to assess the accuracy of the UAV-based measurements. Eighty-one gully channels were detected in the study area. The mean slope and CTI values in the micro-catchment obtained from the DTMs generated using UAV images were 19.2% and 3.64, respectively, and both slope and CTI values were lower than those obtained using GPS measurements. The total length and volume of the gully channels were 868.2 m and 5.52 m³, respectively. Topographic characteristics and information on the ephemeral gully channels (location of the initiation point, volume, and length) were estimated with high accuracy from the UAV images. The results reveal that UAV-based measuring techniques can be used in lieu of existing GPS and total station techniques when high-resolution UAV imagery is available.
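
For illustration, the sketch below derives slope, aspect, and a wetness-style compound topographic index, CTI = ln(a / tan β), from a small DTM grid. The index definition, the aspect convention, the flow-accumulation placeholder, and all numbers are assumptions for demonstration rather than the authors' processing chain.

```python
# Illustrative sketch (not the authors' workflow): slope, aspect and a
# wetness-style CTI = ln(a / tan(beta)) from a DTM raster, assuming a
# flow-accumulation grid is already available (e.g. from a D8 routing step).
import numpy as np

cell = 0.10                                   # m, assumed DTM resolution
dtm = np.array([[10.0, 10.2, 10.5],
                [ 9.8, 10.0, 10.3],
                [ 9.5,  9.7, 10.0]])          # hypothetical elevations (m)
flow_acc = np.full_like(dtm, 25.0)            # cells draining in, placeholder

dz_dy, dz_dx = np.gradient(dtm, cell)         # elevation derivatives
slope = np.arctan(np.hypot(dz_dx, dz_dy))     # slope in radians
# one common aspect convention (degrees clockwise from north); GIS packages differ
aspect = (np.degrees(np.arctan2(-dz_dx, dz_dy)) + 360.0) % 360.0

specific_area = flow_acc * cell               # upslope area per unit width (m)
cti = np.log(specific_area / np.maximum(np.tan(slope), 1e-6))

print("mean slope (%):", np.tan(slope).mean() * 100.0)
print("mean CTI:", cti.mean())
```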

Keywords: aspect, compound topographic index, digital terrain model, initial gully point, slope, unmanned aerial vehicle

Procedia PDF Downloads 100
13177 Mesoporous Na2Ti3O7 Nanotube-Constructed Materials with Hierarchical Architecture: Synthesis and Properties

Authors: Neumoin Anton Ivanovich, Opra Denis Pavlovich

Abstract:

Materials based on titanium oxide compounds are widely used in areas such as solar energy, photocatalysis, the food industry and hygiene products, and biomedical technologies. Demand for them has also formed in the battery industry (an example is the commercialization of Li4Ti5O12), where much attention has recently been paid to the development of next-generation systems and technologies, such as sodium-ion batteries. This dictates the need to search for new materials with improved characteristics, as well as for scalable routes to obtain them. One way to address these problems is the creation of nanomaterials, whose physicochemical properties often differ radically from those of their micro- or macroscopic counterparts. At the same time, it is important to control the texture (specific surface area, porosity) of such materials. In view of the above, the hydrothermal technique appears suitable among other methods, as it allows wide control over the synthesis conditions. In the present study, a method was developed for the preparation of mesoporous nanostructured sodium trititanate (Na2Ti3O7) with a hierarchical architecture. The materials were synthesized by hydrothermal processing and exhibit a hierarchically organized two-level architecture: at the first level of the hierarchy, the materials consist of particles with rough surfaces, and at the second level, of one-dimensional nanotubes. The products were found to have a high specific surface area and porosity, with a narrow pore size distribution (about 6 nm). Specific surface area and porosity are important characteristics of functional materials, which largely determine the possibilities and directions of their practical application. Electrochemical impedance spectroscopy data show that the resulting sodium trititanate has a sufficiently high electrical conductivity. The synthesized hierarchically organized, porous sodium trititanate nanoarchitecture is therefore expected to be of practical interest, for example, in the field of next-generation electrochemical energy storage and conversion devices.
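
As a back-of-the-envelope illustration of how a conductivity value can be extracted from impedance data such as those mentioned above, the sketch below applies σ = t / (R·A) to an assumed bulk resistance and pellet geometry; all values are placeholders, not the measured results.

```python
# Back-of-the-envelope sketch (placeholder numbers): effective conductivity
# from an EIS-derived bulk resistance and pellet geometry, sigma = t / (R_b * A).
import math

R_bulk = 1.5e4          # ohm, hypothetical high-frequency intercept
diameter = 1.0e-2       # m, pellet diameter (assumed)
thickness = 1.0e-3      # m, pellet thickness (assumed)

area = math.pi * (diameter / 2.0)**2
sigma = thickness / (R_bulk * area)            # S/m
print(f"effective conductivity ~ {sigma:.2e} S/m ({sigma * 1e-2:.2e} S/cm)")
```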

Keywords: sodium trititanate, hierarchical materials, mesoporosity, nanotubes, hydrothermal synthesis

Procedia PDF Downloads 96
13176 Systematic Review of Quantitative Risk Assessment Tools and Their Effect on Racial Disproportionality in Child Welfare Systems

Authors: Bronwen Wade

Abstract:

Over the last half-century, child welfare systems have increasingly relied on quantitative risk assessment tools, such as actuarial or predictive risk tools. These tools are developed by statistically analysing how attributes captured in administrative data relate to future child maltreatment. Some scholars argue that attributes in administrative data can serve as proxies for race and that quantitative risk assessment tools reify racial bias in decision-making. Others argue that these tools provide more “objective” and “scientific” guides for decision-making than subjective social worker judgment. This study performs a systematic review of the literature on the impact of quantitative risk assessment tools on racial disproportionality; it examines methodological biases in work on this topic, summarizes key findings, and provides suggestions for further work. A search of CINAHL, PsycINFO, the ProQuest Social Science Premium Collection, and the ProQuest Dissertations and Theses Collection was performed. Academic and grey literature were included. The review includes studies that use quasi-experimental methods as well as development, validation, or re-validation studies of quantitative risk assessment tools. PROBAST (Prediction model Risk of Bias Assessment Tool) and CHARMS (CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies) were used to assess the risk of bias and guide data extraction for the development, validation, or re-validation studies. ROBINS-I (Risk of Bias in Non-Randomized Studies of Interventions) was used to assess bias and guide data extraction for the quasi-experimental studies identified. Due to heterogeneity among the papers, a meta-analysis was not feasible, and a narrative synthesis was conducted. Eleven papers met the eligibility criteria, and each has an overall high risk of bias based on the PROBAST and ROBINS-I assessments. This is deeply concerning, as major policy decisions have been made on the basis of a limited number of studies with a high risk of bias. Findings on racial disproportionality have been mixed and depend on the tool and approach used. Authors use various definitions of racial equity, fairness, or disproportionality. These concepts of statistical fairness are connected to theories about the reasons for racial disproportionality in child welfare, or to social definitions of fairness, that are usually not stated explicitly. Most findings from these studies are unreliable, given the high degree of bias. However, some of the less biased measures within studies suggest that quantitative risk assessment tools may worsen racial disproportionality, depending on how disproportionality is mathematically defined. Authors vary widely in their approach to defining and addressing racial disproportionality, making it difficult to generalize findings or approaches across studies. This review demonstrates the power of authors to shape policy or discourse around racial justice through their choice of statistical methods; it also demonstrates the need for improved rigor and transparency in studies of quantitative risk assessment tools. Finally, this review raises concerns about the impact that these tools have on child welfare systems and racial disproportionality.
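
Because the reported outcomes depend on how disproportionality is mathematically defined, the sketch below contrasts two such definitions, a false positive rate ratio and a screen-in rate ratio, on synthetic counts. The groups, threshold, and numbers are invented for illustration and do not come from the reviewed studies.

```python
# Illustrative sketch (synthetic counts): two ways studies can operationalize
# "disproportionality" for a risk tool at a fixed screen-in threshold --
# a false positive rate ratio and a representation (screen-in) rate ratio.
groups = {
    # group: confusion-matrix counts at a fixed threshold (placeholders)
    "group_A": {"tp": 40, "fp": 160, "tn": 740, "fn": 60},
    "group_B": {"tp": 35, "fp": 90,  "tn": 830, "fn": 45},
}

def false_positive_rate(c):
    return c["fp"] / (c["fp"] + c["tn"])

def screen_in_rate(c):
    return (c["tp"] + c["fp"]) / sum(c.values())

fpr = {g: false_positive_rate(c) for g, c in groups.items()}
sir = {g: screen_in_rate(c) for g, c in groups.items()}

print("FPR ratio A/B:      ", round(fpr["group_A"] / fpr["group_B"], 2))
print("screen-in ratio A/B:", round(sir["group_A"] / sir["group_B"], 2))
```

The two ratios differ even on the same data, which is one reason studies using different fairness definitions reach different conclusions about the same tool.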

Keywords: actuarial risk, child welfare, predictive risk, racial disproportionality

Procedia PDF Downloads 39
13175 Leveraging Remote Sensing Information for Drought Disaster Risk Management

Authors: Israel Ropo Orimoloye, Johanes A. Belle, Olusola Adeyemi, Olusola O. Ololade

Abstract:

With more than 100,000 orbits during the past 20 years, Terra has significantly improved our knowledge of the Earth's climate and its implications for societies and ecosystems, including human activity and natural disasters such as drought events. Given the performance of Terra's instruments and the free distribution of its products, this study utilised Terra MOD13Q1 satellite data to assess drought disaster events and their spatiotemporal patterns over the Free State Province of South Africa between 2001 and 2019 for the summer, autumn, winter, and spring seasons. The study also used high-resolution downscaled climate change projections under three representative concentration pathways (RCPs). Three future periods, short term (2030s), medium term (2040s), and long term (2050s), are analysed against the current period to understand the potential magnitude of projected climate change-related drought. The study revealed that the years 2001 and 2016 witnessed extreme drought conditions, with the drought index between 0 and 20% across the entire province during summer, while the years 2003, 2004, 2007, and 2015 showed severe drought conditions across the region, with variation from one part to another. The results show that between latitudes -24.5° and -25.5°, the area witnessed a decrease in precipitation (80 to 120 mm) across the time slices, and an increase between latitudes -26° and -28° S for the summer seasons, which is more prominent in the years 2041 to 2050. This study emphasizes the strong spatio-environmental impacts within the province and highlights the associated factors that characterise high drought stress risk, especially for the environment and ecosystems. It contributes to a disaster risk framework for identifying areas for specific research and adaptation activities on drought disaster risk, and for environmental planning in the study area, which is characterised by both rural and urban contexts, in order to address climate change-related drought impacts.
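
The 0-20% "extreme drought" class cited above is consistent with a Vegetation Condition Index style formulation; assuming such an index (the abstract does not name it), the sketch below computes it per pixel from a synthetic NDVI stack standing in for MOD13Q1 composites. The thresholds and data are illustrative only.

```python
# Minimal sketch (assumed index): a Vegetation Condition Index (VCI) style
# drought index computed per pixel from an NDVI time series, then classified.
import numpy as np

rng = np.random.default_rng(0)
# years x rows x cols NDVI stack for one season (placeholder data; in practice
# this would come from MOD13Q1 composites)
ndvi = rng.uniform(0.2, 0.8, size=(19, 4, 4))

ndvi_min = ndvi.min(axis=0)
ndvi_max = ndvi.max(axis=0)
target_year = 0                                    # e.g. the 2001 season

vci = 100.0 * (ndvi[target_year] - ndvi_min) / np.maximum(ndvi_max - ndvi_min, 1e-6)

classes = np.select(
    [vci < 20, vci < 40, vci < 60],
    ["extreme", "severe", "moderate"],
    default="no drought",
)
print(np.round(vci, 1))
print(classes)
```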

Keywords: remote sensing, drought disaster, climate scenario, assessment

Procedia PDF Downloads 176
13174 Cinema and the Documentation of Mass Killings in Third World Countries: A Study of Selected African Films

Authors: Chijindu D. Mgbemere

Abstract:

Mass killing, also known as genocide, is the systematic killing of people from a national, ethnic, or religious group, or an attempt to do so. The act existed before 1948, when it was officially recognized for what it is. Since then, the world has continued to witness genocide in diverse forms, negating the various measures taken by the United Nations and its agencies to curb it. So far, studies and documentation on this subject have been biased in favor of radio and print. This paper therefore extended the interrogation of genocide, and of its devastating effects, to the film medium, and in doing so devised an innovative and pragmatic approach to genocide scholarship. It further centered attention on the factors and impacts of genocide, with a view to determining how effective film can be in such a study. The study is anchored on Bateson’s framing theory. Four films, Hotel Rwanda, Half of a Yellow Sun, Attack on Darfur, and Sarafina, were analyzed through content analysis, based on the background, factors and causes, impacts, and development of genocide. The study found that, as other continents strive towards peace, acts of genocide are on the increase in Africa. Bloodletting stereotypes give Africa a negative image in global society. Difficult political frameworks and the trauma of the postcolonial state, aggravated by ethnic and religious intolerance and limited access to resources, are responsible for the high incidence of genocide in Africa. The media, international communities, and peace agencies often abet rather than prevent genocide or mass killings in Africa. High human casualties and displacement, child soldiering, looting, hunger, rape, sex slavery and abuse, and mental and psychosomatic stress disorders are some of the impacts of genocide. Genocidaires are either condemned or killed. Grievances can instead be vented through civil resistance, negotiation, adjudication, arbitration, and mediation. The cinema is an effective means of studying and documenting genocide. Africans must take the laundering of their continent's image into consideration. Punishment of genocidaires without an attempt to de-radicalize them is counterproductive.

Keywords: African film, genocide, framing theory, mass murder

Procedia PDF Downloads 108
13173 Thermal Decomposition Behaviors of Hexafluoroethane (C2F6) Using Zeolite/Calcium Oxide Mixtures

Authors: Kazunori Takai, Weng Kaiwei, Sadao Araki, Hideki Yamamoto

Abstract:

HFC and PFC gases have been widely used as refrigerants in air conditioners and as etching agents in semiconductor manufacturing because of their high heat of vaporization and chemical stability. On the other hand, HFCs and PFCs have a strong global warming effect. Therefore, these gases emitted from equipment such as refrigerators must be decomposed. Until now, disposal of these gases has been carried out mainly by combustion methods such as rotary kiln treatment. However, this treatment requires extremely high temperatures, over 1000 °C. In recent years, in order to reduce energy consumption, hydrolytic decomposition using catalysts and plasma decomposition treatments have attracted much attention as new disposal methods. However, the decomposition of fluorine-containing gases under wet conditions cannot avoid the generation of hydrofluoric acid. Hydrofluoric acid is a corrosive gas that deteriorates the catalysts in the decomposition process, and an additional step for its neutralization is also indispensable. In this study, the decomposition of C2F6 using zeolite and zeolite/CaO mixtures as reactants was evaluated under dry conditions at 923 K. The effect of the chemical structure of the zeolite on the decomposition reaction was examined using H-Y, H-Beta, H-MOR, and H-ZSM-5. The formation of CaF2 in the zeolite/CaO mixtures after the decomposition reaction was confirmed by XRD measurements. The decomposition of C2F6 using zeolite alone as the reactant showed closely similar behavior regardless of the zeolite type (MOR, Y, ZSM-5, Beta). There was no difference in the XRD patterns of each zeolite before and after the reaction. In contrast, differences in C2F6 decomposition were observed among the zeolite/CaO mixtures. These results suggest that the rate-determining process for C2F6 decomposition on zeolite alone is the removal of fluorine from the reactive sites. In other words, C2F6 decomposition over zeolite/CaO improved compared with that over zeolite alone because the fluorine is removed from the reactive sites as fluorite (CaF2). H-MOR/CaO showed 100% decomposition for 3.5 h, a significant improvement over the zeolite alone, whereas Y-type zeolite showed no improvement, that is, almost the same value as Y-type zeolite alone. The descending order of C2F6 decomposition was MOR, ZSM-5, Beta, and Y-type zeolite. This order matches the acid strength order characterized by NH3-TPD. Hence, C-F bond cleavage is considered to be closely related to acid strength.
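
Assuming, for illustration, that all six fluorine atoms of each C2F6 molecule are captured by CaO as CaF2 (i.e. 3 mol CaO per mol C2F6), the simple stoichiometric sketch below estimates the CaO demand and CaF2 yield per kilogram of C2F6 decomposed. This is a simplification that does not track the fate of the carbon and is not taken from the study itself.

```python
# Simplified stoichiometric sketch (assumption: complete fluorine capture as
# CaF2, 3 mol CaO per mol C2F6; carbon products not tracked).
M_C2F6 = 138.01   # g/mol
M_CaO  = 56.08    # g/mol
M_CaF2 = 78.07    # g/mol

c2f6_kg = 1.0                                   # basis: 1 kg of C2F6 decomposed
n_c2f6 = c2f6_kg * 1000.0 / M_C2F6              # mol
n_cao  = 3.0 * n_c2f6                           # mol CaO consumed
n_caf2 = 3.0 * n_c2f6                           # mol CaF2 formed

print(f"CaO required : {n_cao * M_CaO / 1000.0:.2f} kg per kg C2F6")
print(f"CaF2 formed  : {n_caf2 * M_CaF2 / 1000.0:.2f} kg per kg C2F6")
```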

Keywords: hexafluoroethane, zeolite, calcium oxide, decomposition

Procedia PDF Downloads 461