Search results for: laminar boundary layer separation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4818

348 CeO₂-Decorated Graphene-coated Nickel Foam with NiCo Layered Double Hydroxide for Efficient Hydrogen Evolution Reaction

Authors: Renzhi Qi, Zhaoping Zhong

Abstract:

Under the dual pressure of the global energy crisis and environmental pollution, avoiding the consumption of non-renewable, carbon-based fossil fuels as energy carriers and developing and utilizing non-carbon energy carriers are basic requirements for the future new energy economy. Electrocatalysts for water splitting play an important role in building sustainable and environmentally friendly energy conversion. The oxygen evolution reaction (OER) is essentially limited by the slow kinetics of multi-step proton-electron transfer, which limits the efficiency and raises the cost of water splitting. In this work, CeO₂@NiCo-NRGO/NF hybrid materials were prepared by a multi-step hydrothermal method, using nickel foam (NF) and nitrogen-doped reduced graphene oxide (NRGO) as conductive substrates, and were used as highly efficient catalysts for the OER. The well-connected nanosheet array forms a three-dimensional (3D) network on the substrate, providing a large electrochemical surface area with abundant catalytically active sites. Doping CeO₂ into the NiCo-NRGO/NF electrocatalyst promotes the dispersion of the active species and exerts a synergistic effect that promotes the activation of reactants, which is crucial for improving its OER performance. The results indicate that CeO₂@NiCo-NRGO/NF requires an overpotential of only 250 mV to drive a current density of 10 mA cm⁻² for the OER in 1 M KOH, and it exhibits excellent stability at this current density for more than 10 hours. The double-layer capacitance (Cdl) values show that CeO₂@NiCo-NRGO/NF significantly improves the interfacial conductivity and electrochemically active surface area. The hybrid structure promotes the catalytic performance of the oxygen evolution reaction, offering a low onset potential, high electrical activity, and excellent long-term durability. This strategy for improving the catalytic activity of NiCo-LDH can be used to develop a variety of other electrocatalysts for water splitting.
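The double-layer capacitance values mentioned above are commonly converted into an estimate of the electrochemically active surface area (ECSA). A minimal sketch of that conversion is below; the specific capacitance C_s is an assumed literature figure, not a value from this abstract:

```python
def ecsa_from_cdl(cdl_mF, cs_mF_per_cm2=0.040):
    """Estimate ECSA (cm^2) from the double-layer capacitance C_dl (mF).

    C_s ~ 0.040 mF cm^-2 is a commonly assumed specific capacitance for a
    flat oxide surface in 1 M KOH (an assumption here, not a measured value).
    """
    return cdl_mF / cs_mF_per_cm2

# An illustrative catalyst with C_dl = 4.0 mF corresponds to 100 cm^2.
print(ecsa_from_cdl(4.0))
```

A larger Cdl thus maps directly to a larger estimated active area, which is why Cdl is used as a proxy for catalytic site abundance.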

Keywords: CeO₂, reduced graphene oxide, NiCo-layered double hydroxide, oxygen evolution reaction

Procedia PDF Downloads 82
347 Impact of the Oxygen Content on the Optoelectronic Properties of the Indium-Tin-Oxide Based Transparent Electrodes for Silicon Heterojunction Solar Cells

Authors: Brahim Aissa

Abstract:

Transparent conductive oxides (TCOs) used as front electrodes in solar cells must simultaneously feature high electrical conductivity, low contact resistance with the adjacent layers, and an appropriate refractive index for maximal light in-coupling into the device. However, these properties may conflict with each other, thereby motivating the search for high-performance TCOs. Additionally, due to the presence of temperature-sensitive layers in many solar cell designs (for example, in thin-film silicon and silicon heterojunction (SHJ) cells), low-temperature deposition processes are more suitable. Several deposition techniques have already been explored to fabricate high-mobility TCOs at low temperatures, including sputter deposition, chemical vapor deposition, and atomic layer deposition. Among this variety of methods, to the best of our knowledge, magnetron sputtering is the most established technique, despite the fact that it can damage underlying layers. Sn-doped In₂O₃ (ITO) is the most commonly used transparent electrode contact in SHJ technology. In this work, we studied the properties of ITO thin films grown by RF sputtering. Using different oxygen fractions in the argon/oxygen plasma, we prepared ITO films deposited on glass substrates, on the one hand, and on doped a-Si:H (p- and n-type)/intrinsic a-Si:H/glass stacks, on the other hand. Hall effect measurements were systematically conducted together with total-transmittance (TT) and total-reflectance (TR) spectrometry. The electrical properties were drastically affected by the oxygen variation, whereas the TT and TR were only slightly impacted. Furthermore, time-of-flight secondary ion mass spectrometry (TOF-SIMS) was used to determine the distribution of various species throughout the thickness of the ITO and at the various interfaces. The depth profiles of indium, oxygen, tin, silicon, phosphorus, boron, and hydrogen were investigated throughout the various thicknesses and interfaces, and the obtained results are discussed accordingly. Finally, the extreme conditions were selected to fabricate rear-emitter SHJ devices, and the photovoltaic performance was evaluated; the lower oxygen flow ratio was found to yield the best performance, attributed to lower series resistance.

Keywords: solar cell, silicon heterojunction, oxygen content, optoelectronic properties

Procedia PDF Downloads 159
346 Evaluating the Small-Strain Mechanical Properties of Cement-Treated Clayey Soils Based on the Confining Pressure

Authors: Muhammad Akmal Putera, Noriyuki Yasufuku, Adel Alowaisy, Ahmad Rifai

Abstract:

Indonesia’s government has planned a high-speed railway project connecting the capital city Jakarta with Surabaya, a distance of about 700 km. The route runs across a lowland soil region comprising cohesive soil with high water content and a high compressibility index, which leads to settlement problems. Among the variety of railway track structures, the ballastless track has been used effectively to reduce settlement; it provides a lightweight structure and minimizes workspace. However, deploying this thin-layer structure above a lowland area introduces several problems, such as a lack of bearing capacity and deflection under traffic loading, so it must be combined with ground improvement to ensure acceptable settlement behavior on clayey soil. Considering both the assurance of strength increase and the working period, methods such as cement-treated soil are well suited as the substructure of the railway track. Mechanical properties in the field are commonly evaluated using the plate load test and the cone penetration test. However, observing the increase in mechanical properties involves uncertainty, especially for cement-treated soil in the substructure, and current quality control of cement-treated soils is established by laboratory tests. Moreover, small-strain laboratory measurements can yield more reliable results, close to those of field measurement tests. The aims of this research are to show the intercorrelation of confining pressure with the initial Young's modulus (E₀), Poisson's ratio (ν₀), and shear modulus (G₀) within the small-strain range, and to investigate the discrepancies between those parameters. The experimental results confirmed a power-function intercorrelation between cement content and confining pressure. In addition, higher cement ratios showed discrepancies, in contrast with low mixing ratios.
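For an isotropic material in the small-strain (elastic) range, the three parameters above are linked by a standard relation, which offers a quick consistency check on measured values. A minimal sketch, with illustrative numbers rather than the study's data:

```python
def shear_modulus(E0, nu0):
    """Isotropic small-strain relation: G_0 = E_0 / (2 * (1 + nu_0))."""
    return E0 / (2.0 * (1.0 + nu0))

# e.g. an illustrative E_0 = 120 MPa with nu_0 = 0.2 implies G_0 = 50 MPa
print(shear_modulus(120.0, 0.2))
```

A measured G₀ that departs strongly from this prediction is one way the "discrepancies between those parameters" mentioned above can be quantified.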

Keywords: amount of cement, elastic zone, high-speed railway, lightweight structure

Procedia PDF Downloads 141
345 Nanofiltration Membranes with Deposited Polyelectrolytes: Characterisation and Antifouling Potential

Authors: Viktor Kochkodan

Abstract:

The main problem arising in water treatment and desalination using pressure-driven membrane processes such as microfiltration, ultrafiltration, nanofiltration, and reverse osmosis is membrane fouling, which seriously hampers the application of membrane technologies. One of the main approaches to mitigating membrane fouling is to minimize the adhesion interactions between a foulant and a membrane, and surface coating of membranes with polyelectrolytes appears to be a simple and flexible technique to improve membrane fouling resistance. In this study, the composite polyamide membranes NF-90, NF-270, and BW-30 were modified using electrostatic deposition of polyelectrolyte multilayers made from various polycationic and polyanionic polymers of different molecular weights. Anionic polyelectrolytes such as poly(sodium 4-styrene sulfonate), poly(vinyl sulfonic acid) sodium salt, poly(4-styrene sulfonic acid-co-maleic acid) sodium salt, and poly(acrylic acid) sodium salt (PA), and cationic polyelectrolytes such as poly(diallyldimethylammonium chloride), poly(ethylenimine), and poly(hexamethylene biguanide) were used for membrane modification. The effects of deposition time and of the number of polyelectrolyte layers on the membrane modification were evaluated. It was found that the degree of membrane modification depends on the chemical nature and molecular weight of the polyelectrolytes used. The surface morphology of the prepared composite membranes was studied using atomic force microscopy. It was shown that the membrane surface roughness decreases significantly as the number of polyelectrolyte layers on the membrane surface increases. This smoothening of the membrane surface might contribute to a reduction of membrane fouling, as lower roughness is most often associated with a decrease in surface fouling.
Zeta potentials and water contact angles of the membrane surface before and after modification were also evaluated to provide additional information regarding membrane fouling. It was shown that the surface charge of the membranes modified with polyelectrolytes could be switched between positive and negative by coating with a cationic or an anionic polyelectrolyte. The water contact angle, on the other hand, was strongly affected when the outermost polyelectrolyte layer was changed. Finally, a distinct difference between the performance of the uncoated membranes and that of the polyelectrolyte-modified membranes was found during treatment of seawater in the non-continuous regime. A possible mechanism for the higher fouling resistance of the modified membranes is discussed.

Keywords: contact angle, membrane fouling, polyelectrolytes, surface modification

Procedia PDF Downloads 251
344 Productivity of Grain Sorghum-Cowpea Intercropping System: Climate-Smart Approach

Authors: Mogale T. E., Ayisi K. K., Munjonji L., Kifle Y. G.

Abstract:

Grain sorghum and cowpea are important staple crops in many areas of South Africa, particularly Limpopo Province. The two crops are produced under a wide range of unsustainable conventional methods, which reduces productivity in the long run. Climate-smart traditional methods such as intercropping can be adopted to ensure sustainable production of these two important crops in the province. A no-tillage field experiment was laid out in a randomised complete block design (RCBD) with four replications over two seasons in two distinct agro-ecological zones of the province, Syferkuil and Ofcolaco, to assess the productivity of sorghum intercropped with cowpea at two cowpea densities. An LCi Ultra compact photosynthesis system was used to collect photosynthetic rate data biweekly between 11h00 and 13h00 until physiological maturity. Biomass and grain yield of the component crops in binary and sole cultures were determined at harvest maturity from middle rows of a 2.7 m² area. The biomass was oven-dried in the laboratory at 65 °C to constant weight. To obtain grain yield, harvested sorghum heads and cowpea pods were threshed, cleaned, and weighed. The harvest index (HI) and land equivalent ratio (LER) of the two crops were calculated to assess intercrop productivity relative to sole cultures. Data were analysed using Statistical Analysis System (SAS) software, version 9.4, followed by mean separation using the least significant difference method. The photosynthetic rate of the sorghum-cowpea intercrop was influenced by cowpea density and sorghum cultivar. The photosynthetic rate under low density was higher than under high density, though this depended on the growing conditions. Dry biomass accumulation, grain yield, and harvest index differed among the sorghum cultivars and cowpea in both binary and sole cultures at the two test locations during the 2018/19 and 2020/21 growing seasons.
Cowpea grain and dry biomass yields were in excess of 60% higher under high density than under low density in both binary and sole cultures. The results revealed that grain yield accumulation of the sorghum cultivars was influenced by the density of the companion cowpea crop as well as by the production season. For instance, at Syferkuil, Enforcer and Ns5511 accumulated high yields under low density, whereas at Ofcolaco the higher yields were recorded under high density. Generally, under low cowpea density, cultivar Enforcer produced a relatively higher grain yield, whereas under higher density Titan's yield was superior. The partial and total LERs varied with the growing season and the treatments studied. The total LERs exceeded 1.0 at the two locations across seasons, ranging from 1.3 to 1.8. From these results, it can be concluded that resources were used more efficiently in the sorghum-cowpea intercrop at both Syferkuil and Ofcolaco. Furthermore, the intercropping system improved the photosynthetic rate, grain yield, and dry matter accumulation of sorghum and cowpea, depending on the growing conditions and cowpea density. Hence, the sorghum-cowpea intercropping system can be adopted as a climate-smart practice for sustainable production in Limpopo Province.
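The land equivalent ratio used above is the sum, over the component crops, of the partial ratios of intercrop yield to sole-crop yield. A minimal sketch with illustrative yields (not the trial's measured data):

```python
def land_equivalent_ratio(intercrop, sole):
    """Partial LER_i = Y_intercrop_i / Y_sole_i; total LER is their sum.

    A total LER > 1 means the intercrop uses land more efficiently than
    growing the same crops separately as sole cultures.
    """
    partial = {crop: intercrop[crop] / sole[crop] for crop in intercrop}
    return partial, sum(partial.values())

# Illustrative yields in t/ha (hypothetical numbers):
partial, total = land_equivalent_ratio(
    {"sorghum": 2.4, "cowpea": 0.9},
    {"sorghum": 3.0, "cowpea": 1.5},
)
print(total)  # 0.8 + 0.6 = 1.4, inside the 1.3-1.8 range reported above
```

A total LER of 1.4 would mean a sole-cropping system needs 40% more land to match the intercrop's output.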

Keywords: cowpea, climate-smart, grain sorghum, intercropping

Procedia PDF Downloads 221
343 Preliminary Studies of Antibiofouling Properties in Wrinkled Hydrogel Surfaces

Authors: Mauricio A. Sarabia-Vallejos, Carmen M. Gonzalez-Henriquez, Adolfo Del Campo-Garcia, Aitzibier L. Cortajarena, Juan Rodriguez-Hernandez

Abstract:

In this study, the formation of and the morphological differences between wrinkled hydrogel patterns obtained via the generation of surface instabilities were explored. Slight variations in the polymerization conditions produce important changes in the material composition and pattern structuration. The compounds were synthesized using three main components: an amphiphilic monomer, hydroxyethyl methacrylate (HEMA); a hydrophobic monomer, trifluoroethyl methacrylate (TFMA); and a hydrophilic crosslinking agent, poly(ethylene glycol) diacrylate (PEGDA). The first part of this study concerned the formation of wrinkled surfaces using only HEMA and PEGDA while varying the amount of water added to the reaction. The second part involved the gradual insertion of TFMA into the hydrophilic reaction mixture. Interestingly, manipulating the chemical composition of this hydrogel affects both the surface morphology and the physicochemical characteristics of the patterns, inducing transitions from one particular type of structure (wrinkles or ripples) to different ones (creases, folds, and crumples). Contact angle measurements show that the insertion of TFMA produces a slight decrease in the surface wettability of the samples, which nevertheless remain highly hydrophilic (contact angle below 45°). More interestingly, confocal Raman spectroscopy yields important information about the wrinkle formation mechanism. The procedure, involving two consecutive thermal and photopolymerization steps, leads to a “pseudo” two-layer system: upon photopolymerization, the surface is crosslinked to a higher extent than the bulk, and water evaporation drives the formation of wrinkled surfaces.
Finally, cellular and bacterial proliferation studies were performed on the samples, showing that the amount of TFMA included in each sample slightly affects the proliferation of both bacteria and cells; in the case of bacteria, the morphology of the sample also plays an important role, markedly reducing bacterial proliferation.

Keywords: antibiofouling properties, hydrophobic/hydrophilic balance, morphologic characterization, wrinkled hydrogel patterns

Procedia PDF Downloads 162
342 Study of Variation of Winds Behavior on Micro Urban Environment with Use of Fuzzy Logic for Wind Power Generation: Case Study in the Cities of Arraial do Cabo and São Pedro da Aldeia, State of Rio de Janeiro, Brazil

Authors: Roberto Rosenhaim, Marcos Antonio Crus Moreira, Robson da Cunha, Gerson Gomes Cunha

Abstract:

This work details the wind speed behavior within the cities of Arraial do Cabo and São Pedro da Aldeia, located in the Lakes Region of the State of Rio de Janeiro, Brazil. This region has one of the best potentials for wind power generation. In the interurban layer, wind conditions are very complex and depend on the physical geography, the size and orientation of the surrounding buildings and constructions, the population density, and the land use. In the same context, the fundamental surface parameter that governs the production of flow turbulence in urban canyons is the surface roughness. Such factors can influence the potential for wind power generation within the cities. Moreover, the use of wind on a small scale is not fully exploited because of the complexity of measuring wind flow inside cities, which makes this type of resource difficult to predict accurately. This study demonstrates how fuzzy logic can facilitate the assessment of this complex wind potential inside the cities. It presents a decision support tool and its ability to deal with inaccurate information using linguistic variables created by a heuristic method. It relies on previously published studies of the variables that influence wind speed in the urban environment. These variables were turned into verbal expressions used in the computer system, which facilitated the establishment of rules for fuzzy inference and integration with a smartphone application used in the research. The first part of the study describes the challenges of sustainable development, followed by the incentive policies for the use of renewable energy in Brazil. The next chapter presents the characteristics of the study area and the concepts of fuzzy logic. Data were collected in a field experiment using qualitative and quantitative assessment methods.
As a result, a map of various points within the studied cities is presented, with their wind viability evaluated by a decision support system using a multivariate classification method based on fuzzy logic.
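The fuzzy inference step described above can be illustrated with triangular membership functions over linguistic variables. The variable names, breakpoints, and the single rule below are invented for illustration; the study's actual rule base is built heuristically from the published urban-wind variables:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def wind_viability(speed_ms, roughness_m):
    # Hypothetical linguistic terms: "wind speed is high", "roughness is low".
    speed_high = tri(speed_ms, 4.0, 8.0, 12.0)
    roughness_low = tri(roughness_m, -0.5, 0.0, 1.0)
    # Mamdani-style AND: the rule fires with the minimum of its antecedents.
    return min(speed_high, roughness_low)

print(wind_viability(8.0, 0.5))  # 0.5: speed fully "high", roughness half "low"
```

In a full system, several such rules would be aggregated and defuzzified to yield the viability score mapped across the city points.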

Keywords: behavior of winds, wind power, fuzzy logic, sustainable development

Procedia PDF Downloads 293
341 A Spatial Perspective on the Metallized Combustion Aspect of Rockets

Authors: Chitresh Prasad, Arvind Ramesh, Aditya Virkar, Karan Dholkaria, Vinayak Malhotra

Abstract:

A solid-propellant rocket utilises a combination of a solid oxidizer and a solid fuel. Success in solid rocket motor design and development depends significantly on knowledge of the burning rate behaviour of the selected solid propellant under all motor operating conditions and design limit conditions. Most solid-motor rockets consist of a main engine along with multiple boosters that provide additional thrust to the space-bound vehicle. Though widely used, they have been eclipsed by liquid-propellant rockets because of the latter's better performance characteristics. The addition of a catalyst such as iron oxide, on the other hand, can drastically enhance the performance of a solid rocket. This investigation emulates the working of a solid rocket using sparklers and energized candles, with a central energized candle acting as the main engine and the surrounding sparklers acting as boosters. The energized candle is made of paraffin wax with magnesium filings embedded in its wick. The sparkler is made up of 45% barium nitrate, 35% iron, 9% aluminium, and 10% dextrin, with the remainder being boric acid. The magnesium in the energized candle and the combination of iron and aluminium in the sparkler act as catalysts and enhance the burn rates of both materials. This combustion of metallized propellants influences the regression rate of the subject candle. The experimental parameters explored here are the separation distance, systematically varied configurations, and layout symmetry. The major performance parameter under observation is the regression rate of the energized candle. The rate of regression is significantly affected by the orientation and configuration of the sparklers, which act as heat sources for the energized candle. The overall efficiency of any engine is the product of its thermal and propulsive efficiencies, and numerous efforts have been made to improve one or the other.
This investigation focuses on the orientation of the rocket motor design to maximize overall efficiency. The primary objective is to analyse variations in the flame spread rate of the energized candle, which resembles the solid propellant used in the first stage of rocket operation; this rate affects the specific impulse of a rocket, which in turn has a deciding impact on its time of flight. Another objective of this research is to determine the effectiveness of the key controlling parameters explored. The investigation also emulates the exhaust gas interactions of a solid rocket through concurrent ignition of the energized candle and sparklers, and their behaviour is analysed. Modern space programmes intend to explore the universe outside our solar system. To accomplish these goals, it is necessary to design a launch vehicle capable of providing sustained propulsion with better efficiency over long durations. The main motivation of this study is to enhance rocket performance and overall efficiency through better design and optimization techniques, which will play a crucial role in this human quest for knowledge.
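The regression rate observed above is, in its simplest average form, the length of candle consumed per unit burn time. A minimal sketch with illustrative numbers, not the experiment's measurements:

```python
def regression_rate(initial_mm, final_mm, burn_time_s):
    """Average regression rate (mm/s) of a burning candle sample."""
    return (initial_mm - final_mm) / burn_time_s

# A candle burning from 100 mm down to 40 mm in 120 s regresses at 0.5 mm/s.
print(regression_rate(100.0, 40.0, 120.0))
```

Comparing this rate across sparkler separation distances and configurations is how the heat-source influence described above would be quantified.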

Keywords: design modifications, improving overall efficiency, metallized combustion, regression rate variations

Procedia PDF Downloads 178
340 Thermal-Mechanical Analysis of a Bridge Deck to Determine Residual Weld Stresses

Authors: Evy Van Puymbroeck, Wim Nagy, Ken Schotte, Heng Fang, Hans De Backer

Abstract:

The knowledge of residual stresses in welded bridge components is essential to determine their effect on fatigue life behavior. The residual stresses of an orthotropic bridge deck are determined by simulating the welding process with finite element modelling. The stiffener is placed on top of the deck plate before welding. A chained thermal-mechanical analysis is set up to determine the distribution of residual stresses in the bridge deck. First, a thermal analysis is used to determine the temperatures of the orthotropic deck at different time steps during the welding process. Twin-wire submerged arc welding is used to construct the orthotropic plate. A double-ellipsoidal volume heat source model is used to describe the heat flow through the material for a moving heat source. The heat input is used to determine the heat flux, which is applied as a thermal load during the thermal analysis. The heat flux for each element is calculated at different time steps to simulate the passage of the welding torch at the considered welding speed. This results in a time-dependent heat flux that is applied as a thermal loading. Thermal material behavior is specified by assigning the material properties as a function of the high temperatures reached during welding. Isotropic hardening behavior is included in the model. The thermal analysis simulates the heat introduced into the two plates of the orthotropic deck and calculates the temperatures during the welding process. After the temperatures introduced during the welding process have been calculated in the thermal analysis, a subsequent mechanical analysis is performed. For the boundary conditions of the mechanical analysis, the actual welding conditions are considered. Before welding, the stiffener is connected to the deck plate by tack welds, which are implemented in the model. The deck plate is allowed to expand freely in the upward direction while it rests on a firm and flat surface.
This behavior is modelled using grounded springs. Furthermore, symmetry points and lines are used to prevent the model from moving freely in other directions. In the mechanical analysis, a mechanical material model is used, and the temperatures calculated during the thermal analysis are introduced as a time-dependent load. The connection of the elements of the two plates in the fusion zone is realized with a glued connection, which is activated when the welding temperature is reached. The mechanical analysis results in a distribution of the residual stresses, which is compared with results from the literature. The literature proposes uniform tensile yield stresses in the weld, whereas the finite element modelling showed tensile yield stresses at a short distance from the weld root or the weld toe. The chained thermal-mechanical analysis thus yields a distribution of residual weld stresses for an orthotropic bridge deck. In future research, the effect of these residual stresses on the fatigue life behavior of welded bridge components can be studied.
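The double-ellipsoidal volume heat source used in the thermal analysis is usually written in Goldak's form. One ellipsoidal quadrant of it can be sketched as below; the parameter values are illustrative, not those used in the study:

```python
import math

def goldak_flux(x, y, z, Q, a, b, c, f=1.0):
    """Power density (W/mm^3) of one quadrant of the Goldak double-ellipsoidal
    heat source, in the source's moving frame:

        q = 6*sqrt(3)*f*Q / (a*b*c*pi*sqrt(pi))
            * exp(-3*x^2/a^2 - 3*y^2/b^2 - 3*z^2/c^2)

    Q is the effective heat input, (a, b, c) the ellipsoid semi-axes, and f
    the fraction of heat deposited in this (front or rear) quadrant.
    """
    coeff = 6.0 * math.sqrt(3.0) * f * Q / (a * b * c * math.pi * math.sqrt(math.pi))
    return coeff * math.exp(-3.0 * (x / a) ** 2 - 3.0 * (y / b) ** 2 - 3.0 * (z / c) ** 2)

# Peak flux at the source centre for an illustrative 10 kW effective input:
print(goldak_flux(0.0, 0.0, 0.0, Q=10000.0, a=4.0, b=4.0, c=8.0))
```

Evaluating this flux at each element's position relative to the torch, at each time step, produces the time-dependent thermal loading described above.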

Keywords: finite element modelling, residual stresses, thermal-mechanical analysis, welding simulation

Procedia PDF Downloads 171
339 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical, variably saturated CW is carried out here by implementing the Richards equation by means of a mixed hybrid finite element method (MHFEM), which is particularly well adapted to the simulation of heterogeneous media, together with the van Genuchten-Mualem parametrization. For validation purposes, the MHFEM results were compared with those of HYDRUS (a software package based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic properties depend on water content, their estimation has been the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention must also be paid to boundary condition modeling (surface ponding or evaporation) in order to handle different sequences of rainfall-runoff events. Proper parameter identification would require large field datasets. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust, and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties, namely the variational data assimilation technique introduced by Le Dimet and Talagrand.
To that end, automatic differentiation (AD) is applied to augment the computer codes with derivative computations; very little effort is needed to obtain the differentiated code using the online Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the edge of the urban water stream Ostwaldergraben. Identification experiments were conducted by comparing measured and computed piezometric heads by means of a least squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
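The van Genuchten-Mualem parametrization referred to above relates effective saturation and hydraulic conductivity to the pressure head through the shape parameters α and n. A minimal sketch of the two closed-form relations (the parameter values in the example are illustrative, not the identified ones):

```python
def vg_saturation(h, alpha, n):
    """van Genuchten effective saturation Se(h) = [1 + (alpha*|h|)^n]^(-m),
    with m = 1 - 1/n, for pressure head h < 0; Se = 1 when h >= 0 (saturated)."""
    if h >= 0.0:
        return 1.0
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * abs(h)) ** n) ** (-m)

def mualem_conductivity(Se, Ks, n, l=0.5):
    """Mualem conductivity: K(Se) = Ks * Se^l * [1 - (1 - Se^(1/m))^m]^2,
    with m = 1 - 1/n and pore-connectivity parameter l (often 0.5)."""
    m = 1.0 - 1.0 / n
    return Ks * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Illustrative parameters: alpha = 0.01 1/cm, n = 2, at h = -100 cm of head.
Se = vg_saturation(-100.0, alpha=0.01, n=2.0)
print(Se, mualem_conductivity(Se, Ks=1.0, n=2.0))
```

The strong non-linearity of K in Se is what makes the piezometric heads so sensitive to α and n, as the sensitivity analysis above indicates.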

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 163
338 The Hague Abduction Convention and the Egyptian Position: Strategizing for a Law Reform

Authors: Abdalla Ahmed Abdrabou Emam Eldeib

Abstract:

For more than a century, the Hague Conference has tackled issues in the most challenging areas of private international law, including family law. Its actions in the realm of international child abduction have been remarkable in two ways during the last two decades. First, on October 25, 1980, the Hague Convention on the Civil Aspects of International Child Abduction (the Convention) was promulgated as an unusually inventive and powerful tool. Second, the Convention has become increasingly prominent in the development of international child law. By that time, overseas travel had grown more convenient, and more couples were marrying or travelling across national lines. At the same time, parental separation and divorce had increased, leading to a rise in international child custody battles. The convention the drafters produced avoids legal quagmires and addresses extra-legal issues well: it restores the child to its place of habitual residence upon establishing that the child was unlawfully abducted from that place or, alternatively, wrongfully retained abroad after an authorized visit. A custody dispute over a child is often followed by the child's abduction or unlawful relocation to another country by the non-custodial parent or other persons. If a child's custodial parent lives outside Egypt, the child may be abducted and brought to Egypt, which naturally raises the questions of what law should apply and what legal norms should be followed in hearing individual cases. This study comprehensively evaluates the relevant provisions of the Hague Child Abduction Convention and the current situation in Egypt, including which law is applicable to child custody. In addition, this research details and focuses on the position of cross-border parental child abduction in Egypt.
Moreover, it examines Islamic law in detail in comparison with the Hague Convention on child custody, discusses the treatment of this matter by Islamic countries in general and by Egypt in particular, and reviews the criticism directed at Egypt regarding the application and implementation of child custody decisions. The present research supports this approach with non-doctrinal techniques, including surveys, interviews, and dialogues. An important objective of this research is to examine the factors that contribute to parental child abduction. Family court attorneys and other interested parties serve as the target audience from whom data are collected: a survey questionnaire was developed and sent to the target population in order to collect data for empirical testing to validate the identified critical factors in parental child abduction. The main findings of this study concern overcoming the reservations of many Muslim countries to joining the Hague Convention with regard to child custody, and clarifying the practical problems of implementation in cases where a child is abducted by one parent and taken outside the borders of the country. Finally, this study provides suggestions for reforming the current Egyptian family law to make it an effective and efficient dispute resolution mechanism, and considers the possibility of joining the Hague Convention.

Keywords: Egyptian family law, Hague Child Abduction Convention, child custody, cross-border parental child abduction in Egypt

Procedia PDF Downloads 70
337 The Microstructure and Corrosion Behavior of High Entropy Metallic Layers Electrodeposited by Low and High-Temperature Methods

Authors: Zbigniew Szklarz, Aldona Garbacz-Klempka, Magdalena Bisztyga-Szklarz

Abstract:

Typical metallic alloys are based on one major alloying component, with other elements added to improve or modify certain properties, above all the mechanical ones. However, in 1995 a new concept of metallic alloys was described and defined. High Entropy Alloys (HEA) contain at least five alloying elements, each in an amount from 5 to 20 at.%. Common features of this type of alloy are the absence of intermetallic phases, high homogeneity of the microstructure, and a unique chemical composition, which leads to materials with very high strength indicators, stable structures (also at high temperatures), and excellent corrosion resistance. Hence, HEA can successfully substitute for typical metallic alloys in various applications where sufficiently high properties are desirable. HEA are fabricated in a few ways: 1/ from the liquid phase, i.e., casting (usually arc melting); 2/ from the solid phase, i.e., powder metallurgy (sintering methods preceded by mechanical synthesis); 3/ from the gas phase, e.g., sputtering; or 4/ other deposition methods such as electrodeposition from liquids. Different production methods create different HEA microstructures, which can entail differences in their properties. The last two methods also allow coatings with HEA structures to be obtained, hereinafter referred to as High Entropy Films (HEF). With reference to the above, the crucial aim of this work was the optimization of the manufacturing process of multi-component metallic layers (HEF) by low- and high-temperature electrochemical deposition (ED). The low-temperature deposition process was carried out at ambient or elevated temperature (up to 100 °C) in an organic electrolyte. The high-temperature electrodeposition (several hundred degrees Celsius), in turn, allowed the HEF layer to be formed by electrochemical reduction of metals from molten salts. The basic chemical composition of the coatings was CoCrFeMnNi (known as Cantor’s alloy). 
However, it was modified with other selected elements such as Al or Cu. The optimization of parameters that yield as homogeneous and equimolar a HEF composition as possible is the main result of the presented studies. In order to analyse and compare the microstructures, SEM/EBSD, TEM, and XRD techniques were employed. Moreover, the determination of the corrosion resistance of the CoCrFeMnNi(Cu or Al) layers in selected electrolytes (i.e., organic and non-organic liquids) was no less important than the above-mentioned objectives.

Keywords: high entropy alloys, electrodeposition, corrosion behavior, microstructure

Procedia PDF Downloads 80
336 Evaluation of NoSQL in the Energy Marketplace with GraphQL Optimization

Authors: Michael Howard

Abstract:

The growing popularity of electric vehicles in the United States requires an ever-expanding infrastructure of commercial DC fast charging stations. The U.S. Department of Energy estimates 33,355 publicly available DC fast charging stations as of September 2023. By comparison, 115,370 gasoline stations were operating in the United States in 2017, making them far more ubiquitous than DC fast chargers. Range anxiety is an important impediment to the adoption of electric vehicles and is even more relevant in underserved regions of the country. The peer-to-peer energy marketplace helps fill the demand by allowing private home and small business owners to rent out their 240 Volt, level-2 charging facilities. The existing outlets, made publicly accessible, are wrapped with a Cloud-connected microcontroller managing security and charging sessions. These microcontrollers act as Edge devices communicating with a Cloud message broker, while both buyer and seller users interact with the framework via a web-based user interface. The database storage used by the marketplace framework is a key component in both the cost of development and the performance that contributes to the user experience. A traditional storage solution is the SQL database. The architecture and query language have been in existence since the 1970s and are well understood and documented. The Structured Query Language supported by the query engine provides fine-grained user query conditions. However, difficulty in scaling across multiple nodes and the cost of its server-based compute have resulted in a trend over the last 20 years towards NoSQL, serverless approaches. In this study, we evaluate NoSQL vs. SQL solutions through a comparison of the Google Cloud Firestore and Cloud SQL MySQL offerings. The comparison pits Google's serverless, document-model, non-relational NoSQL offering against its server-based, table-model, relational SQL service. The evaluation is based on query latency, flexibility/scalability, and cost criteria. 
Through benchmarking and analysis of the architecture, we determine whether Firestore can support the energy marketplace storage needs and if the introduction of a GraphQL middleware layer can overcome its deficiencies.
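The core idea behind such a GraphQL middleware layer can be sketched in a few lines. The snippet below is purely illustrative (an in-memory dictionary stands in for a Firestore collection, and all names are invented): a resolver serves a multi-document query from one batched read instead of N separate document fetches, which is the kind of per-document read cost a middleware layer can mitigate.

```python
# In-memory stand-in for a Firestore "chargers" collection (illustrative data).
CHARGERS = {
    "c1": {"owner": "alice", "volts": 240, "level": 2},
    "c2": {"owner": "bob", "volts": 240, "level": 2},
}

READ_COUNT = 0  # counts simulated round trips to the document store


def get_many(ids):
    """One batched read, as a DataLoader-style GraphQL middleware would issue."""
    global READ_COUNT
    READ_COUNT += 1
    return {i: CHARGERS[i] for i in ids if i in CHARGERS}


def resolve_chargers(ids, fields):
    """Resolve a query shaped like: { chargers(ids: [...]) { volts level } }."""
    docs = get_many(ids)  # single batched fetch instead of len(ids) fetches
    return [{f: doc[f] for f in fields} for doc in docs.values()]


result = resolve_chargers(["c1", "c2"], ["volts", "level"])
```

The point of the sketch is only the batching: however many documents the query touches, the store sees one request, so latency stops scaling linearly with result size.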

Keywords: non-relational, relational, MySQL, mitigate, Firestore, SQL, NoSQL, serverless, database, GraphQL

Procedia PDF Downloads 62
335 Re-identification Risk and Mitigation in Federated Learning: Human Activity Recognition Use Case

Authors: Besma Khalfoun

Abstract:

In many current Human Activity Recognition (HAR) applications, users' data is frequently shared and centrally stored by third parties, posing a significant privacy risk. This practice makes these entities attractive targets for extracting sensitive information about users, including their identity, health status, and location, thereby directly violating users' privacy. To tackle the issue of centralized data storage, a relatively recent paradigm known as federated learning has emerged. In this approach, users' raw data remains on their smartphones, where they train the HAR model locally. However, users still share updates of their local models originating from raw data. These updates are vulnerable to several attacks designed to extract sensitive information, such as determining whether a data sample is used in the training process, recovering the training data with inversion attacks, or inferring a specific attribute or property from the training data. In this paper, we first introduce PUR-Attack, a parameter-based user re-identification attack developed for HAR applications within a federated learning setting. It involves associating anonymous model updates (i.e., local models' weights or parameters) with the originating user's identity using background knowledge. PUR-Attack relies on a simple yet effective machine learning classifier and produces promising results. Specifically, we have found that by considering the weights of a given layer in a HAR model, we can uniquely re-identify users with an attack success rate of almost 100%. This result holds when considering a small attack training set and various data splitting strategies in the HAR model training. Thus, it is crucial to investigate protection methods to mitigate this privacy threat. Along this path, we propose SAFER, a privacy-preserving mechanism based on adaptive local differential privacy. 
Before sharing the model updates with the FL server, SAFER adds the optimal noise based on the re-identification risk assessment. Our approach can achieve a promising tradeoff between privacy, in terms of reducing re-identification risk, and utility, in terms of maintaining acceptable accuracy for the HAR model.
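As a toy illustration of the two ideas above (this is not the authors' PUR-Attack or SAFER code; all numbers and names are invented), the sketch below re-identifies simulated users from their model-update vectors with a simple nearest-centroid classifier. The `noise` parameter marks where a SAFER-style defence would inject calibrated Laplace noise into the update before it is shared.

```python
import math
import random

random.seed(0)
DIM = 8  # number of weights in the observed layer


def laplace(scale):
    """Sample from a zero-mean Laplace distribution (inverse-CDF method)."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)


def user_update(uid, noise=0.0):
    """Simulated layer weights: each user has a characteristic offset.
    A SAFER-style defence would raise `noise` until re-identification fails."""
    base = [uid * 1.0 + random.gauss(0, 0.05) for _ in range(DIM)]
    return [w + noise * laplace(1.0) for w in base]


def nearest_centroid(x, centroids):
    """Attacker's classifier: assign the update to the closest known user."""
    return min(
        centroids,
        key=lambda uid: sum((a - b) ** 2 for a, b in zip(x, centroids[uid])),
    )


# Attacker's background knowledge: one labelled update per user.
centroids = {uid: user_update(uid) for uid in range(5)}

# Without noise, the anonymous updates are re-identified essentially perfectly.
hits = sum(nearest_centroid(user_update(uid), centroids) == uid for uid in range(5))
```

Even this trivial classifier separates users cleanly when their weight vectors carry stable per-user structure, which mirrors the near-100% success rate reported above for PUR-Attack.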

Keywords: federated learning, privacy risk assessment, re-identification risk, privacy preserving mechanisms, local differential privacy, human activity recognition

Procedia PDF Downloads 11
334 Generating 3D Battery Cathode Microstructures using Gaussian Mixture Models and Pix2Pix

Authors: Wesley Teskey, Vedran Glavas, Julian Wegener

Abstract:

Generating battery cathode microstructures is an important area of research, given the proliferation of automotive batteries. Currently, finite element analysis (FEA) is often used to simulate battery cathode microstructures before physical batteries are manufactured and tested to verify the simulation results. Unfortunately, a key drawback of FEA is that it is very slow in terms of computational runtime. Generative AI offers the key advantage of speed compared to FEA, and because of this, it can evaluate very large numbers of candidate microstructures. Given AI-generated candidate microstructures, a subset of the promising ones can be selected for further validation using FEA. Leveraging the speed advantage of AI allows for a better final microstructure selection, because high speed allows many more candidates to be evaluated. In the approach presented, 3D battery cathode candidate microstructures are generated using Gaussian Mixture Models (GMMs) and pix2pix. The approach first uses a GMM to generate a population of spheres (representing the “active material” of the cathode). Once spheres have been sampled from the GMM, they are placed within a microstructure. Subsequently, the pix2pix model sweeps iteratively over the 3D microstructure, slice by slice, and adds details to determine which portions of the microstructure will become electrolyte and which will become binder. In this manner, each subsequent slice of the microstructure is evaluated using pix2pix, where the inputs into pix2pix are the previously processed layers of the microstructure. By feeding previously fully processed layers into pix2pix, the model can be used to ensure that candidate microstructures represent a realistic physical reality. 
More specifically, in order for the microstructure to represent a realistic physical reality, the locations of electrolyte and binder in each layer must reasonably match those in previous layers to ensure geometric continuity. Using the approach outlined above, a 10x to 100x speed increase was possible when generating candidate microstructures with AI compared to an FEA-only approach. A key metric for evaluating microstructures was the battery specific power that the microstructures would be able to produce. The best generative AI result obtained was a 12% increase in specific power for a candidate microstructure compared to what an FEA-only approach was capable of producing. This 12% increase in specific power was verified by FEA simulation.
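The first stage described above, sampling active-material spheres from a GMM and placing them in the domain, can be sketched briefly. The mixture weights, radii, and box size below are made-up numbers, not the paper's fitted model; the sketch only shows the sampling mechanics.

```python
import random

random.seed(42)

# (weight, mean radius in um, std dev in um) per mixture component (illustrative)
GMM = [(0.7, 3.0, 0.4), (0.3, 6.0, 0.8)]
BOX = 50.0  # edge length of the cubic microstructure domain, um


def sample_radius():
    """Draw a sphere radius from the Gaussian mixture (component, then Gaussian)."""
    r = random.random()
    acc = 0.0
    for weight, mu, sigma in GMM:
        acc += weight
        if r <= acc:
            return max(0.1, random.gauss(mu, sigma))  # clamp to keep radii positive
    return max(0.1, random.gauss(GMM[-1][1], GMM[-1][2]))


def sample_sphere():
    """Sample a radius, then place the sphere fully inside the box."""
    rad = sample_radius()
    pos = [random.uniform(rad, BOX - rad) for _ in range(3)]
    return pos, rad


spheres = [sample_sphere() for _ in range(200)]
```

In the pipeline described above, a population like `spheres` would then be voxelized and handed to pix2pix slice by slice to assign electrolyte and binder around the active material.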

Keywords: finite element analysis, gaussian mixture models, generative design, Pix2Pix, structural design

Procedia PDF Downloads 107
333 In-situ and Laboratory Characterization of Fiji Lateritic Soils

Authors: Faijal Ali, Darga Kumar N., Ravikant Singh, Rajnil Lal

Abstract:

Fiji has three major landforms: plains, low mountains, and hills. The lowland soils are formed on beach sand. Fiji soils contain high concentrations of iron (III) and aluminum oxides and hydroxides, which give the soil a reddish or yellowish colour. This paper presents the characterization of lateritic soils collected from different locations along the national highway in Viti Levu, Fiji Islands. The research has been carried out mainly to understand the physical and strength properties of these soils and to assess their suitability for highway and building construction. Field tests such as the dynamic cone penetrometer test, field vane shear test, and field density test, and laboratory tests such as the unconfined compression test, compaction, grain size analysis, and Atterberg limits were conducted. The test results are analyzed and presented. From the results, it is revealed that the soils contain more than 80% silt and clay, with 5 to 15% fine to medium sand. The dynamic cone penetrometer results showed similar penetration resistance down to 3 m depth. For the first 1 m of depth, the rate of penetration is about 300 mm per 3 to 4 blows. At all sites it is further noticed that, for the same number of blows, the rate of penetration at depths beyond 1.5 m decreases compared to the topsoil. From the penetration resistance measured through the dynamic cone penetrometer test, the California bearing ratio and allowable bearing capacity are 4 to 5% and 50 to 100 kPa for the top 1 m layer, and below 1 m these values increase. The California bearing ratio of these soils below 1 m depth is on the order of 10% to 20%. The safe bearing capacity of these soils below 1 m and up to 3 m depth varies from 150 kPa to 250 kPa. The field vane shear strength was measured within a depth of 1 m from the surface, and the values were broadly similar, varying from 60 kPa to 120 kPa. The liquid limit and plastic limit of these soils are in the ranges of 40 to 60% and 20 to 25%, respectively. 
Overall, it is found that the top 1 m of soil along the national highway in the majority of places possesses a soft to medium stiff behavior with low to medium bearing capacity as well as low California bearing ratio values. It is recommended to ascertain the behavior of these soils in terms of geotechnical parameters before taking up any construction activity.
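CBR values are typically estimated from DCP results through an empirical correlation. The abstract does not state which correlation the authors used; the sketch below uses a widely cited form (CBR = 292 / DPI^1.12, with the DCP penetration index DPI in mm/blow, as in the US Army Corps of Engineers correlation). Estimates from such correlations vary with cone geometry and soil type, so the numbers here illustrate only the trend that slower penetration implies higher CBR.

```python
def cbr_from_dcp(penetration_mm, blows):
    """Estimate CBR (%) from a DCP penetration index (mm per blow),
    using the empirical form CBR = 292 / DPI**1.12 (one common correlation)."""
    dpi = penetration_mm / blows  # mm of penetration per hammer blow
    return 292.0 / dpi ** 1.12


# Roughly 300 mm per 3-4 blows in the top metre, as reported above.
top_layer = cbr_from_dcp(300, 3.5)   # DPI ~ 86 mm/blow: soft material, low CBR
deeper_layer = cbr_from_dcp(300, 10)  # slower penetration at depth: higher CBR
```

With this particular correlation the top-layer estimate comes out lower than the 4 to 5% reported above, which is expected: different published DCP-CBR correlations give noticeably different absolute values, while all preserve the inverse relation between penetration rate and CBR.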

Keywords: California bearing ratio, dynamic cone penetrometer test, field vane shear, unconfined compression strength

Procedia PDF Downloads 186
332 The Effect of Teachers' Personal Values on the Perceptions of the Effective Principal and Student in School

Authors: Alexander Zibenberg, Rima’a Da’As

Abstract:

Individuals are naturally inclined to classify people as leaders and followers. They utilize cognitive structures or prototypes specifying the traits and abilities that characterize the effective leader (implicit leadership theories) and the effective follower in an organization (implicit followership theories). Thus, the present study offers insights into how teachers' personal values (self-enhancement and self-transcendence) explain preferences regarding styles of the effective leader (i.e., the principal) and assumptions about the traits and behaviors that characterize effective followers (i.e., students). Beyond the direct effect on perceptions of effective types of leader and follower, the present study argues that values may also interact with organizational and personal contexts in influencing perceptions. Thus, the authors suggest that a teacher's managerial position may moderate the relationships between personal values and perceptions of the effective leader and follower. Specifically, two key questions are addressed: (1) Is there a relationship between personal values and perceptions of the effective leader and effective follower? and (2) Are these relationships stable, or could they change across different contexts? Two hundred fifty-five Israeli teachers participated in this study, completing questionnaires about the effective student and the effective principal. Results of structural equation modeling (SEM) with maximum likelihood estimation showed, first, that the model fit the data well. Second, a positive relationship was found between self-enhancement and the anti-prototypes of the effective principal and the effective student. The relationships between the self-transcendence value and both perceptions were significant as well: self-transcendence was positively related to the way teachers perceive the prototypes of the effective principal and the effective student. 
Besides, the authors found that teachers' managerial position moderates these relationships. The article contributes to the literature both on perceptions and on personal values. Although several earlier studies explored implicit leadership theories and implicit followership theories, personality characteristics (values) have garnered less attention in this matter. This study shows that personal values, which are deeply rooted, abstract motivations that guide, justify, or explain attitudes, norms, opinions, and actions, explain differences in the perception of the effective leader and follower. The results advance the theoretical understanding of the relationship between personal values and individuals’ perceptions in organizations. An additional contribution of this study is the use of the teacher's managerial position to explain a potential boundary condition on the translation of personal values into outcomes. The findings suggest that, through the management process in the organization, teachers acquire knowledge and skills which augment their ability (beyond their personal values) to predict perceptions of the ideal types of principal and student. The study elucidates the unique role of personal values in understanding organizational thinking. It seems that personal values might explain differences in individual preferences for an organizational paradigm (mechanistic vs. organic).

Keywords: implicit leadership theories, implicit followership theories, organizational paradigms, personal values

Procedia PDF Downloads 157
331 Controlled Doping of Graphene Monolayer

Authors: Vedanki Khandenwal, Pawan Srivastava, Kartick Tarafder, Subhasis Ghosh

Abstract:

We present here the experimental realization of controlled doping of graphene monolayers through charge transfer by trapping selected organic molecules between the graphene layer and the underlying substrate. This charge transfer between graphene and the trapped molecule leads to controlled n-type or p-type doping in monolayer graphene (MLG), depending on whether the trapped molecule acts as an electron donor or an electron acceptor. Doping controllability has been validated by shifts in the corresponding Raman peak positions and in the Dirac points. In the transfer characteristics of field effect transistors, a significant shift of the Dirac point towards the positive or negative gate voltage region provides the signature of p-type or n-type doping of graphene, respectively, as a result of the charge transfer between graphene and the organic molecules trapped beneath it. In order to facilitate the charge transfer interaction, it is crucial for the trapped molecules to be situated in close proximity to the graphene surface, as demonstrated by findings in Raman and infrared spectroscopies. However, the mechanism responsible for this charge transfer interaction has remained unclear at the microscopic level. It is generally accepted that the dipole moment of adsorbed molecules plays a crucial role in determining the charge-transfer interaction between molecules and graphene. However, our findings clearly illustrate that the doping effect depends primarily on the reactivity of the constituent atoms of the adsorbed molecules rather than just their dipole moment. This has been illustrated by trapping various molecules at the graphene−substrate interface. Dopant molecules such as acetone (containing highly reactive oxygen atoms) promote adsorption across the entire graphene surface. In contrast, molecules with less reactive atoms, such as acetonitrile, tend to adsorb at the edges due to the presence of reactive dangling bonds. 
In the case of low-dipole-moment molecules like toluene, there is no substantial adsorption anywhere on the graphene surface. Observation of (i) the emergence of the Raman D peak exclusively at the edges for trapped molecules without reactive atoms and throughout the entire basal plane for those with reactive atoms, and (ii) variations in the density of molecules (with and without reactive atoms) attached to graphene with their respective dipole moments provides compelling evidence to support our claim. Additionally, these observations were supported by first-principles density functional calculations.

Keywords: graphene, doping, charge transfer, liquid phase exfoliation

Procedia PDF Downloads 65
330 Design of the Ice Rink of the Future

Authors: Carine Muster, Prina Howald Erika

Abstract:

Today's ice rinks are major energy consumers for the production and maintenance of ice. At the same time, users demand that the other rooms be tempered or heated. The building complex must therefore provide both cooled and heated zones, which does not readily translate into carbon-zero ice rinks. The study analyses how the civil engineering sector can contribute significantly to minimizing greenhouse gas emissions and optimizing synergies across an entire ice rink complex. The analysis focused on three distinct aspects: the layout, including the volumetric arrangement of the premises present in an ice rink; the materials chosen, favouring the most ecological structural approach possible; and construction methods based on innovative solutions to reduce the carbon footprint. The first aspect shows that the organization of the interior volumes and the shape of the rink play a significant role. A good layout makes the use and operation of the premises as efficient as possible, thanks to the differentiation between heated and cooled volumes, while limiting heat loss between the different rooms. The sprayed concrete method, which is still little known, proves that it is possible to achieve the strength of traditional concrete for the load-bearing and non-load-bearing walls of the ice rink by using materials excavated from the construction site, providing a more ecological and sustainable solution. The installation of an empty crawl space underneath the ice floor, making it independent of the rest of the structure, provides a natural insulating layer, preventing the transfer of cold to the rest of the structure and reducing energy losses. The addition of active pipes as part of the foundation of the ice floor, coupled with a suitable system, provides warmth in the winter and storage in the summer, all thanks to the natural heat in the ground. 
In conclusion, this study provides construction recommendations for future ice rinks with a significantly reduced energy demand, using some simple preliminary design concepts. By optimizing the layout, materials, and construction methods of ice rinks, the civil engineering sector can play a key role in reducing greenhouse gas emissions and promoting sustainability.

Keywords: climate change, energy optimization, green building, sustainability

Procedia PDF Downloads 67
329 Lake Water Surface Variations and Its Influencing Factors in Tibetan Plateau in Recent 10 Years

Authors: Shanlong Lu, Jiming Jin, Xiaochun Wang

Abstract:

The Tibetan Plateau has the largest number of high-elevation inland lakes on the planet. These numerous large lakes are mostly in a natural state and are little affected by human activities. Their shrinking or expansion can truly reflect regional climate and environmental changes, and they are sensitive indicators of global climate change. However, because the plateau is sparsely populated and its natural conditions are harsh, it is difficult to obtain lake change data effectively, which has limited understanding of the temporal and spatial processes of lake water changes and their influencing factors. Using MODIS (Moderate Resolution Imaging Spectroradiometer) MOD09Q1 surface reflectance images as basic data, this study produced an 8-day lake water surface dataset of the Tibetan Plateau from 2000 to 2012 at 250 m spatial resolution, with a lake water surface extraction method that combines buffer analysis of lake boundaries with lake-by-lake determination of segmentation thresholds. Based on the dataset, lake water surface variations and their influencing factors were then analyzed, using four typical natural geographical zones (Eastern Qinghai and Qilian, Southern Qinghai, Qiangtang, and Southern Tibet) and the watersheds of the ten largest lakes (Qinghai, Siling Co, Namco, Zhari NamCo, Tangra Yumco, Ngoring, UlanUla, Yamdrok Tso, Har, and Gyaring) as the analysis units. The accuracy analysis indicates that, compared with the water surface data of 134 sample lakes extracted from 30 m Landsat TM (Thematic Mapper) images, the average overall accuracy of the lake water surface dataset is 91.81%, with average commission and omission errors of 3.26% and 5.38%; the results also show a strong linear correlation (R²=0.9991) with the global MODIS water mask dataset, with an overall accuracy of 86.30%; and the lake area difference between the Second National Lake Survey and this study is only 4.74%. 
This study provides a reliable dataset for lake change research on the plateau in the recent decade. The trend and influencing factor analysis indicates that the total water surface area of lakes on the plateau increased overall, but only lakes with areas larger than 10 km² had statistically significant increases. Furthermore, lakes with areas larger than 100 km² experienced an abrupt change in 2005. In addition, the annual average precipitation of Southern Tibet and Southern Qinghai experienced significant increasing and decreasing trends, with corresponding abrupt changes in 2004 and 2006, respectively. The annual average temperature of Southern Tibet and Qiangtang showed a significant increasing trend with an abrupt change in 2004. The major driver of lake water surface variation in Eastern Qinghai and Qilian, Southern Qinghai, and Southern Tibet is precipitation change, while in Qiangtang it is temperature variation.
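The extraction method described above (a buffer around each lake's known boundary, plus a per-lake reflectance threshold) can be illustrated schematically. The grid, the buffer mask, and the threshold below are invented toy values; the sketch only shows the per-lake classification step, exploiting the fact that water is dark in the MOD09Q1 red/NIR bands.

```python
WATER_THRESHOLD = 0.10  # per-lake reflectance threshold (illustrative value)

# 5x5 toy reflectance image: low values = water, high values = land
image = [
    [0.30, 0.28, 0.29, 0.31, 0.30],
    [0.29, 0.05, 0.06, 0.27, 0.30],
    [0.28, 0.04, 0.05, 0.06, 0.29],
    [0.30, 0.27, 0.05, 0.28, 0.31],
    [0.31, 0.30, 0.29, 0.30, 0.30],
]


def in_buffer(row, col, center=(2, 2), radius=2):
    """Stand-in for the boundary-buffer mask around one known lake."""
    return abs(row - center[0]) <= radius and abs(col - center[1]) <= radius


# Classify a pixel as water only if it lies inside the buffer AND is dark enough.
water_pixels = [
    (r, c)
    for r in range(5)
    for c in range(5)
    if in_buffer(r, c) and image[r][c] < WATER_THRESHOLD
]

# Area from pixel count: MOD09Q1 pixels are 250 m x 250 m (0.0625 km^2 each).
area_km2 = len(water_pixels) * (0.25 * 0.25)
```

Running the same counting over every 8-day composite, lake by lake with its own threshold, is what yields a water-surface time series of the kind analyzed above.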

Keywords: lake water surface variation, MODIS MOD09Q1, remote sensing, Tibetan Plateau

Procedia PDF Downloads 231
328 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry

Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard

Abstract:

The wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, poor or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and to device defectivity. This issue grows more important as transistor sizes shrink, and it mainly concerns high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method for Non-Destructive Testing applications, we have recently shown that it is also well suited to nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of experimental and modeling results. The proposed acoustic method is based on evaluating the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers were fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The DTI structures studied, manufactured on the wafer frontside, are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the Si wafer. In that case, the acoustic signal reflection occurs at both the bottom and the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed. 
The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo obtained through the reflection at Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water / ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated surface case, acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated surface case, the acoustic reflection is total with water (no liquid in DTI). The impalement of the liquid occurs for a specific surface tension but it is still partial for pure ethanol. DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. This high-frequency acoustic method sensitivity coupled with a FDTD propagative model thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.
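The sensitivity of this measurement rests on the acoustic impedance contrast at the trench bottom: a dry (air-filled) trench reflects nearly all of the incident wave, while a wetted one reflects noticeably less. The sketch below uses textbook impedance values for the normal-incidence pressure reflection coefficient, not the authors' calibration, purely to show the magnitude of the effect.

```python
def reflection_coefficient(z1, z2):
    """Normal-incidence pressure reflection coefficient at a z1 -> z2 interface:
    R = (z2 - z1) / (z2 + z1)."""
    return (z2 - z1) / (z2 + z1)


# Approximate longitudinal acoustic impedances, in MRayl (textbook values).
Z_SI = 19.6      # silicon
Z_WATER = 1.48   # water
Z_AIR = 0.0004   # air

r_dry = reflection_coefficient(Z_SI, Z_AIR)    # near-total reflection
r_wet = reflection_coefficient(Z_SI, Z_WATER)  # clearly reduced magnitude
```

The drop in |R| from ~1.0 (dry) to ~0.86 (water-filled) is what makes the electrical reflection coefficient of the transducer a usable probe of whether liquid has reached the trench bottom.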

Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor

Procedia PDF Downloads 327
327 Biochemical Effects of Low Dose Dimethyl Sulfoxide on HepG2 Liver Cancer Cell Line

Authors: Esra Sengul, R. G. Aktas, M. E. Sitar, H. Isan

Abstract:

Hepatocellular carcinoma (HCC) is a primary liver tumor that commonly develops in the chronically diseased liver. HepG2 is the most commonly used cell type in HCC studies. The main proteins remaining in blood serum after separation of plasma fibrinogen are albumin and globulin. The fact that albumin indicates hepatocellular damage and reflects the synthetic capacity of the liver was the main reason for our use of it. Alpha-fetoprotein (AFP) is an albumin-like embryonic globulin found in the embryonic cortex, cord blood, and fetal liver. It has been used as a marker in the follow-up of tumor growth in various malignant tumors and of the efficacy of surgical and medical treatments, making it a useful protein to examine alongside albumin. Having observed the morphological changes induced by dimethyl sulfoxide (DMSO) on HepG2 cells, we decided to investigate its biochemical effects. We examined the effects of low doses of DMSO, which is widely used in cell cultures, on albumin, AFP, and total protein. Material and Method: Cell culture: Medium was prepared using Dulbecco's Modified Eagle Medium (DMEM), Fetal Bovine Serum (FBS), Phosphate Buffered Saline (PBS), and trypsin maintained at -20 °C. Fixation of cells: HepG2 cells, which had developed appropriately by the end of the first week, were fixed with acetone. We stored our cells in PBS at +4 °C until fixation was completed. Area calculation: The areas of the cells were calculated in ImageJ (IJ). Microscope examination: The examination was performed with a Zeiss inverted microscope. Daytime photographs were taken at 40x, 100x, 200x, and 400x. Biochemical tests: Total protein and albumin in the serum samples were analyzed by spectrophotometric methods in an autoanalyzer; alpha-fetoprotein was analyzed by the ECLIA method. 
Results: When liver cancer cells were cultured in medium with 1% DMSO for 4 weeks, a significant difference was observed compared with the control group. Cell areas were reduced in the DMSO group compared to the control group, and the confluency ratio increased. The ability to form spheroids was also significantly higher in the DMSO group. Alpha-fetoprotein was lower than the values of a typical liver cancer patient, and the total protein amount rose into the reference range of a healthy individual. Because the albumin level was below the detection limit of the assay, numerical albumin results could not be obtained in the biochemical examinations. Since no single parameter was sufficient on its own, we used three parameters, and the results were positive when compared in parallel with the values of a normal healthy individual. We interpret all these results as suggesting that DMSO could be a useful supportive agent in the treatment of liver cancer. We hope to extend the study further by adding new parameters and genetic analyses, by increasing the number of samples, and by using DMSO as an adjunct agent in the treatment of liver cancer.

Keywords: hepatocellular carcinoma, HepG2, dimethyl sulfoxide, cell culture, ELISA

Procedia PDF Downloads 135
326 An Exploration of Special Education Teachers’ Practices in a Preschool Intellectual Disability Centre in Saudi Arabia

Authors: Faris Algahtani

Abstract:

Background: In Saudi Arabia, it is essential to know what practices are employed and considered effective by special education teachers working with preschool children with intellectual disabilities, as a prerequisite for identifying areas for improvement. Preschool provision for these children is expanding through a network of Intellectual Disability Centres (IDCs) while, in primary schools, a policy of inclusion is pursued and, in mainstream preschools, pilots have aimed at enhancing learning in readiness for primary schooling. This potentially widens the attainment gap between preschool children with and without intellectual disabilities, and influences the scope for improvement. Goal: The aim of the study was to explore special education teachers' practices, and their perceptions of those practices, for preschool children with intellectual disabilities in Saudi Arabia. Method: A qualitative interpretive approach was adopted in order to gain a detailed understanding of how special education teachers in an IDC operate in the classroom. Fifteen semi-structured interviews were conducted with experienced and qualified teachers. Data were analysed using thematic analysis, based on themes identified from the literature review together with new themes emerging from the data. Findings: American methods strongly influenced teaching practices, in particular TEACCH (Treatment and Education of Autistic and related Communication-handicapped Children), which emphasises structure, schedules and specific methods of teaching tasks and skills, and ABA (Applied Behaviour Analysis), which aims to improve behaviours and skills by concentrating on the detailed breakdown and teaching of task components and rewarding desired behaviours with positive reinforcement. The Islamic concept of education strongly influenced which teaching techniques were used and considered effective, and how they were applied.
Tensions were identified between the Islamic approach to disability, which accepts differences between human beings as created by Allah in order for people to learn to help and love each other, and the continuing stigmatisation of disability in many Arabic cultures, which means that parents who bring their children to an IDC often hope and expect that their children will be ‘cured’. Teaching methods were geared to reducing behavioural problems and social deficits rather than to developing the potential of the individual child, with some teachers recognizing the child’s need for greater freedom. Relationships with parents could in many instances be improved. Teachers considered both initial teacher education and professional development to be inadequate for their needs and the needs of the children they teach. This can be partly attributed to the separation of training and development of special education teachers from that of general teachers. Conclusion: Based on the findings, teachers’ practices could be improved by the inclusion of general teaching strategies, parent-teacher relationships and practical teaching experience in both initial teacher education and professional development. Coaching and mentoring support from carefully chosen special education teachers could assist the process, as could the presence of a second teacher or teaching assistant in the classroom.

Keywords: special education, intellectual disabilities, early intervention, early childhood

Procedia PDF Downloads 137
325 Preparation of Silver and Silver-Gold, Universal and Repeatable, Surface Enhanced Raman Spectroscopy Platforms from SERSitive

Authors: Pawel Albrycht, Monika Ksiezopolska-Gocalska, Robert Holyst

Abstract:

Surface Enhanced Raman Spectroscopy (SERS) is a technique of growing importance, not only in purely scientific research related to analytical chemistry: it finds more and more applications in broadly understood testing - medical, forensic, pharmaceutical, and food - and works well everywhere, on one condition: that the SERS substrates used give adequate enhancement, repeatability, and homogeneity of the SERS signal. This is a problem that has existed since the invention of the technique. Some laboratories use colloids of silver or gold nanoparticles as SERS amplifiers; others form rough silver or gold surfaces, but the results are generally either weak or unrepeatable. Furthermore, these structures are very often highly specific - they amplify the signal of only a small group of compounds. They work with some analytes, but only those used in the developer's laboratory; research on different compounds requires completely new SERS substrates. That underlay our decision to develop universal substrates for SERS spectroscopy. Each compound has a different affinity for silver and for gold, which have the best SERS properties, and this determines the signal obtained in the SERS spectrum. Our task was to create a platform that gives a characteristic 'fingerprint' for the largest possible number of compounds with very high repeatability - even at the expense of the enhancement factor (EF), since the possibility of repeating research results is of the utmost importance. The SERS substrates specified above are offered by the SERSitive company. The applied method is based on cyclic potentiodynamic electrodeposition of silver or silver-gold nanoparticles on the conductive surface of ITO-coated glass at a controlled temperature of the reaction solution.
Silver is supplied in the form of silver nitrate (AgNO₃, 10 mM), gold is derived from tetrachloroauric acid (10 mM), and sodium sulfite (Na₂SO₃, 5 mM) is used as the reducing agent. To limit and standardize the size of the SERS surface on which nanoparticles are deposited, photolithography is used: we protect the desired ITO-coated glass surface and then etch away the unprotected ITO layer, which prevents nanoparticles from settling at those sites. On the prepared surface, we carry out the process described above, obtaining a SERS surface with nanoparticles of 50-400 nm. The SERSitive platforms exhibit high sensitivity (EF = 10⁵-10⁶), homogeneity, and repeatability (70-80%).
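The enhancement factor (EF) quoted for the platforms is conventionally estimated by comparing a band's SERS intensity with its normal Raman intensity, each normalised by the number of probed molecules. A sketch of that standard calculation with purely illustrative numbers (not measured SERSitive data):

```python
def enhancement_factor(i_sers: float, n_sers: float,
                       i_raman: float, n_raman: float) -> float:
    """Analytical EF = (I_SERS / N_SERS) / (I_Raman / N_Raman)."""
    return (i_sers / n_sers) / (i_raman / n_raman)

# Illustrative numbers only: a band 100x stronger measured from
# 10^4-fold fewer molecules gives a 10^6 enhancement.
ef = enhancement_factor(i_sers=5.0e4, n_sers=1.0e6,
                        i_raman=5.0e2, n_raman=1.0e10)
print(f"EF = {ef:.1e}")  # EF = 1.0e+06
```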

Keywords: electrodeposition, nanoparticles, Raman spectroscopy, SERS, SERSitive, SERS platforms, SERS substrates

Procedia PDF Downloads 155
324 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried which cannot be reduced to predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since antenna parameters depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but antenna parameters and geometries are too broad to fit into a single function. So, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to studying and systematizing the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated over all possible conditions to obtain maxima and minima for the given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulation. HFSS is chosen for simulations and results.
MATLAB is used to generate the computations and combinations and to log the data; it is also used to apply machine learning algorithms and to plot the data for designing the algorithm. The number of combinations is too large to test manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and MATLAB's parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric constant; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach to automated antenna optimization for the Vivaldi antenna.
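The evolutionary loop described in the abstract - random candidates, an abstract fitness measure, selection, then recombination and mutation - can be outlined generically. In the sketch below, a toy analytic fitness stands in for an HFSS gain simulation, and the two design variables and their bounds are hypothetical:

```python
import random

# Hypothetical design variables and bounds: (slot width in mm, flare aperture in mm).
BOUNDS = [(0.1, 2.0), (20.0, 80.0)]

def fitness(ind):
    # Stand-in for an HFSS evaluation: a smooth surrogate peaking
    # at width = 0.5 mm, aperture = 50 mm (higher is better).
    w, a = ind
    return -((w - 0.5) ** 2 + ((a - 50.0) / 30.0) ** 2)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    # Uniform recombination: each gene drawn from either parent.
    return [ga if random.random() < 0.5 else gb for ga, gb in zip(a, b)]

def mutate(ind, rate=0.2):
    # Gaussian perturbation, clipped back into the design bounds.
    return [min(hi, max(lo, g + random.gauss(0, 0.1 * (hi - lo))))
            if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

random.seed(0)
pop = [random_individual() for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # truncation selection of the fittest candidates
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(best)  # converges near the surrogate optimum [0.5, 50.0]
```

In the real workflow, `fitness` would dispatch simulations through the HFSS API, with a population of candidates evaluated in parallel.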

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 109
323 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. The question therefore becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources for solving this problem, and models have been developed to detect buildings in optical satellite imagery. By and large, however, most models focus on affluent regions, where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge in detecting small buildings in densely populated regions is the spatial and spectral resolution of the optical sensor: densely packed buildings with similar construction materials are difficult to separate, both because of their similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span.
Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
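The trade-off exploited above - that a deliberately overfitted model can outperform a generalized one inside a narrow target region - can be illustrated with a toy curve-fitting analogue (NumPy only; this is not the Mask-RCNN pipeline itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# A narrow "region of interest": a few samples of a wiggly local signal.
x = np.linspace(0.0, 1.0, 12)
y = np.sin(6 * x) + 0.05 * rng.standard_normal(12)

general = np.polyfit(x, y, 2)   # low-capacity, "generalized" model
overfit = np.polyfit(x, y, 9)   # high-capacity model trained to near-interpolation

def rmse(coeffs):
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))

# Within the target region, the overfitted model tracks the data far more
# closely than the generalized one, even though it would extrapolate poorly.
print(rmse(general) > rmse(overfit))  # True
```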

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 169
322 The Seller’s Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences

Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur

Abstract:

In four studies, we examined whether sellers and buyers differ not only in subjective price levels for objects (i.e., the endowment effect) but also in their relative accuracy for objects varying in expected value (EV). If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory's value function for losses than for gains, as well as by the loss attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of pricing decisions with the objective value of objects on the part of sellers. Under loss attention, this characteristic should emerge only under certain boundary conditions. In Study 1, a published dataset was reanalyzed in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between their pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under time constraints. A published dataset was reanalyzed in which 84 participants provided selling and buying prices for monetary lotteries under three deliberation-time conditions (5, 10, 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time.
Post-hoc tests revealed main effects of perspective in the 5 s and 10 s deliberation-time conditions, but not in the 15 s condition. Thus, sellers' EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times, to test whether the effect decays with repeated presentation. The difference between buyers' and sellers' EV sensitivity was replicated across repeated task presentations. Study 4 examined the loss attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was done by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. The pattern of results is consistent with an attentional resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from the seller's perspective rather than the buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one's perspective in a trading negotiation may improve price accuracy.
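The relative EV-sensitivity measure used in Studies 1 and 2, the Spearman rank correlation between a participant's prices and the lotteries' expected values, is simply the Pearson correlation of ranks. A sketch with made-up prices (not data from the reanalysed studies), assuming no tied values:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for untied data: Pearson correlation of ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Illustrative lottery EVs and one participant's prices for those lotteries.
ev   = np.array([2.0, 5.0, 8.0, 11.0, 14.0])
sell = np.array([3.0, 6.0, 9.0, 12.0, 15.0])  # perfectly monotone in EV
buy  = np.array([1.0, 4.0, 2.0, 6.0, 5.0])    # noisier ordering, lower rho

print(spearman(ev, sell), spearman(ev, buy))
```

A higher coefficient for selling prices than buying prices is what "greater EV sensitivity for sellers" means operationally.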

Keywords: decision making, endowment effect, pricing, loss aversion, loss attention

Procedia PDF Downloads 344
321 Experimental and Modelling Performances of a Sustainable Integrated System of Conditioning for Bee-Pollen

Authors: Andrés Durán, Brian Castellanos, Marta Quicazán, Carlos Zuluaga-Domínguez

Abstract:

Bee-pollen is an apicultural food product with growing appreciation among consumers, given its remarkable nutritional and functional composition, in particular protein (24%), dietary fiber (15%), phenols (15-20 mg GAE/g), and carotenoids (600-900 µg/g). These properties depend on the geographical and climatic characteristics of the region where it is collected. Several countries are recognized for their pollen production, e.g., China, the United States, Japan, and Spain. Beekeepers collect bee-pollen using traps at the entrance of the hive; after removal of foreign particles and drying, the product is ready to be marketed. However, in countries located along the equator, the absence of seasons and a constant tropical climate throughout the year favor more rapid spoilage of foods with elevated water activity. These climatic conditions also trigger the proliferation of microorganisms and insects. Added to the fact that beekeepers usually do not have adequate processing systems for bee-pollen, this leads to deficiencies in the quality and safety of the product. In contrast, the Andean region of South America, lying on the equator, typically has a high production of bee-pollen of up to 36 kg/year/hive, four times higher than in countries with marked seasons. This region also lies at altitudes above 2500 meters above sea level, with extreme solar ultraviolet radiation all year long. As a defense mechanism against radiation, plants produce more secondary metabolites acting as antioxidant agents; hence, plant products such as bee-pollen contain remarkably more phenolics and carotenoids there than when collected in other places. Considering this, we proposed improving the bee-pollen processing facilities through technical modifications and implementing an integrated cleaning and drying system for the product in an apiary in the area.
The beehives were modified by installing alternative bee-pollen traps to avoid sources of contamination. The processing facility was modified according to Good Manufacturing Practices, implementing the combined use of a cabin dryer with temperature control and forced airflow and a greenhouse-type solar drying system. Additionally, for the separation of impurities, a cyclone-type system was implemented, complementary to screening equipment. With these modifications, a decrease in the content of impurities and in the microbiological load of bee-pollen was seen from the first stages, principally a reduction in the presence of molds and yeasts and in the number of impurities of animal origin. The use of the greenhouse solar dryer integrated with the cabin dryer allowed the processing of larger quantities of product with shorter waiting times in storage, reaching a moisture content of about 6% and a water activity lower than 0.6, appropriate for the conservation of bee-pollen. The contents of functional and nutritional compounds were not adversely affected; indeed, an increase of up to 25% in phenol content was observed, along with non-significant decreases in carotenoid content and antioxidant activity.

Keywords: beekeeping, drying, food processing, food safety

Procedia PDF Downloads 104
320 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level

Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown

Abstract:

‘Big data’ is a relatively new concept describing data so large and complex that they exceed the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources, such as administrative records, electronic health records, health insurance claims, and even smartphone health applications. Health data are viewed in Australia and internationally as highly sensitive, and strict ethical requirements must be met for their use in health research. These requirements differ markedly from those imposed on data from industry or other government sectors and may reduce the capacity of health data to be incorporated into the real-time demands of the big data environment. This ‘big data revolution’ is increasingly supported by national governments, which have invested significant funds in initiatives designed to develop and capitalize on big data and on methods for data integration using record linkage. The benefits to health following research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion the initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote-access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data.
The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides the record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best-practice ‘separation principle’ to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first ‘Proof of Concept’ project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. The project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for the planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations.
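Privacy-preserving record linkage of the kind mentioned above is commonly implemented by hashing name fragments (bigrams) into Bloom filters and comparing the filters instead of the names themselves. A minimal sketch of that general technique; the parameters are illustrative and this is not the Centre for Data Linkage's actual scheme:

```python
import hashlib

BITS = 128    # Bloom filter length (illustrative)
HASHES = 4    # hash functions per bigram (illustrative)

def bigrams(name: str) -> set:
    """Padded character bigrams of a name, e.g. 'ann' -> {'_a','an','nn','n_'}."""
    padded = f"_{name.lower()}_"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def bloom(name: str) -> set:
    """Set of Bloom filter bit positions set by every bigram of the name."""
    bits = set()
    for gram in bigrams(name):
        for k in range(HASHES):
            digest = hashlib.sha256(f"{k}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % BITS)
    return bits

def dice(a: set, b: set) -> float:
    """Dice similarity of two bit sets: 1.0 for identical, 0.0 for disjoint."""
    return 2 * len(a & b) / (len(a) + len(b))

# Similar names share most bigrams, so their encoded filters overlap heavily,
# allowing approximate matching without ever exchanging the names.
print(round(dice(bloom("johnson"), bloom("jonson")), 2),
      round(dice(bloom("johnson"), bloom("smith")), 2))
```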

Keywords: data integration, data linkage, health planning, health services research

Procedia PDF Downloads 216
319 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)

Authors: Ahmad Kayvani Fard, Yehia Manawi

Abstract:

Qatar’s primary source of fresh water is seawater desalination. Among the major processes commercially available, the most common large-scale techniques are multi-stage flash distillation (MSF), multi-effect distillation (MED), and reverse osmosis (RO). Although commonly used, these three processes are highly expensive owing to high energy input requirements and high operating costs, allied with the maintenance and stress induced on the systems by harsh alkaline media. Beyond cost, the environmental footprint of these desalination techniques is significant: damage to marine ecosystems, large land use, and the discharge of tons of greenhouse gases, amounting to a huge carbon footprint. A less energy-consuming technique based on membrane separation, sought to reduce both the carbon footprint and operating costs, is membrane distillation (MD). Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted growing attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared to other commercially available technologies (MSF and MED), and especially RO, are the reduction of membrane and module stress due to the absence of trans-membrane pressure, less impact of contaminant fouling on the distillate because only water vapor is transferred, the ability to use low-grade or waste heat from the oil and gas industries to heat the feed to the required temperature difference across the membrane, superior water quality, and relatively low capital and operating costs. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested.
The objectives of this study were to analyze the characteristics and morphology of a membrane suitable for DCMD through SEM imaging and contact angle measurement, and to study the quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and the laboratory data are used to compare DCMD distillate quality with that of other desalination techniques and standards. SEM analysis showed that the PTFE membrane used for the study has a contact angle of 127° and a highly porous surface, supported by a less porous, larger-pore-size PP membrane. ICP and IC analyses of the distillate showed that, for any feed salinity and feed temperatures up to 70 °C, the electrical conductivity of the distillate was less than 5 μS/cm with 99.99% salt rejection. DCMD proved to be a feasible and effective process capable of consistently producing high-quality distillate from very high-salinity feed (i.e., 100,000 mg/L TDS), with a substantial quality advantage over other desalination methods such as RO and MSF.
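The driving force described above, the water vapor pressure difference across the membrane, can be estimated from the feed and permeate temperatures with the Antoine equation for pure water (constants for the roughly 1-100 °C range; this pure-water approximation ignores the small vapor-pressure lowering of a saline feed):

```python
def p_vap_mmhg(t_celsius: float) -> float:
    """Vapor pressure of pure water via the Antoine equation (valid ~1-100 degC)."""
    A, B, C = 8.07131, 1730.63, 233.426
    return 10 ** (A - B / (C + t_celsius))

# Feed at 70 degC (as in the experiments above) against a ~25 degC permeate.
feed, permeate = 70.0, 25.0
dp = p_vap_mmhg(feed) - p_vap_mmhg(permeate)
print(f"driving force ~ {dp:.0f} mmHg")  # on the order of 200 mmHg
```

Raising the feed temperature increases this transmembrane vapor pressure difference roughly exponentially, which is why MD flux is so sensitive to the feed-side temperature.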

Keywords: membrane distillation, waste heat, seawater desalination, membrane, freshwater, direct contact membrane distillation

Procedia PDF Downloads 227