Search results for: hybrid solid electrolytes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3851

401 Impact of Terrorism as an Asymmetrical Threat on the State's Conventional Security Forces

Authors: Igor Pejic

Abstract:

The main focus of this research is to analyze the correlative links between terrorism as an asymmetrical threat and the consequences it leaves on conventional security forces. The methodology includes qualitative research methods focusing on comparative analysis of books, scientific papers, documents and other sources in order to deduce, explore and formulate the results of the research. With the coming of the 21st century and the rising multi-polar world order, new threats quickly emerged. The realist approach in international relations deems that relations among nations are in a constant state of anarchy, since there are no definitive rules and the distribution of power varies widely. International relations are further characterized by egoistic and self-oriented human nature, anarchy or the absence of a higher government, security concerns and a lack of morality. The asymmetry of power is also reflected in countries' security capabilities and their ability to project power, and it can be added as an important trait of the global society that has consequently brought new threats. Among various others, terrorism is probably the most well-known, well-established and widespread asymmetric threat. In today's global political arena, terrorism is used by state and non-state actors to fulfill their political agendas. Terrorism is used as an all-inclusive tool for regime change, subversion or revolution. Although the nature of terrorist groups is somewhat inconsistent, terrorism as a security and social phenomenon has one constant, which is reflected in its political dimension. The state's security apparatus, embodied in the form of conventional armed forces, is now becoming fragile, unable to tackle new threats and, to a certain extent, outdated. Conventional security forces were designed to defend against or engage an exterior threat which is more or less symmetric and visible. Terrorism as an asymmetrical threat, on the other hand, is part of hybrid, special or asymmetric warfare, in which specialized units, institutions or facilities represent the primary pillars of security. In today's global society, terrorism is probably the most acute problem, one that can paralyze entire countries and their political systems. This problem, however, cannot be engaged on an open field of battle; it requires a different approach, in which conventional armed forces cannot be used traditionally and their role must be adjusted. The research tries to shed light on the phenomenon of modern-day terrorism and to demonstrate its correlation with the state's conventional armed forces. States are obliged to adjust their security apparatus to the new realities of global society and to terrorism as an asymmetrical threat, a side-product of an unbalanced world.

Keywords: asymmetrical warfare, conventional forces, security, terrorism

Procedia PDF Downloads 238
400 Telemedicine Services in Ophthalmology: A Review of Studies

Authors: Nasim Hashemi, Abbas Sheikhtaheri

Abstract:

Telemedicine is the use of telecommunication and information technologies to provide health care services to people in remote areas where such services would often not be consistently available. Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Teleophthalmology can therefore overcome geographical barriers and improve the quality, access, and affordability of eye health care services. Since teleophthalmology has been widely applied in recent years, the aim of this study was to determine its different applications around the world. To this end, three bibliographic databases (Medline, ScienceDirect, Scopus) were comprehensively searched with these keywords: eye care, eye health care, primary eye care, diagnosis, detection, and screening of different eye diseases in conjunction with telemedicine, telehealth, teleophthalmology, e-services, and information technology. All types of papers were included in the study with no time restriction, and the search covered publications up to 2015. Finally, 70 articles were surveyed. We classified the results based on the 'type of eye problems covered' and 'the type of telemedicine services'. Based on the review, from the perspective of health care levels, there are three levels of eye health care: primary, secondary and tertiary eye care. From the perspective of eye care services, the main application of teleophthalmology in primary eye care was the diagnosis of different eye diseases such as diabetic retinopathy, macular edema, strabismus and age-related macular degeneration. The main application of teleophthalmology in secondary and tertiary eye care was the screening of eye problems, i.e., diabetic retinopathy, astigmatism and glaucoma screening. Teleconsultation between health care providers and ophthalmologists, as well as education and training sessions for patients, were other types of teleophthalmology services worldwide. Real-time, store-and-forward and hybrid methods were the main forms of communication from the perspective of 'teleophthalmology mode', chosen according to the IT infrastructure between sending and receiving centers. From the specialists' point of view, early detection of serious age-related ophthalmic disease in the population, screening of eye disease processes, consultation in emergency cases and comprehensive eye examination were the most important benefits of teleophthalmology. The cost-effectiveness of teleophthalmology projects, resulting from reduced transportation and accommodation costs, access to affordable eye care services and access to specialist opinions, was also a main advantage for patients. Teleophthalmology brings valuable secondary and tertiary care to remote areas. Applying teleophthalmology for detection, treatment and screening purposes, and expanding its use in new applications such as eye surgery, will therefore be a key tool to promote public health and to integrate eye care into primary health care.

Keywords: applications, telehealth, telemedicine, teleophthalmology

Procedia PDF Downloads 350
399 An Impregnated Active Layer Mode of Solution Combustion Synthesis as a Tool for the Solution Combustion Mechanism Investigation

Authors: Zhanna Yermekova, Sergey Roslyakov

Abstract:

Solution combustion synthesis (SCS) is a unique method which has repeatedly proved itself as an effective and efficient approach for the versatile synthesis of a variety of materials. It has significant advantages such as a relatively simple handling process, high rates of product synthesis, mixing of the precursors on a molecular level, and fabrication of nanoproducts as a result. Nowadays, an overwhelming majority of solution combustion investigations are performed through volume combustion synthesis (VCS), where the entire liquid precursor is heated until the combustion self-initiates throughout the volume. Fewer experiments are devoted to the steady-state self-propagating mode of SCS. Under the aforementioned regime, the precursor solution is dried to a gel-like medium, and later on, the gel substance is locally ignited. In such a case, a combustion wave propagates in a self-sustaining mode as in conventional solid combustion synthesis. Even less attention is given to the impregnated active layer (IAL) mode of solution combustion. An IAL approach implies that the solution combustion of the precursors is initiated on the surface of, or inside, a third substance. This work aims to emphasize the underestimated role of the impregnated active layer mode of solution combustion synthesis for fundamental studies of combustion mechanisms. It also serves the purpose of popularizing the technical terms and clarifying the differences between them. In order to do so, the solution combustion synthesis of γ-FeNi (PDF#47-1417) alloy has been accomplished within a short (seconds) one-step reaction of metal precursors with hexamethylenetetramine (HMTA) fuel. An idea of the special role of Ni in the process of alloy formation was suggested and confirmed with a specially organized set of experiments. The first set of experiments was conducted in a conventional steady-state self-propagating mode of SCS; an alloy was synthesized as a single monophasic product. In two other experiments, the synthesis was divided into two independent processes, which is possible under the IAL mode of solution combustion. The sequence of the process was changed according to the equations describing Experiments A and B below. Experiment A: Step 1. Fe(NO₃)₃*9H₂O + HMTA = FeO + gas products; Step 2. FeO + Ni(NO₃)₂*6H₂O + HMTA = Ni + FeO + gas products. Experiment B: Step 1. Ni(NO₃)₂*6H₂O + HMTA = Ni + gas products; Step 2. Ni + Fe(NO₃)₃*9H₂O + HMTA = Fe₃Ni₂ + traces (Ni + FeO). Based on the IAL experiment results, one can see that combustion of Fe(NO₃)₃*9H₂O on the surface of Ni leads to alloy formation, while the presence of already formed FeO does not affect the Ni(NO₃)₂*6H₂O + HMTA reaction in any way, and Ni is the main product of the synthesis.
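
For readability, the two IAL sequences described above can also be set out as display reaction schemes (a presentation sketch only; stoichiometric coefficients and the exact gas-phase products are not specified in the abstract and are omitted):

```latex
\begin{align*}
\text{A, step 1:}\quad & \mathrm{Fe(NO_3)_3\cdot 9H_2O + HMTA \;\rightarrow\; FeO + \text{gas products}}\\
\text{A, step 2:}\quad & \mathrm{FeO + Ni(NO_3)_2\cdot 6H_2O + HMTA \;\rightarrow\; Ni + FeO + \text{gas products}}\\
\text{B, step 1:}\quad & \mathrm{Ni(NO_3)_2\cdot 6H_2O + HMTA \;\rightarrow\; Ni + \text{gas products}}\\
\text{B, step 2:}\quad & \mathrm{Ni + Fe(NO_3)_3\cdot 9H_2O + HMTA \;\rightarrow\; Fe_3Ni_2 + \text{traces}\ (Ni + FeO)}
\end{align*}
```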

Keywords: alloy, hexamethylenetetramine, impregnated active layer mode, mechanism, solution combustion synthesis

Procedia PDF Downloads 111
398 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels

Authors: Lorenzo Petrucci

Abstract:

This paper is intended to give insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated on a dairy farm. The off-grid plant under analysis comprises a dual-fuel genset as well as electrical and thermal storage equipment and an adsorption machine. The loads are the different apparatus used in the dairy farm, a household where the workers live, and a small electric vehicle whose batteries can also be used as a power source in case of emergency. The insertion of an adsorption machine into the plant is mainly justified by the abundance of thermal energy and the simultaneously high cooling demand associated with the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads which cannot sustain an interrupted supply of power over time. As a consequence, a photovoltaic-thermal (PVT) panel is included in the plant and is tasked with providing energy independently of potentially disruptive events such as engine malfunctioning or scarce and unstable supplies of fuel. To manage the plant efficiently, an energy dispatch strategy is created in order to control the flow of energy between the power sources and the thermal and electric storage. In this article we elaborate on models of the equipment, and from these models we extract parameters useful to build load-dependent profiles of the prime movers and storage efficiencies. We show that under reasonable assumptions the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized to a value 25% higher than the total electrical peak demand operates 65% of the time below the minimum acceptable load threshold. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors; in this way, the excess electric energy generated can be transformed into useful heat. The combination of PVT and electrical storage to support the prioritized loads in an emergency scenario is evaluated on the two days of the year having the lowest and highest irradiation values, respectively. The results show that the renewable energy component of the plant can successfully sustain the prioritized loads, and only on the day with very low irradiation levels does it also need the support of the EVs' battery. Finally, we show that the adsorption machine can reduce the ice builder and air conditioning energy consumption by 40%.
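
The kind of rule-based dispatch described above can be sketched as follows. This is a minimal illustration, not the authors' model: all names, thresholds, and component sizes are illustrative assumptions, except the 25% genset oversizing and the dump-load idea, which follow the abstract.

```python
# Minimal sketch of a priority-based dispatch rule with dump loads.
def dispatch(step):
    """Decide how to cover electrical demand for one time step (kW)."""
    load = step["priority_load"] + step["other_load"]
    pv = step["pv_output"]                      # PVT electrical output
    genset_rating = 1.25 * step["peak_demand"]  # sized 25% above peak demand
    min_genset_load = 0.30 * genset_rating      # assumed minimum acceptable loading

    residual = max(load - pv, 0.0)              # PV serves the load first
    battery = genset = dump = 0.0

    if residual == 0.0:
        dump = pv - load                        # surplus PV -> dump resistors (heat)
    elif residual < min_genset_load:
        # Running the genset this low is unacceptable: either discharge the
        # battery, or run the genset at its minimum and burn the excess in
        # dump-load resistors, recovering it as useful heat.
        if step["battery_soc"] > step["soc_reserve"]:
            battery = residual
        else:
            genset = min_genset_load
            dump = min_genset_load - residual
    else:
        genset = residual

    return {"pv": min(pv, load), "battery": battery, "genset": genset, "dump": dump}


if __name__ == "__main__":
    example = {"priority_load": 4.0, "other_load": 2.0, "pv_output": 3.0,
               "peak_demand": 20.0, "battery_soc": 0.6, "soc_reserve": 0.3}
    print(dispatch(example))
```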

Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration

Procedia PDF Downloads 157
397 On the Semantics and Pragmatics of 'Be Able To': Modality and Actualisation

Authors: Benoît Leclercq, Ilse Depraetere

Abstract:

The goal of this presentation is to shed new light on the semantics and pragmatics of be able to. It presents the results of a corpus analysis based on data from the BNC (British National Corpus) and discusses these results in light of a specific stance on the semantics-pragmatics interface, taking into account recent developments. Be able to is often discussed in relation to can and could, all of which can be used to express ability. Such an onomasiological approach often results in the identification of usage constraints for each expression. In the case of be able to, it is the formal properties of the modal expression (unlike can and could, be able to has non-finite forms) that are in the foreground, and the modal expression is described as the verb that conveys future ability. Be able to is also argued to express actualised ability in the past (I was able to/could open the door). This presentation aims to provide a more accurate pragmatic-semantic profile of be able to, based on extensive data analysis and embedded in a very explicit view on the semantics-pragmatics interface. A random sample of 3000 examples (1000 for each modal verb) extracted from the BNC was analysed to address the following issues. First, the challenge is to identify the exact semantic range of be able to. The results show that, contrary to general assumption, be able to does not only express ability but shares most of the root meanings usually associated with the possibility modals can and could. The data reveal that what is called opportunity is, in fact, the most frequent meaning of be able to. Second, attention is given to the notion of actualisation. It is commonly argued that be able to is the preferred form when the residue actualises: (1) The only reason he was able to do that was because of the restriction (BNC, spoken); (2) It is only through my imaginative shuffling of the aces that we are able to stay ahead of the pack (BNC, written). Although this notion has been studied in detail within formal semantic approaches, empirical data is crucially lacking, and it is unclear whether actualisation constitutes a conventional (and distinguishing) property of be able to. The empirical analysis provides solid evidence that actualisation is indeed a conventional feature of the modal. Furthermore, the dataset reveals that be able to expresses actualised 'opportunities' and not actualised 'abilities'. In the final part of this paper, attention is given to the theoretical implications of the empirical findings, and in particular to the following paradox: how can the same expression encode both modal meaning (non-factual) and actualisation (factual)? It is argued that this largely depends on one's conception of the semantics-pragmatics interface, and that it need not be an issue when actualisation (unlike modality) is analysed as a generalised conversational implicature and thus considered part of the conventional pragmatic layer of be able to.

Keywords: actualisation, modality, pragmatics, semantics

Procedia PDF Downloads 105
396 Evaluation of Different Cropping Systems under Organic, Inorganic and Integrated Production Systems

Authors: Sidramappa Gaddnakeri, Lokanath Malligawad

Abstract:

Research on the production technology of an individual crop, commodity or breed alone has not brought sustainability or stability in crop production. The sustainability of the system over the years depends on the maintenance of soil health. An organic production system, which includes the use of organic manures, biofertilizers and green manuring for nutrient supply and biopesticides for plant protection, helps to sustain productivity even under adverse climatic conditions. The study was initiated to evaluate the performance of different cropping systems under organic, inorganic and integrated production systems at the Institute of Organic Farming, University of Agricultural Sciences, Dharwad (Karnataka, India), under the ICAR Network Project on Organic Farming. The trial was conducted for four years (2013-14 to 2016-17) on a fixed site. Five cropping systems, viz., sequence cropping of cowpea-safflower, greengram-rabi sorghum and maize-bengalgram, sole cropping of pigeonpea, and intercropping of groundnut + cotton, were evaluated under six nutrient management practices. The nutrient management practices are NM1 (100% organic farming: organic manures equivalent to 100% N for cereals/cotton or 100% P2O5 for legumes), NM2 (75% organic farming: organic manures equivalent to 75% N for cereals/cotton or 100% P2O5 for legumes + cow urine and vermi-wash application), NM3 (integrated farming: 50% organic + 50% inorganic nutrients), NM4 (integrated farming: 75% organic + 25% inorganic nutrients), NM5 (100% inorganic farming: recommended dose of inorganic fertilizers) and NM6 (recommended dose of inorganic fertilizers + recommended rate of farmyard manure (FYM)). Among the cropping systems evaluated under the different production systems, the groundnut + hybrid cotton (2:1) intercropping system was found to be more remunerative than the sole pigeonpea cropping system, the greengram-sorghum sequence cropping system, the maize-chickpea sequence cropping system and the cowpea-safflower sequence cropping system, irrespective of the production system. Production practices involving the application of recommended rates of fertilizers + recommended rates of organic manures (farmyard manure) produced higher net monetary returns and a higher B:C ratio than the integrated production systems involving application of 50% organic + 50% inorganic and 75% organic + 25% inorganic nutrients, and than the organic production systems alone. The two organic production systems, viz., 100% organic production (organic manures equivalent to 100% N for cereals/cotton or 100% P2O5 for legumes) and 75% organic production (organic manures equivalent to 75% N for cereals or 100% P2O5 for legumes + cow urine and vermi-wash application), were found to be on par. Further, the integrated production system involving application of organic manures and inorganic fertilizers was found to be more beneficial than the organic production systems.

Keywords: cropping systems, production systems, cowpea, safflower, greengram, pigeonpea, groundnut, cotton

Procedia PDF Downloads 171
395 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in Cubesat and Satellites

Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan

Abstract:

All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution that causes a change in momentum, which can manifest itself in spinning and non-spinning satellites in different manners. This problem can cause orbital decay in satellites which, if not corrected, will interfere with their primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used, because each component has different power requirements and is used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating of the satellite, and the way in which it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any facing away will radiate heat (a net loss). We can use state-of-the-art foldable hybrid insulator/radiator panels: when a panel is opened, that particular side acts as a radiator for dissipating heat. Here the insulator, in our case aerogel, is sandwiched between solar cells and radiator fins (solar cells outside and radiator fins inside). Each insulated side panel can be opened and closed using actuators depending on the telemetry data of the CubeSat. The opening and closing of the panels are governed by code designed for this particular application, in which the computer calculates where the Sun is relative to the satellite. According to the data obtained from the sensors, the computer decides which panel to open and by how many degrees. For example, if a panel opens 180 degrees, its solar cells will directly face the Sun, in turn increasing the current generated by that particular panel. Another example is when one of the corners of the CubeSat faces the Sun, or when more than one side has a considerable amount of sunlight incident on it; the code will then determine the optimum opening angle for each panel and adjust accordingly. Another means of cooling is passive cooling. It is the most suitable system for a CubeSat because of its limited power budget, low mass requirements, and less complex design, and it also has advantages in terms of reliability and cost. One of the passive means is to make the whole chassis act as a heat sink. For this, we can make the entire chassis out of heat pipes and connect the heat source to the chassis with a thermal strap that transfers the heat to it.
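
The panel-actuation logic described above can be sketched as follows. This is a minimal illustration, not the authors' flight code: the face normals, opening threshold, and the mapping of Sun incidence to a 0-180 degree command are illustrative assumptions.

```python
# Estimate how directly each face sees the Sun and open the hinged
# insulator/radiator panels accordingly.
import numpy as np

FACE_NORMALS = {              # body-frame unit normals of the six faces
    "+X": np.array([1, 0, 0]), "-X": np.array([-1, 0, 0]),
    "+Y": np.array([0, 1, 0]), "-Y": np.array([0, -1, 0]),
    "+Z": np.array([0, 0, 1]), "-Z": np.array([0, 0, -1]),
}

def panel_commands(sun_vector_body, open_threshold=0.2):
    """Return an opening angle (deg) for each face's panel.

    sun_vector_body: vector from the satellite toward the Sun, expressed in
    the body frame (e.g. from sun-sensor telemetry). Faces that see the Sun
    open toward 180 deg so their solar cells face it; shaded faces stay
    closed (0 deg) and act as insulated radiator covers.
    """
    sun = np.asarray(sun_vector_body, dtype=float)
    sun = sun / np.linalg.norm(sun)
    commands = {}
    for face, normal in FACE_NORMALS.items():
        incidence = float(np.dot(normal, sun))            # cosine of Sun incidence angle
        if incidence > open_threshold:
            commands[face] = round(180.0 * incidence, 1)  # more direct Sun -> wider opening
        else:
            commands[face] = 0.0
    return commands

if __name__ == "__main__":
    # Sun roughly on a corner between +X and +Y: both panels open partially.
    print(panel_commands([0.7, 0.7, 0.1]))
```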

Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite

Procedia PDF Downloads 77
394 Flipping the Script: Opportunities, Challenges, and Threats of a Digital Revolution in Higher Education

Authors: James P. Takona

Abstract:

In a world that is experiencing sharp digital transformations guided by digital technologies, the potential of technology to drive transformation and evolution in higher education is apparent. Higher education is facing a paradigm shift that exposes susceptibilities and threats to fully online programs in the face of post-Covid-19 trends of commodification. This historical moment is likely to be remembered as a critical turning point from analog to digital degree-focused learning modalities, where the default delivery mode becomes the pivot point of competition between higher education institutions. Fall 2020 marks a significant inflection point in higher education as students, educators, and government leaders scrutinize higher education's price and value propositions through the new lens of traditional lecture halls versus multiple digitized delivery modes. Online education has since paved the way for a pedagogical shift in how teachers teach and students learn. The incremental growth of online education in the West can now be attributed to increasing patronage among students, faculty, and institution administrators. More often than not, college instructors assume facilitator roles in this learning mode, while students become active collaborators and no longer passive learners. This paper offers valuable discernments into the threats, challenges, and opportunities of a massive digital revolution in delivering degree programs. Digital instruction and learning demand instructional practices that revolve around collaborative work, engaging students in learning activities, and an engagement that promotes active efforts to build strong connections between course activities and the expected learning pace for all students. Appropriate use of digital technologies demands that instructors and students have solid prior skills. Where digital technology is needed to support instruction and learning, intelligent tutoring offers great promise, yet poorly implemented digital learning may not improve outcomes for specific student populations. Digital learning benefits students differently depending on their circumstances and background and those of the institution and/or program. Students have alternative options, access to the convenience of learning anytime and anywhere, and the possibility of acquiring and developing new skills leading to lifelong learning.

Keywords: digitized learning, digital education, collaborative work, higher education, online education, digitized delivery

Procedia PDF Downloads 65
393 Optimizing Cell Culture Performance in an Ambr15 Microbioreactor Using Dynamic Flux Balance and Computational Fluid Dynamic Modelling

Authors: William Kelly, Sorelle Veigne, Xianhua Li, Zuyi Huang, Shyamsundar Subramanian, Eugene Schaefer

Abstract:

The ambr15™ bioreactor is a single-use microbioreactor for cell line development and process optimization. The ambr system offers fully automatic liquid handling with the possibility of fed-batch operation and automatic control of pH and oxygen delivery. With operating conditions for large-scale biopharmaceutical production properly scaled down, microbioreactors such as the ambr15™ can potentially be used to predict the effect of process changes such as modified media or different cell lines. In this study, gassing rates and dilution rates were varied for a semi-continuous cell culture system in the ambr15™ bioreactor. The corresponding changes to metabolite production and consumption, as well as cell growth rate and therapeutic protein production, were measured. Conditions were identified in the ambr15™ bioreactor that produced metabolic shifts and specific metabolic and protein production rates also seen in the corresponding larger (5 liter) scale perfusion process. A dynamic flux balance (DFB) model was employed to understand and predict the metabolic changes observed. The DFB model predicted the trends observed experimentally, including lower specific glucose consumption when CO₂ was maintained at higher levels (i.e., 100 mm Hg) in the broth. A computational fluid dynamic (CFD) model of the ambr15™ was also developed to understand the transfer of O₂ and CO₂ to the liquid. This CFD model predicted gas-liquid flow in the bioreactor using the ANSYS software. The two-phase flow equations were solved via an Eulerian method, with population balance equations tracking the size of the gas bubbles resulting from breakage and coalescence. Reasonable results were obtained, in that the carbon dioxide mass transfer coefficient (kLa) and the air holdup increased with higher gas flow rate. Volume-averaged kLa values at 500 RPM increased as the gas flow rate was doubled and matched experimentally determined values. These results form a solid basis for optimizing the ambr15™, using both CFD and FBA modelling approaches together, for use in microscale simulations of larger scale cell culture processes.
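
The dynamic flux balance idea referred to above can be illustrated with a toy example: at each time step an inner flux-balance LP picks the growth-maximizing flux distribution subject to an uptake bound set by the current broth state, and the outer loop updates biomass and glucose. The two-reaction network, yield, and kinetic constants below are invented for illustration and are not the paper's cell culture model.

```python
# Toy dynamic flux balance: inner FBA LP + outer Euler update.
from scipy.optimize import linprog

# Toy network: v_glc (glucose uptake) and v_gro (growth), with the internal
# glucose pool at pseudo-steady state: v_glc - (1/Y_XS) * v_gro = 0
Y_XS = 0.5            # gDW biomass per mmol glucose (assumed)
VMAX, KM = 0.8, 0.5   # Michaelis-Menten uptake parameters (assumed)

def fba_step(glc):
    """Inner FBA LP: maximize growth flux for the current glucose level."""
    uptake_max = VMAX * glc / (KM + glc)
    # variables x = [v_glc, v_gro]; maximize v_gro  ->  minimize -v_gro
    res = linprog(c=[0.0, -1.0],
                  A_eq=[[1.0, -1.0 / Y_XS]], b_eq=[0.0],
                  bounds=[(0.0, uptake_max), (0.0, None)])
    return res.x  # (v_glc, v_gro)

def simulate(hours=60.0, dt=0.1, x0=0.1, glc0=20.0):
    x, glc, t = x0, glc0, 0.0
    while t < hours and glc > 1e-6:
        v_glc, mu = fba_step(glc)
        x += mu * x * dt          # dX/dt = mu * X
        glc -= v_glc * x * dt     # dG/dt = -v_glc * X
        glc = max(glc, 0.0)
        t += dt
    return t, x, glc

if __name__ == "__main__":
    t, x, glc = simulate()
    print(f"t = {t:.1f} h, biomass = {x:.2f} gDW/L, glucose = {glc:.2f} mM")
```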

Keywords: cell culture, computational fluid dynamics, dynamic flux balance analysis, microbioreactor

Procedia PDF Downloads 257
392 Assessment of Heavy Metal Contamination in Soil and Groundwater Due to Leachate Migration from an Open Dumping Site

Authors: Kali Prasad Sarma

Abstract:

Indiscriminate disposal of municipal solid waste (MSW) in open dumping sites is a common scenario in developing countries like India and poses a risk to the environment as well as human health. The objective of the present investigation was to find the concentrations of heavy metals (Pb, Cr, Ni, Mn, Zn, Cu, and Cd) and other physicochemical parameters of leachate and soil collected from an open dumping site of Tezpur town, Assam, India, and the associated potential ecological risk. Tezpur is an urban agglomeration coming under the category of Class I UAs/Towns, with a population of 105,377 as per data released by the Government of India for Census 2011. The impact of the leachate on the groundwater was also addressed in our study. The concentrations of heavy metals were determined using ICP-OES. Energy-dispersive X-ray (SEM-EDS) microanalysis was also conducted to confirm the presence of the studied metals in the soil. X-ray diffraction (XRD) analysis and Fourier transform infrared (FTIR) spectroscopy were also used to identify the dominant minerals present in the soil samples. The measured heavy metals in the soil samples followed the order Mn > Pb > Cu > Zn > Cr > Ni > Cd. The assessment of heavy metal contamination in the soil was carried out by calculating the enrichment factor (EF), geo-accumulation index (Igeo), contamination factor (Cfi), degree of contamination (Cd), pollution load index (PLI) and ecological risk factor (Eri). The study showed that the concentrations of Pb, Cu, and Cd were much higher than their respective average shale values, and the EF of the soil samples depicted very severe enrichment for Pb, Cu, and Cd and moderate enrichment for Cr and Zn. Calculated Igeo values indicated that the soil is moderately to strongly contaminated with Pb and uncontaminated to moderately contaminated with Cd and Cu. The Cfi value for Pb indicates a very strong contamination level of the metal in the soil. The Cfi values for Cu and Cd were 2.37 and 1.65, respectively, indicating a moderate contamination level. To apportion the possible sources of heavy metal contamination in soil, principal component analysis (PCA) was adopted. From the leachate, heavy metals accumulate in the dumping site soil, from which they can easily percolate through the soil and reach the groundwater. The possible relation of groundwater contamination to leachate percolation was examined by analyzing the heavy metal concentrations in groundwater with respect to distance from the dumping site. The concentrations of Cd and Pb in groundwater (at a distance of 20 m from the dumping site) exceeded the permissible limits for drinking water set by the WHO. The occurrence of elevated concentrations of potentially toxic heavy metals such as Pb and Cd in groundwater and soil is of much environmental concern, as it is detrimental to human health and the ecosystem.
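
The contamination indices named above follow standard definitions (contamination factor, degree of contamination, pollution load index, geo-accumulation index, enrichment factor) and can be computed as in the sketch below. The sample and background (average shale) concentrations used here are illustrative placeholders, not the Tezpur data set.

```python
# Sketch of the standard soil-pollution indices.
import math

sample = {"Pb": 120.0, "Cu": 107.0, "Cd": 0.50, "Fe": 40000.0}    # mg/kg, hypothetical
background = {"Pb": 20.0, "Cu": 45.0, "Cd": 0.30, "Fe": 47200.0}  # mg/kg, illustrative shale values

def contamination_factor(m):            # Cf_i = C_sample / C_background
    return sample[m] / background[m]

def geoaccumulation_index(m):           # Igeo = log2( C / (1.5 * B) )
    return math.log2(sample[m] / (1.5 * background[m]))

def enrichment_factor(m, ref="Fe"):     # EF = (C/C_ref)_sample / (C/C_ref)_background
    return (sample[m] / sample[ref]) / (background[m] / background[ref])

metals = ["Pb", "Cu", "Cd"]
cf = {m: contamination_factor(m) for m in metals}
degree_of_contamination = sum(cf.values())        # Cd = sum of the Cf values
pli = math.prod(cf.values()) ** (1.0 / len(cf))   # PLI = (prod Cf)^(1/n)

for m in metals:
    print(f"{m}: Cf={cf[m]:.2f}, Igeo={geoaccumulation_index(m):.2f}, EF={enrichment_factor(m):.2f}")
print(f"Degree of contamination = {degree_of_contamination:.2f}, PLI = {pli:.2f}")
```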

Keywords: groundwater, heavy metal contamination, leachate, open dumping site

Procedia PDF Downloads 87
391 The Toxicity of Doxorubicin Connected with Nanotransporters

Authors: Iva Blazkova, Amitava Moulick, Vedran Milosavljevic, Pavel Kopel, Marketa Vaculovicova, Vojtech Adam, Rene Kizek

Abstract:

Doxorubicin is one of the most commonly used and most effective chemotherapeutic drugs. This anthracycline drug, isolated from the bacterium Streptomyces peucetius var. caesius, is sold under the trade name Adriamycin (hydroxydaunomycin, hydroxydaunorubicin). Doxorubicin is used in single therapy to treat hematological malignancies (blood cancers, leukaemia, lymphoma), many types of carcinoma (solid tumors) and soft tissue sarcomas. It has many serious side effects like nausea and vomiting, hair loss, myelosuppression, oral mucositis, skin reactions and redness, but the most serious one is cardiotoxicity. Because of the risk of heart attack and congestive heart failure, the total dose administered to patients has to be accurately monitored. With the aim of lowering the side effects and achieving targeted delivery of doxorubicin into the tumor tissue, different nanoparticles are being studied. The drug can be bound on the surface of a nanoparticle, encapsulated in the inner cavity, or incorporated into the structure of the nanoparticle. Among others, carbon nanoparticles (graphene, carbon nanotubes, fullerenes) are highly studied. Besides a number of inorganic nanoparticles, organic ones, mainly lipid-based and polymeric nanoparticles, also exhibit great potential. The aim of this work was to perform a toxicity study of free doxorubicin compared to doxorubicin conjugated with various nanotransporters. The effect of liposomes, fullerenes, graphene, and carbon nanotubes on the toxicity was analyzed. As a first step, the binding efficacy between doxorubicin and the nanotransporter was determined. The highest efficacy was detected in the case of liposomes (85% of the applied drug was encapsulated), followed by graphene, carbon nanotubes and fullerenes. For the toxicological studies, chicken embryos incubated under controlled conditions (37.5 °C, 45% rH, rotation every 2 hours) were used. On the 7th developmental day of the chicken embryos, doxorubicin or the doxorubicin-nanotransporter complex was applied on the chorioallantoic membrane of the eggs, and the viability was analyzed every day until the 17th developmental day. Then the embryos were extracted from the shell, and the distribution of doxorubicin in the body was analyzed by measurement of organ extracts using laser-induced fluorescence detection. The chicken embryo mortality caused by free doxorubicin (30%) was significantly lowered by using the conjugation with nanomaterials. The highest accumulation of doxorubicin and doxorubicin-nanotransporter complexes was observed in the liver tissue.

Keywords: doxorubicin, chicken embryos, nanotransporters, toxicity

Procedia PDF Downloads 431
390 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock

Authors: Vahid Bairami Rad

Abstract:

The intelligentization of agricultural fields makes it possible to monitor and control, online and from a mobile phone or computer, the temperature, humidity, and other variables affecting the growth of agricultural products. Making agricultural fields and gardens smart is one of the best ways to optimize agricultural equipment and has a direct effect on the growth of plants and agricultural products and on farms. Smart farms, built on the Internet of Things and artificial intelligence, are the topic we discuss here. Agriculture is becoming smarter every day. From large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect intelligent data than in previous years. Data related to soil chemistry allow people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers have allowed irrigation processes to be optimized while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors for people's needs. These sensors are embedded in battery-powered IoT devices that can operate for years and have access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now be used securely to manage existing devices at scale. IoT cloud services also provide a set of application enablement services that can easily be used by developers, allowing them to focus on building the application business logic. These developments have created powerful new applications in the field of the Internet of Things, and these applications can be used in various industries, such as agriculture, to build smart farms. But the question is, what makes today's farms truly smart farms? Let us put this question another way: when will the technologies associated with smart farms reach the point where the range of intelligence they provide can exceed the intelligence of experienced and professional farmers?
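
The sensor-driven decision loop described above can be sketched in a few lines. This is a minimal illustration only: the thresholds, topic names, and the sensor/uplink stubs are assumptions, not a specific product's API.

```python
# Threshold-based irrigation control driven by a soil-moisture reading,
# with the decision reported over a stub low-power network uplink.
import random
import time

MOISTURE_LOW = 30.0    # % volumetric water content below which we irrigate (assumed)
MOISTURE_HIGH = 45.0   # % at which we stop irrigating (assumed)

def read_soil_moisture():
    """Stand-in for a capacitive soil-moisture sensor read."""
    return random.uniform(20.0, 60.0)

def publish(topic, payload):
    """Stand-in for an LPWAN/MQTT uplink to the farm dashboard."""
    print(f"[uplink] {topic}: {payload}")

def control_step(valve_open):
    moisture = read_soil_moisture()
    if moisture < MOISTURE_LOW and not valve_open:
        valve_open = True                  # soil too dry: open irrigation valve
    elif moisture > MOISTURE_HIGH and valve_open:
        valve_open = False                 # soil wet enough: close valve
    publish("field/plot-1/status", {"moisture_pct": round(moisture, 1),
                                    "valve_open": valve_open})
    return valve_open

if __name__ == "__main__":
    valve = False
    for _ in range(5):
        valve = control_step(valve)
        time.sleep(0.1)
```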

Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, Arduino Uno

Procedia PDF Downloads 32
389 Parents and Stakeholders’ Perspectives on Early Reading Intervention Implemented as a Curriculum for Children with Learning Disabilities

Authors: Bander Mohayya Alotaibi

Abstract:

Valuable partnerships between parents and teachers may develop positive and effective interactions between home and school. These help stakeholders share information and resources regarding student academics during ongoing interactions. Thus, partnerships build a solid foundation for both families and schools to help children succeed in school. Parental involvement can be seen as an effective tool that can change homes and communities, not just school systems. Seeking parents' and stakeholders' attitudes toward learning and learners can help schools design a curriculum. Subsequently, this information can be used to find ways to help improve the academic performance of students, especially in low-performing schools. There may be some conflicts when designing curriculum, and designing curriculum might bring additional educational expectations for all sides. There is a lack of research that targets parents' attitudes toward specific concepts of curriculum content. More research is needed to study the perspective that parents of children with learning disabilities (LD) have regarding early reading curricula. Parents' and stakeholders' perspectives on an early reading intervention implemented as a curriculum for children with LD were studied through advanced quantitative research. This study seeks to understand stakeholders' and parents' perspectives on key concepts and essential early reading skills that impact the design of a curriculum that will serve as an intervention for early struggling readers who have LD. Those concepts, or stages, include phonics, phonological awareness, and reading fluency, as well as strategies used at home by parents. A survey instrument was used to gather the data. Participants were recruited through 29 schools and districts of a metropolitan area in the northern part of Saudi Arabia. Participants were stakeholders, including parents of children with learning disabilities. Data were collected by distributing a paper-and-pen survey to schools. Psychometric properties of the instrument were evaluated for the validity and reliability of the survey; face validity, content validity, and construct validity, including an exploratory factor analysis, were used to shape and reevaluate the structure of the instrument. Multivariate analysis of variance (MANOVA) was used to find differences between the variables. The study reports the perspectives of stakeholders toward reading strategies, phonics, phonological awareness, and reading fluency. Suggestions and limitations are also discussed.
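
The multivariate comparison described above can be sketched as a one-factor MANOVA over several perceived-importance scores across stakeholder groups. The column names and the simulated data below are illustrative placeholders, not the survey instrument itself.

```python
# One-factor MANOVA sketch with statsmodels on simulated Likert-style scores.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 90
df = pd.DataFrame({
    "group": rng.choice(["parent", "teacher", "administrator"], size=n),
    "phonics": rng.normal(4.0, 0.6, n),          # simulated 1-5 importance ratings
    "phon_awareness": rng.normal(3.8, 0.7, n),
    "fluency": rng.normal(3.5, 0.8, n),
})

# Dependent variables on the left, grouping factor on the right.
fit = MANOVA.from_formula("phonics + phon_awareness + fluency ~ group", data=df)
print(fit.mv_test())   # Wilks' lambda, Pillai's trace, etc., for the group effect
```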

Keywords: stakeholders, learning disability, early reading, perspectives, parents, intervention, curriculum

Procedia PDF Downloads 133
388 The Effect of Organic Matter Maturation and Porosity Evolution on Methane Storage Potential in Shale-Gas Reservoirs

Authors: T. Topór, A. Derkowski, P. Ziemiański

Abstract:

The formation of organic matter (OM)-hosted nanopores upon thermal maturation is one of the key factors controlling methane storage potential in unconventional shale-gas reservoirs. In this study, subcritical CO₂ and N₂ gas adsorption measurements combined with scanning electron microscopy and supercritical methane adsorption have been used to characterize the pore system and methane storage potential in black shales from the Baltic Basin (Poland). The samples were collected from virtually equivalent Llandovery strata across the basin and represent a complete diagenetic sequence, from thermally immature to overmature. The results demonstrate that thermal maturation is the dominant mechanism controlling the formation of OM micro- and mesopores in the Baltic Basin shales. The formation of micro- and mesopores occurs in the oil window (vitrinite reflectance, VR, ~0.5-0.9%) as a result of oil expulsion from kerogen, which leaves the OM highly porous. The generated hydrocarbons then turn into solid bitumen, causing pore blocking and a substantial decrease in micro- and mesopore volume in late-mature shales (VR ~0.9-1.2%). Both micro- and mesopores are regenerated in the middle of the catagenesis range (VR 1.4-1.9%) due to secondary cracking of OM and gas formation. The micropore volume in the investigated shales is almost exclusively controlled by the OM content. The contribution of clay minerals to micropore volume is insignificant and masked by the strong contribution from OM. Methane adsorption capacity in the Baltic Basin shales is predominantly controlled by microporous OM with pores < 1.5 nm. The mesopore volume (2-50 nm) and mesopore surface area have no effect on methane sorption behavior. The adsorbed methane density equivalent, calculated as the absolute methane adsorption divided by the micropore volume, revealed a decrease in the methane loading potential of micropores with increasing maturity. The highest methane loading potential in micropores is observed for OM before metagenesis (VR < 2%), where the adsorbed methane density equivalent is greater than the density of liquid methane. This implies that, in addition to physical adsorption, absorption of methane in OM may occur before metagenesis. After OM content reduction using NaOCl solution, the methane adsorption capacity substantially decreases, suggesting a significantly greater adsorption potential for the OM microstructure than for the clay mineral matrix.
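
The density-equivalent criterion used above can be stated compactly as below (a sketch of the definition as described in the abstract, with n_abs the absolute methane adsorption per unit rock mass, M_CH4 the molar mass of methane, and V_micro the micropore volume; the reference value for liquid methane is the usual handbook figure at its normal boiling point, not a number from this study):

```latex
\rho_{\mathrm{ads}}^{\mathrm{eq}} \;=\; \frac{n_{\mathrm{abs}}\, M_{\mathrm{CH_4}}}{V_{\mathrm{micro}}},
\qquad
\text{absorption in OM is suspected when } \rho_{\mathrm{ads}}^{\mathrm{eq}} > \rho_{\mathrm{CH_4,\,liquid}} \approx 0.42~\mathrm{g\,cm^{-3}}.
```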

Keywords: maturation, methane sorption, organic matter, porosity, shales

Procedia PDF Downloads 220
387 Convectory Policing-Reconciling Historic and Contemporary Models of Police Service Delivery

Authors: Mark Jackson

Abstract:

Description: This paper is based on a theoretical analysis of the efficacy of the dominant model of policing in western jurisdictions. Those results are then compared with a similar analysis of a traditional reactive model. It is found that neither model provides for optimal delivery of services; instead, optimal service can be achieved by a synchronous hybrid model, termed the Convectory Policing approach. Methodology and Findings: For over three decades, problem-oriented policing (PO) has been the dominant model for western police agencies. Initially based on the work of Goldstein during the 1970s, the problem-oriented framework has spawned endless variants and approaches, most of which embrace a problem-solving rather than a reactive approach to policing. This has included the Area Policing Concept (APC) applied in many smaller jurisdictions in the USA, the Scaled Response Policing Model (SRPM) currently under trial in Western Australia, and the Proactive Pre-Response Approach (PPRA), which has also seen some success. All of these, in some way or another, are largely based on a model that eschews a traditional reactive model of policing. Convectory Policing (CP) is an alternative model which challenges the underpinning assumptions that have seen the proliferation of the PO approach in the last three decades, and it begins by questioning the economics on which PO is based. It is argued that, in essence, PO relies on an unstated, and often unrecognised, assumption that resources will be available to meet demand for policing services while at the same time maintaining the capacity to deploy staff to develop solutions to the problems which were ultimately manifested in those same calls for service. The CP model relies on observations from numerous western jurisdictions to challenge the validity of that underpinning assumption, particularly in a fiscally tight environment. In deploying staff to pursue and develop solutions to underpinning problems, there is clearly an opportunity cost: those staff cannot be allocated to alternative duties while engaged in a problem-solution role. At the same time, resources in use responding to calls for service are unavailable, while committed to that role, to pursue solutions to the problems giving rise to those same calls for service. The two approaches, reactive and PO, are therefore dichotomous: one cannot be optimised while the other is being pursued. Convectory Policing is a pragmatic response to the schism between the competing traditional and contemporary models. If it is not possible to serve either model with any real rigour, it becomes necessary to tailor an approach to deliver specific outcomes against which success or otherwise might be measured. CP proposes that a structured, roster-driven approach to calls for service, combined with the application of what is termed a resource-effect response capacity, has the potential to resolve the inherent conflict between traditional and contemporary models of policing and the expectations of the community in terms of community-policing-based problem-solving models.

Keywords: policing, reactive, proactive, models, efficacy

Procedia PDF Downloads 460
386 Other End of the Leash: The Volunteer Handlers Perspective of Animal-Assisted Interventions

Authors: Julie A. Carberry, Victor Maddalena

Abstract:

Animal-Assisted Interventions (AAIs) have existed in various forms for centuries, and in the past 30 years there has been a dramatic increase in their popularity. AAIs are now part of the lives of persons of all ages in many types of institutions. Anecdotal evidence of the benefits of AAIs has led to widespread adoption, yet there remains a lack of a solid research base to support them. The research question was: what are the lived experiences of AAI volunteer handlers? An interpretive phenomenological methodology was used for this qualitative study. Data were collected from 1-2 hour long semi-structured interviews and one observational field visit. All interviews were conducted, transcribed, and coded for themes by the principal investigator. Participants must have been active St. John Ambulance Therapy Dog Program volunteers for at least one year. In total, 14 volunteer handlers, along with some of their dogs, were included. The St. John Ambulance is a not-for-profit organization that provides training and community services to Canadians. The Therapy Dog Program is one of its four nationally recognized core community service programs. The program incorporates dogs in the otherwise traditional therapeutic intervention of friendly visitation with clients. The lack of formal objectives and goals, and of a trained therapist, defines the program as an Animal-Assisted Activity (AAA), which is a type of AAI. Since the animals incorporated are dogs, the program is specifically a Canine-Assisted Activity (CAA), which is a type of Canine-Assisted Intervention (CAI). Six themes emerged from the analysis of the data: (a) a win-win-win situation for all parties involved (volunteer handlers, clients, and the dogs); (b) being on the other end of the leash: functions of the role of volunteer handler; (c) the importance of socialization: from spreading smiles to creating meaningful connections; (d) the role of the dog: initiating interaction and providing comfort; (e) an opportunity to feel good and destress; and (f) altruism versus personal rewards. Other insights were found regarding the program, clients, and staff. Possible implications of this research include increased organizational recruitment and retention of volunteer handlers, as well as increased support for CAAs and other CAIs that incorporate teams of volunteer handlers and their dogs. This support could, in turn, add overall support for the acceptance and broad implementation of AAIs as an alternative and/or complementary non-pharmaceutical therapeutic intervention.

Keywords: animal-assisted activity, animal-assisted intervention, canine-assisted activity, canine-assisted intervention, perspective, qualitative, volunteer handler

Procedia PDF Downloads 122
385 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, no quantum circuit implementation of these algorithms has been created, to the best of our knowledge. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time allows the decision problem to be transformed into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobabilistic superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, a node index calculator, a uniqueness checker, and a comparator, which were all created using only quantum Toffoli gates, including its special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits and to add up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
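
A back-of-the-envelope resource count follows from the encoding described above: the register uses N·log₂K qubits (N nodes, K-bounded degree), so the search space has K^N basis states, and Grover's algorithm needs roughly (π/4)·√(S/M) oracle calls to amplify M marked states out of S. The example sizes below are illustrative, not taken from the paper's datasets.

```python
# Grover resource counting for the N*log2(K) edge-index encoding.
import math

def grover_resources(n_nodes, k_degree, n_solutions=1):
    qubits = n_nodes * math.ceil(math.log2(k_degree))   # edge-index register size
    search_space = k_degree ** n_nodes                   # number of basis states
    iterations = math.floor((math.pi / 4) * math.sqrt(search_space / n_solutions))
    return qubits, search_space, iterations

if __name__ == "__main__":
    for n, k in [(6, 3), (8, 4), (10, 4)]:
        q, s, r = grover_resources(n, k)
        print(f"N={n}, K={k}: {q} qubits, {s} states, ~{r} Grover iterations")
```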

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 166
384 Phytoremediation Alternative for Landfill Leachate Sludges Doña Juana Bogotá D.C. Colombia Treatment

Authors: Pinzón Uribe Luis Felipe, Chávez Porras Álvaro, Ruge Castellanos Liliana Constanza

Abstract:

According to global data, solid waste management receives low economic investment in underdeveloped countries, the main factors being the lack of the advanced technologies acknowledged for proper operation and, at the same time, of technical development. It has been evidenced that communities have a distorted perception of the role of legalized final waste destinations, or 'landfills', as places requiring specific management, influenced primarily by their physical characteristics and by the information that the media provide about them, as well as by their wrong association with 'open dumps'. One of the major inconveniences in these landfills is the management of leachate sludges from treatment plants, as these exhibit a highly contaminating composition (physical, chemical and biological) for the natural environment when handled and disposed of improperly. This is the case of the Doña Juana landfill (RSDJ), Bogotá, Colombia, considered among the largest in South America, where management problems have persisted for decades since its creation, shaping the concept that society has acquired about this form of waste disposal and improper leachate handling. Within this research, phytoremediation alternatives for treating the sludge were determined by using plants that are able to degrade the heavy metals contained in it, allowing the resulting sludge to be used as a seal in the final landfill cover within a restoration process and providing an option to solve the landscape contamination problem, as well as to address the communities' perception and the conflicts that the landfill generates. For the project, chemical assays were performed on the leachate sludge that allowed the characterization of metals such as chromium (Cr), lead (Pb), arsenic (As) and mercury (Hg), in order to compare the amounts in the biosolids with the provisions of USEPA 40 CFR 503. The evaluations showed concentrations of 102.2 mg/kg of Cr, 0.49 mg/kg of Pb, 0.390 mg/kg of As and 0.104 mg/kg of Hg, all lower than the standards. A literature review on native plant species suitable for an alternative phytoremediation process and capable of degrading these metals was carried out. It concluded that, among them, Vetiveria zizanioides, Eichhornia crassipes and Limnobium laevigatum, owing to the hyperaccumulative characteristics of their leaves, stems and roots, may allow the reduction of these toxic elements in the environment, improving the outlook for disposal.

Keywords: health, landfill leachate sludge, heavy metals, phytoremediation

Procedia PDF Downloads 307
383 Hierarchical Zeolites as Catalysts for Cyclohexene Epoxidation Reactions

Authors: Agnieszka Feliczak-Guzik, Paulina Szczyglewska, Izabela Nowak

Abstract:

Catalyst-assisted oxidation reactions are among the key reactions exploited by various industries. Carrying them out yields essential compounds and intermediates, such as alcohols, epoxides, aldehydes, ketones, and organic acids. Researchers are devoting more and more attention to developing active and selective materials that find application in many catalytic reactions, such as cyclohexene epoxidation. This reaction yields 1,2-epoxycyclohexane and 1,2-diols as the main products. These compounds are widely used as intermediates in the perfume industry and in the synthesis of drugs and lubricants. Hence, our research aimed to use hierarchical zeolites modified with transition metal ions, e.g., Nb, V, and Ta, in the epoxidation reaction of cyclohexene using microwave heating. Hierarchical zeolites are materials with secondary porosity, mainly in the mesoporous range, compared to microporous zeolites. In the course of the research, materials based on two commercial zeolites, with Faujasite (FAU) and Zeolite Socony Mobil-5 (ZSM-5) structures, were synthesized and characterized by various techniques, such as X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and low-temperature nitrogen adsorption/desorption isotherms. The materials obtained were then used in a cyclohexene epoxidation reaction, which was carried out as follows: the catalyst (0.02 g), cyclohexene (0.1 cm3), acetonitrile (5 cm3) and hydrogen peroxide (0.085 cm3) were placed in a suitable glass reaction vessel with a magnetic stirrer inside a microwave reactor. Reactions were carried out at 45 °C for 6 h (samples were taken every 1 h). The reaction mixtures were filtered to separate the liquid products from the solid catalyst and then transferred to 1.5 cm3 vials for chromatographic analysis. The characterization techniques confirmed the acquisition of additional secondary porosity while preserving the structure of the commercial zeolite (XRD and low-temperature nitrogen adsorption/desorption isotherms). The results of the activity of the niobium-modified hierarchical catalyst in the cyclohexene epoxidation reaction indicate that the conversion of cyclohexene after 6 h of running the process is about 70%. Cyclohexane-1,2-diol was obtained as the main product of the reaction (selectivity > 80%). In addition to this product, adipic acid, cyclohexanol, cyclohex-2-en-1-one, and 1,2-epoxycyclohexane were also obtained. Furthermore, in a blank test, no cyclohexene conversion was obtained after 6 h of reaction. Acknowledgments: The work was carried out within the project 'Advanced biocomposites for tomorrow's economy BIOG-NET', funded by the Foundation for Polish Science from the European Regional Development Fund (POIR.04.04.00-00-1792/18-00).
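
As a rough cross-check of the figures quoted above (treating the reported ~70% conversion and >80% selectivity as point estimates), the implied diol yield is approximately:

```latex
Y_{\text{diol}} \;=\; X_{\text{cyclohexene}} \times S_{\text{diol}} \;\gtrsim\; 0.70 \times 0.80 \;=\; 0.56 \;\;(\approx 56\%).
```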

Keywords: epoxidation, oxidation reactions, hierarchical zeolites, synthesis

Procedia PDF Downloads 60
382 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and slight deviations arising from scale differences still occur. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried over to a future civil aircraft whose size is quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a study of the geometric similarity of airfoil parameters and of surface mesh quality in CFD calculations is conducted to determine how well different parameterization methods cope with different airfoil scales. The research objects are three airfoil scales, covering the wing root and wingtip of a conventional civil aircraft and the wing root of a giant hybrid wing, each represented with three parameterization methods to compare the calculation differences between different airfoil sizes. In this study, the fixed conditions are the NACA 0012 profile, a Reynolds number of 10 million, zero angle of attack, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study uses different numbers of edge mesh divisions and the same bias factor in the CFD simulations. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to maintain the accuracy of the predicted aerodynamic performance of the wing. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data sets to sustain the accuracy of the airfoil's aerodynamic performance, which runs up against the limits of computer capacity. When using the B-spline curve method, the number of control points and the number of mesh divisions should be set appropriately to obtain higher accuracy; however, this balance cannot be defined directly and has to be found iteratively by adding and removing control points and divisions. Lastly, when using the CST method, a limited number of control points is found to be enough to accurately parameterize the larger-sized wing, and a high degree of accuracy and stability can be obtained even on a lower-performance computer.
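To illustrate the CST (class/shape function transformation) idea mentioned above, the sketch below builds an airfoil half-surface from a class function and a Bernstein-polynomial shape function. It is a generic, textbook-style implementation under the usual assumptions (N1 = 0.5, N2 = 1.0 for a round nose and sharp trailing edge); the coefficient values are arbitrary placeholders, not those used in the study.

```python
import numpy as np
from math import comb

def cst_surface(x, coeffs, n1=0.5, n2=1.0, dz_te=0.0):
    """Generic CST airfoil half-surface.

    x       : chordwise coordinates in [0, 1]
    coeffs  : Bernstein weights (more weights = more shape control)
    n1, n2  : class-function exponents (0.5, 1.0 -> round nose, sharp TE)
    dz_te   : trailing-edge thickness offset
    """
    x = np.asarray(x, dtype=float)
    n = len(coeffs) - 1
    class_fn = x**n1 * (1.0 - x)**n2
    shape_fn = sum(
        a * comb(n, i) * x**i * (1.0 - x)**(n - i)
        for i, a in enumerate(coeffs)
    )
    return class_fn * shape_fn + x * dz_te

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 101)
    upper = cst_surface(x, coeffs=[0.17, 0.16, 0.15, 0.14])   # placeholder weights
    lower = -cst_surface(x, coeffs=[0.17, 0.16, 0.15, 0.14])  # symmetric, NACA-0012-like
    print(f"max thickness/chord ≈ {(upper - lower).max():.3f}")
```

Scaling the resulting non-dimensional coordinates by the chord length is what keeps the parameterization geometrically similar across the three airfoil sizes.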

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 201
381 Scaling out Sustainable Land Use Systems in Colombia: Some Insights and Implications from Two Regional Case Studies

Authors: Martha Lilia Del Rio Duque, Michelle Bonatti, Katharina Loehr, Marcos Lana, Tatiana Rodriguez, Stefan Sieber

Abstract:

Nowadays, most agricultural practices can reduce the ability of ecosystems to provide goods and services. To enhance environmentally friendly food production and to maximize social and economic benefits, sustainable land use systems (SLUS) are one of the most critical strategies, increasingly promoted by donor organizations, international agencies, and policymakers. This raises the question of how SLUS can be scaled out to large-scale landscapes rather than remaining isolated experiments. As SLUS are context-specific strategies, the diffusion and replication of successful SLUS in Colombia requires the identification of the main factors that facilitate this scaling-out process. We applied a case study approach to investigate the scaling out of SLUS in the cocoa and livestock sectors within peacebuilding territories in Colombia, specifically in the Cesar and Caqueta regions. These two regions are contrasting, but both show a current trend of increasing land degradation: Caqueta is presently one of the most deforested departments in Colombia, and Cesar has some of the most degraded soils. Following a qualitative research approach, 19 semi-structured interviews and 2 focus groups were conducted with agroforestry experts in both regions to analyze (1) what a sustainable land use system in cocoa or livestock means, specifically in Caqueta or Cesar, and (2) the key elements along the following dimensions: biophysical, economic and profitability, market, social, and policy and institutions, which can explain how and why SLUS are replicated and spread among more producers. The interviews were coded and analyzed using MAXQDA to identify, analyze and report patterns (themes) within the data. As the results show, key themes, among which premium markets, solid regional markets and price stability, water availability and management, generational renewal, land use knowledge and diversification, and producer organization and certifications, are crucial to understanding how SLUS can have an impact across large-scale landscapes and how the scaling-out process can best be set up to succeed across different contexts. The analysis further reveals which key factors might affect SLUS efficiency.

Keywords: agroforestry, cocoa sector, Colombia, livestock sector, sustainable land use system

Procedia PDF Downloads 133
380 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Method (CFD-DEM) model has been developed to simulate the scour process; it directly models the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability. The modeling results reveal the criticality of particle impact for the assessment of scour depth, which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is key to managing the failure risk of bridge infrastructure.
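For readers unfamiliar with the coupling, the sketch below shows the kind of per-particle force balance a CFD-DEM solver resolves at each time step: gravity and buoyancy, a fluid drag force (here the Schiller-Naumann correlation), and a linear spring-dashpot normal contact force. It is a simplified, single-particle illustration with placeholder material properties and stiffness values, not the model developed in this study.

```python
import numpy as np

RHO_F, MU_F, G = 1000.0, 1.0e-3, 9.81       # water density, viscosity, gravity (SI)

def drag_force(d_p, u_fluid, u_particle):
    """Schiller-Naumann drag on a sphere of diameter d_p."""
    u_rel = np.asarray(u_fluid, float) - np.asarray(u_particle, float)
    speed = np.linalg.norm(u_rel)
    if speed == 0.0:
        return np.zeros(3)
    re = RHO_F * speed * d_p / MU_F
    cd = 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000.0 else 0.44
    area = np.pi * d_p**2 / 4.0
    return 0.5 * cd * RHO_F * area * speed * u_rel

def contact_force(overlap, rel_vel_n, k=1.0e4, c=5.0):
    """Linear spring-dashpot normal contact force (placeholder stiffness/damping)."""
    return k * overlap - c * rel_vel_n if overlap > 0.0 else 0.0

def particle_acceleration(d_p, rho_p, u_fluid, u_particle):
    """Net acceleration from gravity, buoyancy and drag (contacts omitted here)."""
    vol = np.pi * d_p**3 / 6.0
    mass = rho_p * vol
    f_gravity_buoyancy = np.array([0.0, 0.0, -(rho_p - RHO_F) * vol * G])
    return (f_gravity_buoyancy + drag_force(d_p, u_fluid, u_particle)) / mass

if __name__ == "__main__":
    # 1 mm sand-like grain in a 0.5 m/s horizontal flow, initially at rest (placeholders)
    a = particle_acceleration(1e-3, 2650.0, [0.5, 0.0, 0.0], [0.0, 0.0, 0.0])
    print("acceleration [m/s^2]:", np.round(a, 3))
```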

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 111
379 High Physical Properties of Biochar Issued from Cashew Nut Shell to Adsorb Mycotoxins (Aflatoxins and Ochratoxine A) and Its Effects on Toxigenic Molds

Authors: Abderahim Ahmadou, Alfredo Napoli, Noel Durand, Didier Montet

Abstract:

Biochar is a microporous, adsorbent solid carbon product obtained from the pyrolysis of various organic materials (biomass, agricultural waste). Biochar is distinguished from vegetable charcoal by its manufacturing methods. Biochar is used as an amendment in soils to give them favorable characteristics under certain conditions, i.e., absorbing water and releasing it slowly. Cashew nut shells from Mali are usually discarded on land by local processors or burnt as a means of waste management. The burning of this biomass poses serious socio-environmental problems, including greenhouse gas emissions and the accumulation of tars and soot on houses close to factories, leading to neighbor complaints. Some mycotoxins, such as aflatoxins, are carcinogenic compounds resulting from the secondary metabolism of molds that develop on plants in the field and during storage. They are found at high levels in some seeds and nuts in Africa. Ochratoxin A, another mycotoxin, is produced by various species of Aspergillus and Penicillium. Human exposure to Ochratoxin A can occur through consumption of contaminated food products, particularly contaminated grain, as well as coffee and wine grapes. We showed that cashew shell biochars produced at 400, 600 and 800 °C adsorbed aflatoxins (B1, B2, G1, G2) at 100% by filtration (rapid contact) as well as by stirring (long contact). The average percentage of adsorption of Ochratoxin A was 35% by filtration and 80% by stirring. The duration of the biochar-mycotoxin contact was a significant parameter. The effect of biochar was also tested on two strains of toxigenic molds: Aspergillus parasiticus (an aflatoxin producer) and Aspergillus carbonarius (an ochratoxin producer). The growth of the Aspergillus carbonarius strain was inhibited by up to 60% by the biochar produced at 600 °C, whereas the opposite effect was observed on Aspergillus parasiticus using the same biochar. In conclusion, we observed that biochar adsorbs the mycotoxins aflatoxins and Ochratoxin A to different degrees: 100% adsorption of aflatoxins under all conditions (filtration and stirring), while adsorption of Ochratoxin A varied depending on the type of biochar and the experimental conditions (35% by filtration and 85% by stirring). The effects of the biochar produced at 600 °C on the toxigenic molds Aspergillus parasiticus and Aspergillus carbonarius varied according to the experimental conditions and the strains: we observed opposite effects on growth, with inhibition of Aspergillus carbonarius of up to 60% and stimulated growth of Aspergillus parasiticus.
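As an aside, adsorption percentages such as those reported above are conventionally computed from the mycotoxin concentration before and after contact with the biochar. The sketch below shows this simple calculation; the concentration values in the example are illustrative placeholders, not measurements from this work.

```python
def adsorption_percent(c_initial: float, c_final: float) -> float:
    """Percentage of a mycotoxin removed from solution after contact with biochar."""
    return 100.0 * (c_initial - c_final) / c_initial

if __name__ == "__main__":
    # Placeholder concentrations in micrograms per litre (not measured data)
    print(f"Aflatoxin B1: {adsorption_percent(10.0, 0.0):.0f}% adsorbed")
    print(f"Ochratoxin A (stirring): {adsorption_percent(10.0, 2.0):.0f}% adsorbed")
```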

Keywords: biochar, cashew nut shell, mycotoxins, toxicogenic molds

Procedia PDF Downloads 151
378 Dual-Phase High Entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅) BxCy Ceramics Produced by Spark Plasma Sintering

Authors: Ana-Carolina Feltrin, Daniel Hedman, Farid Akhtar

Abstract:

High entropy ceramic (HEC) materials are characterized by their compositional disorder, with different metallic element atoms occupying the cation positions and non-metal elements occupying the anion positions. Several studies have focused on the processing and characterization of high entropy carbides and high entropy borides, as these HECs present interesting mechanical and chemical properties. A few studies have been published on HECs containing two non-metallic elements in the composition. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics with different amounts of x and y, (0.25 HfC + 0.25 ZrC + 0.25 VC + 0.25 TiB₂), (0.25 HfC + 0.25 ZrC + 0.25 VB₂ + 0.25 TiB₂) and (0.25 HfC + 0.25 ZrB₂ + 0.25 VB₂ + 0.25 TiB₂), were sintered from boride and carbide precursor powders using SPS at 2000°C with a holding time of 10 min, a uniaxial pressure of 50 MPa and an Ar atmosphere. The sintered specimens formed two HEC phases: a Zr-Hf rich FCC phase and a Ti-V HCP phase, and both phases contained all the metallic elements in proportions from 5 to 50 at%. Phase quantification analysis of the XRD data revealed that the molar amount of the hexagonal phase increased with an increased mole fraction of borides in the starting powders, whereas the cubic FCC phase increased with increased carbide in the starting powders. SPS-consolidated (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BC₀.₅ and (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B₁.₅C₀.₂₅ had 94.74% and 88.56% relative density, respectively. (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B₀.₅C₀.₇₅ presented the highest relative density of 95.99%, with a Vickers hardness of 26.58±1.2 GPa for the boride phase and 18.29±0.8 GPa for the carbide phase, which exceeded the hardness values reported in the literature for high entropy ceramics. The SPS-sintered specimens containing lower boron and higher carbon presented superior properties even though the metallic composition in each phase was similar to the other compositions investigated. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics were successfully fabricated as a boride-carbide solid solution, and the amounts of boron and carbon were shown to influence the phase fractions, the hardness of the phases, and the density of the consolidated HECs. The microstructure and phase formation were highly dependent on the amount of non-metallic elements in the composition, and not only on the molar ratio between metals, when producing high entropy ceramics with more than one anion in the sublattice. These findings show the importance of further studies on the optimization of the ratio between C and B for further improvements in the properties of dual-phase high entropy ceramics.
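For context, Vickers hardness values such as those quoted above are typically obtained from the indentation load and the mean diagonal of the indent via the standard Vickers relation, HV ≈ 1.8544·F/d² in kgf/mm², then converted to GPa. The sketch below applies that relation; the load and diagonal in the example are illustrative placeholders, not measurements from this study.

```python
KGF_PER_MM2_TO_GPA = 0.00980665  # 1 kgf/mm^2 = 9.80665 MPa

def vickers_hardness_gpa(load_kgf: float, mean_diagonal_mm: float) -> float:
    """Vickers hardness from indentation load and mean indent diagonal."""
    hv_kgf_mm2 = 1.8544 * load_kgf / mean_diagonal_mm**2
    return hv_kgf_mm2 * KGF_PER_MM2_TO_GPA

if __name__ == "__main__":
    # Placeholder indent: 1 kgf load, 26 micrometre mean diagonal
    print(f"HV ≈ {vickers_hardness_gpa(1.0, 0.026):.1f} GPa")
```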

Keywords: high-entropy ceramics, borides, carbides, dual-phase

Procedia PDF Downloads 152
377 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review

Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari

Abstract:

The main driving forces behind the growing movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation through various methods, such as prescriptive methods or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as use by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, efforts have been made in recent years to develop design decision support systems, which are often based on artificial intelligence. Numerous needs and steps for designing and developing a Decision Support System (DSS) that complies with the early stages of energy-efficient architectural design, consisting of combinations of different methods in an integrated package, have been listed in the literature. While various review studies have been conducted on each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and on their integration with specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or to the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as Science Direct and Google Scholar are searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed. The research shows a movement towards Multi-Level of Development (MOD) models that are well combined with the early stages of integrated design (schematic design stage and design development stage), are heuristic, hybrid and meta-simulation-based, and rely on big real-world data (such as building energy management system data or web data). Obtaining, using and combining these data with simulation data to create models that handle greater uncertainty, are more dynamic and are more sensitive to context and culture, as well as models that can generate economical, energy-efficient design scenarios using local data (to be better harmonized with circular economy principles), are important research areas in this field. The results of this study are a roadmap for researchers and developers of these tools.

Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency

Procedia PDF Downloads 143
376 Oligarchic Transitions within the Tunisian Autocratic Authoritarian System and the Struggle for Democratic Transformation: Before and beyond the 2010 Jasmine Revolution

Authors: M. Moncef Khaddar

Abstract:

This paper focuses mainly on a contextualized understanding of ‘autocratic authoritarianism’ in Tunisia without approaching its peculiarities in reference to the ideal type of capitalist-liberal democracy, but rather analysing it as a Tunisian ‘civilian dictatorship’. This is reminiscent, to some extent, of French ‘colonial authoritarianism’, in parallel with the legacy of traditional formal monarchic absolutism. The Tunisian autocratic political system is here construed as a state-manufactured nationalist-populist authoritarianism associated with a de facto presidential single party, two successive autocratic presidents and their subservient autocratic elites, who ruled with an iron fist the decolonized ‘liberated nation’ that came to be subjected to large-scale oppression and domination under the new Tunisian Republic. The diachronic survey of Tunisia’s autocratic authoritarian system covers the early years of autocracy, under the first autocratic president Bourguiba, 1957-1987, as well as the different stages of its consolidation into a police-security state under the second autocratic president, Ben Ali, 1987-2011. Comparing the policies of authoritarian regimes, within what is identified synchronically as a bi-cephalous autocratic system, entails an in-depth study of the two autocrats, who ruled Tunisia for more than half a century, as modern adaptable autocrats. This is further supported by an exploration of the ruling authoritarian autocratic elites who played a decisive role in shaping the undemocratic state-society relations under the 1st and 2nd President and left an indelible mark, structurally and ideologically, on Tunisian polity. Emphasis is also put on the members of the governmental and state-party institutions and apparatuses that kept circulating and recycling from one authoritarian regime to another, and from the first ‘founding’ autocrat to his putschist successor, who consolidated authoritarian stability, political continuity and autocratic governance. The reconfiguration of Tunisian political life in the post-autocratic era, since 2011, will be analysed. This will be scrutinized especially in light of the unexpected return of many high-profile figures and old guards of the autocratic authoritarian apparatchiks. How and why were these public figures from an autocratic era able to return in a supposedly post-revolutionary moment? Finally, while some continue to celebrate the putative exceptional success of ‘democratic transition’ in Tunisia, within a context of ‘unfinished revolution’, others remain perplexed in the face of a creeping ‘oligarchic transition’ to a ‘hybrid regime’, characterized by the elites’ reformist tradition rather than by a genuine bottom-up democratic ‘change’. The latter falls far short of answering ordinary people’s 2010 uprisings and their aspirations for ‘Dignity, Liberty and Social Justice’.

Keywords: authoritarianism, autocracy, democratization, democracy, populism, transition, Tunisia

Procedia PDF Downloads 123
375 The Impact of HKUST-1 Metal-Organic Framework Pretreatment on Dynamic Acetaldehyde Adsorption

Authors: M. François, L. Sigot, C. Vallières

Abstract:

Volatile Organic Compounds (VOCs) are a real health issue, particularly in domestic indoor environments. Among these VOCs, acetaldehyde is frequently monitored in dwellings' air, especially due to smoking and spontaneous emissions from new wall and floor coverings. It is responsible for respiratory complaints and is classified as possibly carcinogenic to humans. Adsorption processes are commonly used to remove VOCs from the air. Metal-Organic Frameworks (MOFs) are a promising type of material for high adsorption performance. These hybrid porous materials, composed of inorganic metal clusters and organic ligands, are attractive thanks to their high porosity and surface area. HKUST-1 (also referred to as MOF-199) is a copper-based MOF with the formula [Cu₃(BTC)₂(H₂O)₃]n (BTC = benzene-1,3,5-tricarboxylate) and exhibits unsaturated metal sites that can act as attractive adsorption sites. The objective of this study is to investigate the impact of HKUST-1 pretreatment on acetaldehyde adsorption. Dynamic adsorption experiments were thus conducted in a 1 cm diameter glass column packed with a 2 cm high MOF bed. The MOF was sieved to 630 µm - 1 mm. The feed gas (Co = 460 ppmv ± 5 ppmv) was obtained by diluting a 1000 ppmv acetaldehyde gas cylinder in air. The gas flow rate was set to 0.7 L/min (to guarantee a suitable linear velocity). The acetaldehyde concentration was monitored online by gas chromatography coupled with a flame ionization detector (GC-FID). The breakthrough curves should make it possible to understand the interactions between the MOF and the pollutant as well as the impact of HKUST-1 humidity on the adsorption process. Consequently, different MOF water content conditions were tested, from a dry material with 7% water content (dark blue color) to a water-saturated state with approximately 35% water content (turquoise color). The rough material – without any pretreatment – containing 30% water serves as a reference. First, conclusions can be drawn from the comparison of the evolution of the ratio of the column outlet concentration (C) to the inlet concentration (Co) as a function of time for the different HKUST-1 pretreatments. The shape of the breakthrough curves is significantly different. The saturation of the rough material is slower (20 h to reach saturation) than that of the dried material (2 h). However, the breakthrough time, defined for C/Co = 10%, appears earlier for the rough material (0.75 h) than for the dried HKUST-1 (1.4 h). Another notable difference is the shape of the curve before the 10% breakthrough: an abrupt increase in the outlet concentration is observed for the material with the lower humidity, compared with a smooth increase for the rough material. Thus, the water content plays a significant role in the breakthrough kinetics. This study aims to understand what can explain the shape of the breakthrough curves associated with the HKUST-1 pretreatments and which mechanisms take place in the adsorption process between the MOF, the pollutant, and water.
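To make the breakthrough-time definition above concrete, the sketch below interpolates the time at which the outlet-to-inlet concentration ratio C/Co first reaches a chosen threshold (10% here) from a sampled breakthrough curve. The time series used in the example is invented for illustration only and is not data from this study.

```python
import numpy as np

def breakthrough_time(times_h, c_over_c0, threshold=0.10):
    """Linearly interpolate the first time C/Co crosses the threshold."""
    t = np.asarray(times_h, dtype=float)
    r = np.asarray(c_over_c0, dtype=float)
    above = np.nonzero(r >= threshold)[0]
    if len(above) == 0:
        return None                         # breakthrough not reached
    i = above[0]
    if i == 0:
        return t[0]
    # linear interpolation between the last point below and the first point above
    frac = (threshold - r[i - 1]) / (r[i] - r[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

if __name__ == "__main__":
    # Invented sampled breakthrough curve (hours, C/Co)
    times = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
    ratio = [0.00, 0.02, 0.05, 0.12, 0.45, 0.90]
    print(f"breakthrough time (10%) ≈ {breakthrough_time(times, ratio):.2f} h")
```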

Keywords: acetaldehyde, dynamic adsorption, HKUST-1, pretreatment influence

Procedia PDF Downloads 217
374 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines how to develop a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study where the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of a solar photovoltaic (PV) system were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions, on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors compared to FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors that are slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate solutions that are questionable, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models highly depends on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values does not improve the GWP and LCOE anymore. In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors that are just slightly higher than those of DAFIS. The inherent complexity of a Life Cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that could be used during the initial design of a system.
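To give a flavour of the Sugeno-type inference used in such surrogate models, the sketch below evaluates two hand-written rules with triangular membership functions and a weighted-average defuzzification (a zero-order Sugeno scheme). The membership-function breakpoints, rule consequents, and input values are arbitrary illustrative assumptions, not the calibrated model from this study.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno_gwp(irradiation_kwh_m2, module_eff):
    """Zero-order Sugeno FIS: two toy rules mapping inputs to a GWP estimate.

    All breakpoints and consequents below are illustrative placeholders.
    """
    # Rule 1: IF irradiation is LOW  AND efficiency is LOW  THEN GWP = 80 (g CO2-eq/kWh)
    # Rule 2: IF irradiation is HIGH AND efficiency is HIGH THEN GWP = 30
    w1 = min(tri(irradiation_kwh_m2, 800, 1000, 1400), tri(module_eff, 0.10, 0.13, 0.17))
    w2 = min(tri(irradiation_kwh_m2, 1200, 1800, 2400), tri(module_eff, 0.15, 0.20, 0.25))
    z1, z2 = 80.0, 30.0
    if w1 + w2 == 0.0:
        return None                          # inputs outside the rule coverage
    return (w1 * z1 + w2 * z2) / (w1 + w2)   # weighted-average defuzzification

if __name__ == "__main__":
    print(f"estimated GWP ≈ {sugeno_gwp(1500, 0.18):.1f} g CO2-eq/kWh")
```

A Mamdani-type system would instead use fuzzy output sets and a centroid defuzzification, while ANFIS tunes the same kind of rule parameters against calibration data.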

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 142
373 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied with one of two mainstream approaches, i.e. either the multiplier approach or the value chain approach. Consequently, the comparability of empirical results is limited owing to the use of different methodological approaches in the literature. Furthermore, it is often unclear on which criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by the two most frequent methodological approaches to the valuation of the economic effects of cultural heritage at the macroeconomic level, i.e. the multiplier approach and the value chain approach. We show that the multiplier approach provides a systematic, theory-based view of economic impacts but requires more data and analysis, whereas the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted, while not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complements. Consequently, a direct comparison of the estimated impacts is not possible and should not be made because of the different scope. To illustrate the difference in the impact assessment of cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added and employment. The empirical results clearly show that the estimation of the economic impact of a sector using the multiplier approach is more conservative, while the estimates based on value added capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the “narrow” heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
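The multiplier arithmetic quoted above (each euro of direct activity generating an additional 0.14 euros of indirect and 0.44 euros of induced effects) can be written as a one-line calculation. The sketch below does exactly that; the direct turnover figure is a placeholder, while the indirect and induced coefficients are simply the ones stated in the abstract.

```python
def total_impact(direct: float, indirect_coef: float = 0.14, induced_coef: float = 0.44) -> dict:
    """Decompose the total economic impact implied by simple output multipliers."""
    indirect = direct * indirect_coef
    induced = direct * induced_coef
    return {
        "direct": direct,
        "indirect": indirect,
        "induced": induced,
        "total": direct + indirect + induced,   # implied multiplier of 1.58
    }

if __name__ == "__main__":
    # Placeholder: 100 million EUR of direct cultural-heritage turnover
    for name, value in total_impact(100.0).items():
        print(f"{name:>8}: {value:6.1f} million EUR")
```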

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia

Procedia PDF Downloads 58
372 Bi-Component Particle Segregation Studies in a Spiral Concentrator Using Experimental and CFD Techniques

Authors: Prudhvinath Reddy Ankireddy, Narasimha Mangadoddy

Abstract:

Spiral concentrators are commonly used in various industries, including mineral and coal processing, to efficiently separate materials based on their density and size. In these concentrators, a mixture of solid particles and fluid (usually water) is introduced as feed at the top of a spiral channel. As the mixture flows down the spiral, centrifugal and gravitational forces act on the particles, causing them to stratify based on their density and size. Spiral flows exhibit complex fluid dynamics, and the interactions involve multiple phases and components in the process. Understanding the behavior of these phases within the spiral concentrator is crucial for achieving efficient separation. In this work, an experimental bi-component particle interaction study is conducted using magnetite (higher density) and silica (lower density) processed in the spiral concentrator in different proportions. The observed separation reveals that denser particles accumulate towards the inner region of the spiral trough, while a significant concentration of lighter particles is found close to the outer edge. The 5th turn of the spiral trough is partitioned into five zones to achieve a comprehensive distribution analysis of bi-component particle segregation. Samples are then gathered from these individual streams using an in-house sample collector, and subsequent analysis is conducted to assess component segregation. Along the trough, the concentration of coarser particles declines, accompanied by an increase in the concentration of lighter particles. The segregation pattern indicates that the heavier coarse component accumulates in the inner zone, whereas the lighter fine component collects in the outer zone. The middle zone primarily consists of heavier fine particles and lighter coarse particles. The zone-wise results reveal that a significant fraction of the segregation occurs in the inner and middle zones. Finer magnetite and silica particles predominantly accumulate in the outer zones, which show the smallest fraction of segregation. Additionally, numerical simulations are carried out using a computational fluid dynamics (CFD) model based on the volume of fluid (VOF) approach and incorporating the Reynolds stress (RSM) turbulence model. The discrete phase model (DPM) is employed for particle tracking, providing insight into the segregation of magnetite and silica along the spiral trough.
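As a back-of-the-envelope illustration of why the denser component reports to the inner region, the sketch below compares Stokes settling velocities of magnetite and silica particles of equal size in water. It is a simplified free-settling estimate with an assumed particle size and typical mineral densities; it ignores the centrifugal and shear forces that a spiral actually exploits, so it is only indicative.

```python
G, RHO_WATER, MU_WATER = 9.81, 1000.0, 1.0e-3   # SI units

def stokes_velocity(diameter_m: float, rho_particle: float) -> float:
    """Stokes terminal settling velocity of a sphere in water (laminar regime)."""
    return (rho_particle - RHO_WATER) * G * diameter_m**2 / (18.0 * MU_WATER)

if __name__ == "__main__":
    d = 50e-6                                   # assumed 50 micrometre particles
    v_magnetite = stokes_velocity(d, 5200.0)    # ~5200 kg/m^3 for magnetite
    v_silica = stokes_velocity(d, 2650.0)       # ~2650 kg/m^3 for quartz/silica
    print(f"magnetite: {v_magnetite*1000:.2f} mm/s, silica: {v_silica*1000:.2f} mm/s")
    print(f"settling-velocity ratio ≈ {v_magnetite / v_silica:.1f}")
```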

Keywords: spiral concentrator, bi-component particle segregation, computational fluid dynamics, discrete phase model

Procedia PDF Downloads 44