Search results for: combined mathematical MCDM
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4236

126 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers

Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek

Abstract:

Flow through granular materials is important to a vast array of industries, for instance in the construction industry, where granular layers are used for bulkheads and isolators, in chemical engineering and catalytic reactors, where the large surfaces of packed granular beds intensify chemical reactions, or in energy production systems, where granulates are promising materials for heat storage and as heat transfer media. Despite the common usage of granulates and the extensive research performed in this field, the phenomena occurring between granular solid elements, or between solids and fluid, are still not fully understood. In the present work we analyze the heat exchange process between the flowing medium (gas, liquid) and the solid material inside granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of the particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution, and the effective characteristic dimension (total volume or surface area). We analyze to what extent alteration of these parameters influences the flow characteristics (turbulent intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fibre probes) inside the layers is practically impossible, whereas the use of probes (e.g., thermocouples, Pitot tubes) requires drilling holes in the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical or numerical analyses of flow inside granulates seem crucial. Application of discrete element methods in combination with classical finite volume/finite difference approaches is problematic, as the mesh generation process for a complex granular material can be very arduous. A good alternative for the simulation of flow in complex domains is the immersed boundary-volume penalization (IB-VP) method, in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on the application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows, flows in wavy channels and wavy pipes, and flows over obstacles of various shapes. In these cases the formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of the flows in dense granular layers with elements distributed in a deterministic, regular manner and on validation of the results obtained using the LES-IB method against a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, the number of elements and their distribution have a huge impact on the obtained results. Ordering of the granular elements (or lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.
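As a rough illustration of the volume-penalization idea summarized above (a sketch under assumed values, not the SAILOR implementation), the following Python snippet adds a Brinkman-type source term -(chi/eta)(T - T_solid) to a 1D advection-diffusion update on a plain Cartesian grid; the mask, grid, and all parameter values are assumptions made only for this example.

import numpy as np

# Minimal 1D sketch of the volume-penalization idea for the energy equation:
# a solid grain on a plain Cartesian grid is represented by a mask chi, and a
# source term -(chi/eta)*(T - T_solid) drives the fluid temperature toward the
# solid temperature without any body-fitted mesh. All values are illustrative.
nx, L = 400, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx)
c, alpha = 1.0, 1e-3          # advection speed and thermal diffusivity (assumed)
eta = 1e-3                    # penalization parameter (small -> stiff forcing)
dt = 2e-4
T = np.zeros(nx)              # incoming fluid at T = 0
T_solid = 1.0                 # hot grain
chi = ((x > 0.45) & (x < 0.55)).astype(float)   # mask of one granular element

for _ in range(5000):
    dTdx = (T - np.roll(T, 1)) / dx                               # first-order upwind (c > 0)
    d2Tdx2 = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
    T = T + dt * (-c * dTdx + alpha * d2Tdx2 - (chi / eta) * (T - T_solid))
    T[0] = 0.0                                                    # inflow boundary

# Downstream of the grain the fluid has picked up heat from the penalized region.
print(T[int(0.7 * nx)])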

Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations

Procedia PDF Downloads 104
125 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay

Authors: Anderson Braga Mendes

Abstract:

The Itaipu Hydroelectric Power Plant is located on the Paraná River, which forms the natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world, providing clean and renewable electrical energy for 17% of Brazil's and 76% of Paraguay's supply. The plant started generating in 1984. It counts on 20 Francis turbines and has an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant had generated a cumulative total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of the Itaipu Power Plant, from its upstream stretch (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase/decrease in sediment yield, using data from 2001 to 2016. Such data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first assessed by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the following 44 years, aiming to confer more precision upon the estimates on the basis of more up-to-date data sets. From the analysis of each monitoring station, strong increasing tendencies in sediment yield over the last 14 years were clearly noticed, mainly in the Iguatemi, Ivaí, São Francisco Falso and Carapá Rivers, the latter situated in Paraguay, whereas the others lie entirely in Brazilian territory. Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work. These programs closely follow the Borland and Miller methodology (the empirical area-reduction method). The soundest of the five scenarios under analysis indicated a lifespan estimate of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus lending credibility to this most recent lifespan estimate.
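For orientation, a toy mass-balance check of the kind mentioned above can be sketched as follows; the numbers are placeholders chosen only to be roughly consistent with the "about 2% silted after roughly three decades" figure, not data from the study, and the actual lifespan estimate relies on the Borland and Miller area-reduction curves to locate where the sediment deposits, rather than on a simple whole-volume fill.

# Toy reservoir-silting mass balance (illustrative; every value below is an assumed placeholder).
def silted_fraction(annual_sediment_m3, trap_efficiency, years, reservoir_volume_m3):
    """Fraction of the reservoir volume occupied by sediment after a given number of years."""
    deposited = annual_sediment_m3 * trap_efficiency * years
    return deposited / reservoir_volume_m3

V = 29e9      # reservoir volume in m3 (assumed)
Qs = 20e6     # mean annual sediment inflow in m3/year (assumed)
TE = 0.95     # trap efficiency of the reservoir (assumed)
print(silted_fraction(Qs, TE, 31, V))   # ~0.02, i.e. about 2% silted after ~31 years
# A lifespan study then distributes this deposit over elevation with the Borland and Miller
# area-reduction curves to find when sediment reaches critical levels (e.g., the intakes).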

Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance

Procedia PDF Downloads 247
124 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels

Authors: Lorenzo Petrucci

Abstract:

This paper is intended to give insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated on a dairy farm. The off-grid plant analyzed here comprises a dual-fuel genset as well as electrical and thermal storage equipment and an adsorption machine. The loads are the different apparatus used on the dairy farm, a household where the workers live, and a small electric vehicle whose batteries can also be used as a power source in case of emergency. The insertion of an adsorption machine in the plant is mainly justified by the abundance of thermal energy and the simultaneous high cooling demand associated with the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads which cannot tolerate an interrupted supply of power over time. As a consequence, a photovoltaic-thermal (PVT) panel is included in the plant and is tasked with providing energy independently of potentially disruptive events such as engine malfunctioning or scarce and unstable supplies of fuel. To manage the plant efficiently, an energy dispatch strategy is created in order to control the flow of energy between the power sources and the thermal and electrical storage. In this article we elaborate models of the equipment and, from these models, we extract parameters useful for building load-dependent profiles of the prime movers and storage efficiencies. We show that under reasonable assumptions the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized 25% above the total electrical peak demand operates below the minimum acceptable load threshold 65% of the time. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors. In this way, the excess electric energy generated can be transformed into useful heat. The combination of PVT and electrical storage to support the prioritized loads in an emergency scenario is evaluated on two different days of the year, those having the lowest and highest irradiation values, respectively. The results show that the renewable energy component of the plant can successfully sustain the prioritized loads, and only during the day with very low irradiation levels does it also need the support of the EV's battery. Finally, we show that the adsorption machine can reduce the ice-builder and air-conditioning energy consumption by 40%.
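To make the dump-load logic concrete, here is a minimal rule-based dispatch sketch; the threshold values, power limits, and function signature are assumptions made for illustration, not the dispatch strategy implemented in the study.

# Minimal rule-based dispatch sketch (illustrative; all thresholds and limits are assumed values).
def dispatch(load_kw, pv_kw, soc, genset_rated_kw,
             genset_min_frac=0.35, soc_min=0.2, soc_max=0.95,
             batt_max_kw=20.0):
    """Return (genset_kw, battery_kw, dump_kw); battery_kw > 0 means discharging."""
    net = load_kw - pv_kw                                  # residual load after the PVT contribution
    if net <= 0:
        # PVT surplus: charge the battery as far as possible, divert the rest to dump resistors
        charge = min(-net, batt_max_kw if soc < soc_max else 0.0)
        return 0.0, -charge, -net - charge
    if soc > soc_min:
        # prioritized loads are covered by storage; the genset stays off
        return 0.0, min(net, batt_max_kw), 0.0
    # storage depleted: the genset must run, but never below its minimum acceptable loading
    genset_kw = max(net, genset_min_frac * genset_rated_kw)
    dump_kw = genset_kw - net                              # excess burned in dump resistors as heat
    return genset_kw, 0.0, dump_kw

print(dispatch(load_kw=8.0, pv_kw=2.0, soc=0.15, genset_rated_kw=40.0))
# -> (14.0, 0.0, 8.0): the genset runs at 35% of its rating and 8 kW goes to the dump loads.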

Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration

Procedia PDF Downloads 154
123 Operation Cycle Model of ASz62IR Radial Aircraft Engine

Authors: M. Duk, L. Grabowski, P. Magryta

Abstract:

Environmental impact is one of today's very important issues relating to air transport. Nowadays there are no emission standards for the turbine and piston engines used in air transport. However, it should be noted that the environmental effect, in the form of exhaust gases from aircraft engines, should be as small as possible. For this purpose, R&D centers often use special software to simulate and estimate the negative effects of the engine working process. Within the cooperation between the Lublin University of Technology and the Polish aviation company WSK "PZL-KALISZ" S.A., aimed at achieving more effective operation of the ASz62IR engine, one such tool has been used. The AVL Boost software makes it possible to perform 1D simulations of the combustion process of piston engines. The ASz62IR is a nine-cylinder aircraft engine in a radial configuration. In order to analyze the impact of its working process on the environment, a mathematical model was built in the AVL Boost software. This model contains, among other things, a model of the operation cycle of the cylinders. This submodel was based on the volume change in the combustion chamber according to the reciprocating movement of a piston. The simplifying assumption that all of the pistons move identically was made. The changes in cylinder volume during an operating cycle were specified. Those changes are important for determining the energy balance of a cylinder in an internal combustion engine, which is fundamental for a model of the operating cycle. The calculations of the cylinder thermodynamic state were based on the first law of thermodynamics. The change of mass in the cylinder was calculated from the sum of the inflowing and outflowing masses, and the balance included the cylinder internal energy, the heat from the fuel, heat losses, the mass in the cylinder, cylinder pressure and volume, blowdown enthalpy, evaporation heat, etc. The model assumed that the amount of heat released in the combustion process is calculated from the pace of combustion, using the Vibe model. For gas exchange, it was also important to consider heat transfer in the inlet and outlet channels, because the values there are much higher than for flow in a straight pipe. This results from the high values of the heat exchange coefficients and temperature coefficients near the valves and valve seats. A modified Zapf model of heat exchange was used. To use the model with flight scenarios, the impact of flight altitude on engine performance has been analyzed. It was assumed that the pressure and temperature at the inlet and outlet correspond to the values resulting from the International Standard Atmosphere (ISA) model. Combining this operation cycle model with the other submodels of the ASz62IR engine, a full analysis of the performance of the engine under ISA conditions can be made. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under
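For reference, the Vibe (Wiebe) function mentioned above is commonly written as x(θ) = 1 − exp[−a((θ − θ_soc)/Δθ)^(m+1)]; the short sketch below evaluates it with illustrative parameter values that are assumptions of this example, not the values calibrated for the ASz62IR model.

import numpy as np

# Vibe (Wiebe) mass-fraction-burned function, the usual form of the model named above.
def vibe_burn_fraction(theta, theta_soc, delta_theta, a=6.9, m=2.0):
    """theta: crank angle [deg]; theta_soc: start of combustion; delta_theta: combustion duration."""
    z = np.clip((theta - theta_soc) / delta_theta, 0.0, 1.0)
    return 1.0 - np.exp(-a * z ** (m + 1))

# Rate of heat release dQ/dtheta for a total released heat q_total (all values assumed):
def vibe_rohr(theta, q_total, theta_soc, delta_theta, a=6.9, m=2.0):
    z = np.clip((theta - theta_soc) / delta_theta, 0.0, 1.0)
    return q_total * a * (m + 1) / delta_theta * z**m * np.exp(-a * z ** (m + 1))

theta = np.linspace(-30.0, 90.0, 121)                          # crank angle, deg
mfb = vibe_burn_fraction(theta, theta_soc=-10.0, delta_theta=60.0)
print(mfb[-1])                                                 # ~1.0 once combustion is complete
print(vibe_rohr(theta, 1500.0, -10.0, 60.0).max())             # peak rate of heat release (J/deg, assumed q_total)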

Keywords: aviation propulsion, AVL Boost, engine model, operation cycle, aircraft engine

Procedia PDF Downloads 265
122 Kuwait Environmental Remediation Program: Fresh Groundwater Risk Assessment from Tarcrete Material across the Raudhatain and Sabriyah Oil Fields, North Kuwait

Authors: Nada Al-Qallaf, Aisha Al-Barood, Djamel Lekmine, Srinivasan Vedhapuri

Abstract:

Kuwait Oil Company (KOC), under the supervision of the Kuwait National Focal Point (KNFP), is planning to remediate 26 million m3 of oil-contaminated soil in the oil fields of Kuwait, a direct and indirect fallout of the Gulf War of 1990-1991. This project is funded by the United Nations Compensation Commission (UNCC) under the Kuwait Environmental Remediation Program (KERP). Oil contamination of the soil occurred due to the destruction of the oil wells, which spilled crude oil across the land surface and created 'oil lakes' in low-lying land. Aerial fall-out from oil spray and combustion products from the oil fires combined with the sand and gravel on the ground surface to form a layer of hardened 'tarcrete'. The unique fresh groundwater lenses present in the Raudhatain and Sabriyah subsurface areas were impacted by the discharge and/or spills of dissolved petroleum constituents. These fresh groundwater aquifers were used for drinking water purposes until 1990, prior to the invasion. The contamination significantly damaged and altered the landscape, ecology and habitat of the flora and fauna of the Kuwait Desert. Under KERP, KOC is fully responsible for the planning and execution of the remediation and restoration projects in the KOC oil fields. After the initial recommendation of the UNCC to construct engineered landfills for the containment and disposal of heavily contaminated soils, two landfills were constructed, one in North Kuwait and another in South East Kuwait, with capacities of 1.7 million m3 and 0.5 million m3, respectively. KOC further developed the Total Remediation Strategy (TRS) in conjunction with KNFP and obtained UNCC approval. The TRS comprises elements such as a Risk Based Approach (RBA), bioremediation of soils with low contamination levels, remediation treatment technologies, sludge disposal via beneficial recycling or re-use, and engineered landfills for the containment of untreatable materials. Risk-based assessment is a key component for avoiding unnecessary remedial works where it can be demonstrated that human health and the environment are sufficiently protected in the absence of active remediation. This study addresses the risks posed by tarcrete materials, spread over an area of 20 km2, to the fresh groundwater lenses/catchment located beneath the Sabriyah and Raudhatain oil fields in North Kuwait. KOC's primary objective is to provide justification for using the RBA, to support a case with the Kuwaiti regulators for leaving the tarcrete material in place rather than undertaking large-scale removal and remediation. This is motivated by the large-scale coverage of the tarcrete in the oil fields and the perception that the residual contamination associated with this source lies in an environmentally sensitive area, essentially a groundwater resource. As part of this assessment, a conceptual site model (CSM) was developed and complete risk-based fate-and-transport modelling was carried out, including the derivation of site-specific assessment criteria (SSAC) and the quantification of the risk posed by tarcrete-impacted areas to the identified water resource receptors. The outcome of this assessment was that the residual tarcrete deposits across the site area should not create risks to the fresh groundwater resources, and that remedial action to remove and remediate the surficial tarcrete deposits is not warranted.

Keywords: conceptual site model, fresh groundwater, oil-contaminated soil, tarcrete, risk based assessment

Procedia PDF Downloads 153
121 Space Tourism Pricing Model Revolution from Time Independent Model to Time-Space Model

Authors: Kang Lin Peng

Abstract:

Space tourism emerged in 2001 and became famous in 2021, following the development of space technology. The space market is distorted because of excess demand. Space tourism is currently rare and extremely expensive, with luxury-product pricing; it is a seller's market in which consumers cannot bargain. Spaceship companies such as Virgin Galactic, Blue Origin, and SpaceX have charged space tourism prices from 200 thousand to 55 million dollars, depending on the various heights reached in space. There should be a reasonable price set on a fair basis. This study aims to derive a spacetime pricing model, which is different from the general pricing model used on the earth's surface. We apply general relativity theory to deduce the mathematical formula for the space tourism pricing model, which covers the traditional time-independent model. In the future, the price of space travel will be different from that of current flight travel when space travel is measured in light-year units. The pricing of general commodities mainly considers the general equilibrium of supply and demand. A pricing model that considers risks and returns with a dependent time variable is acceptable when commodities are on the earth's surface, in what is called flat spacetime. Current economic theories, based on an independent time scale in flat spacetime, do not consider the curvature of spacetime. Current flight services flying at heights of 6, 12, and 19 kilometers are charged with a pricing model that measures the time coordinate independently. However, space tourism involves flying at heights of 100 to 550 kilometers, which enlarges the spacetime curvature involved, meaning that tourists escape from the zero curvature of the earth's surface to the large curvature of space. Different spacetime spans should therefore be considered in the pricing model of space travel, to echo general relativity theory. Intuitively, this spacetime commodity needs to account for the change of spacetime curvature from the earth to space. We can assume a value for each unit of spacetime curvature, corresponding to the gradient change of the Ricci or energy-momentum tensor. The amount to be charged is then obtained by integrating over the spacetime from the earth to space. The concept is to add a price component p corresponding to general relativity theory. In the flat-spacetime limit, the space travel pricing model degenerates into a time-independent model, which is the model of traditional commodity pricing. The contribution is that deriving the space tourism pricing model is a breakthrough on both philosophical and practical issues for space travel. The results of the space tourism pricing model extend the traditional time-independent, flat-spacetime model. A pricing model that embeds spacetime, as in general relativity theory, can better reflect the rationality and accuracy of space travel pricing on the universal scale. Moving from an independent-time scale to a spacetime scale will bring a brand-new pricing concept for space travel commodities. Fair and efficient spacetime economics will also benefit human travel when we can travel in light-year units in the future.
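One illustrative way to write such a price decomposition (an expression added here for clarity, not a formula quoted from the paper) is

\[
P \;=\; P_{0} \;+\; k \int_{R_\oplus}^{R_\oplus + h} C(r)\,\mathrm{d}r ,
\]

where \(P_{0}\) is the conventional time-independent (flat-spacetime) price, \(h\) the altitude reached, \(C(r)\) a measure of the spacetime curvature along the trajectory (built, for instance, from the Ricci or energy-momentum tensor), and \(k\) the assumed monetary value per unit of curvature; for \(C \equiv 0\) the expression degenerates to the traditional time-independent pricing model, as stated above.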

Keywords: space tourism, spacetime pricing model, general relativity theory, spacetime curvature

Procedia PDF Downloads 93
120 Stabilizing Additively Manufactured Superalloys at High Temperatures

Authors: Keivan Davami, Michael Munther, Lloyd Hackel

Abstract:

The control of properties and material behavior by implementing thermal-mechanical processes is based on mechanical deformation and annealing according to a precise schedule that will produce a unique and stable combination of grain structure, dislocation substructure, texture, and dispersion of precipitated phases. The authors recently developed a thermal-mechanical technique to stabilize the microstructure of additively manufactured nickel-based superalloys even after exposure to high temperatures. However, the mechanism(s) that controls this stability is still under investigation. Laser peening (LP), also called laser shock peening (LSP), is a shock-based (50 ns duration) post-processing technique used for extending performance levels and improving the service life of critical components by developing deep levels of plastic deformation, thereby generating a high density of dislocations and inducing compressive residual stresses in the surface and deep subsurface of components. These compressive residual stresses are usually accompanied by an increase in hardness and enhance the material’s resistance to surface-related failures such as creep, fatigue, contact damage, and stress corrosion cracking. While the LP process enhances the life span and durability of the material, the induced compressive residual stresses relax at high temperatures (>0.5Tm, where Tm is the absolute melting temperature), limiting the applicability of the technology. At temperatures above 0.5Tm, the compressive residual stresses relax, and yield strength begins to drop dramatically. The principal reason is the increasing rate of solid-state diffusion, which affects both the dislocations and the microstructural barriers. Dislocation configurations commonly recover by mechanisms such as climbing and recombining rapidly at high temperatures. Furthermore, precipitates coarsen, and grains grow; virtually all of the available microstructural barriers become ineffective. Our results indicate that by using “cyclic” treatments with sequential LP and annealing steps, the compressive stresses survive, and the microstructure is stable after exposure to temperatures exceeding 0.5Tm for a long period of time. When the laser peening process is combined with annealing, dislocations formed as a result of LP and precipitates formed during annealing have a complex interaction that provides further stability at high temperatures. From a scientific point of view, this research lays the groundwork for studying a variety of physical, materials science, and mechanical engineering concepts. This research could lead to metals operating at higher sustained temperatures, enabling improved system efficiencies. The strengthening of metals by a variety of means (alloying, work hardening, and other processes) has been of interest for a wide range of applications. However, the mechanistic understanding of the often complex processes of interactions of dislocations with solute atoms and with precipitates during plastic deformation has largely remained scattered in the literature. In this research, the elucidation of the actual mechanisms involved in the novel cyclic LP/annealing processes is pursued through parallel studies of dislocation theory and the implementation of advanced experimental tools. The results of this research help with the validation of a novel laser processing technique for high temperature applications.
This will greatly expand the applications of the laser peening technology originally devised only for temperatures lower than half of the melting temperature.

Keywords: laser shock peening, mechanical properties, indentation, high temperature stability

Procedia PDF Downloads 126
119 Governance of Social Media Using the Principles of Community Radio

Authors: Ken Zakreski

Abstract:

This paper considers regulating Canadian Facebook Groups of a certain size and type once they reach a threshold of audio-video content. Consider the evolution of the Streaming Act, Parl GC Bill C-11 (44-1), and the regulations that will certainly follow. The Canadian Heritage Minister's office stipulates that "the Broadcasting Act only applies to audio and audiovisual content, not written journalism.” Governance: after 10 years, a community radio station for Gabriola Island, BC – approved by the Canadian Radio-television and Telecommunications Commission (“CRTC”) but never started – became a Facebook Group, “Community Bulletin Board - Life on Gabriola“, referred to here as CBBlog. After CBBlog started and began to gather real traction, a member of the Group cloned the membership and ran a competing Facebook group under the banner of "free speech”. Here we see an inflection point [a change of cultural stewardship] with two different mathematical results [engagement and membership growth]. Canada's telecommunication history of “portability” and “interoperability” made the Facebook Group CBBlog the better option, over broadcast FM radio, for a community pandemic information-sharing service for Gabriola Island, BC. A culture of ignorance flourishes in social media. Often people do not understand their own experience, or the experience of others, because they do not have the concepts needed for understanding. It is thus important that they are not denied the concepts required for their full understanding. For example, legislators need to know something about gay culture before they can make any decisions about it. Community media policies and CRTC regulations are known, and regulators can use that history to forge forward with regulations for internet platforms of a size and content type that reach a threshold of audio/video content. Mostly volunteer-run media services provide order-of-magnitude lower costs than commercial media. Should Facebook Groups be treated as new media? Cathy Edwards, executive director of the Canadian Association of Community Television Users and Stations (“CACTUS”), calls them new media, in that the distribution platform is not the issue. What does make community groups community media? Cathy responded, "... it's bylaws, articles of incorporation that state they are community media, they have accessibility, commitments to skills training, any member of the community can be a member, and there is accountability to a board of directors". Eligibility for funding through CACTUS requires these same commitments. It is risky for a community to invest in a platform, as ownership has not been litigated. Is a Facebook Group an asset of a not-for-profit society? A memo from law student Jared Hubbard summarizes: “Rights and interests in a Facebook group could, in theory, be transferred as property... This theory is currently unconfirmed by Canadian courts.“

Keywords: social media, governance, community media, Canadian radio

Procedia PDF Downloads 41
118 The Development, Composition, and Implementation of Vocalises as a Method of Technical Training for the Adult Musical Theatre Singer

Authors: Casey Keenan Joiner, Shayna Tayloe

Abstract:

Classical voice training for the novice singer has long relied on the guidance and instruction of vocalise collections, such as those written and compiled by Marchesi, Lütgen, Vaccai, and Lamperti. These vocalise collections purport to encourage healthy vocal habits and instill technical longevity in both aspiring and established singers, though their scope has long been somewhat confined to the classical idiom. For pedagogues and students specializing in other vocal genres, such as musical theatre and CCM (contemporary commercial music), low-impact and pertinent vocal training aids are in short supply, and much of the suggested literature derives from classical methodology. While the tenets of healthy vocal production remain ubiquitous, specific stylistic needs and technical emphases differ from genre to genre and may require a specified extension of vocal acuity. As musical theatre continues to grow in popularity at both the professional and collegiate levels, the need for specialized training grows as well. Pedagogical literature geared specifically towards musical theatre (MT) singing and vocal production, while relatively uncommon, is readily accessible to the contemporary educator. Practitioners such as Norman Spivey, Mary Saunders Barton, Claudia Friedlander, Wendy Leborgne, and Marci Rosenberg continue to publish relevant research in the field of musical theatre voice pedagogy and have successfully identified many common MT vocal faults, their subsequent diagnoses, and their eventual corrections. Where classical methodology would suggest specific vocalises or training exercises to maintain corrected vocal posture following successful fault diagnosis, musical theatre finds itself without a relevant body of work towards which to transition. By analyzing the existing vocalise literature by means of a specialized set of parameters, including but not limited to melodic variation, rhythmic complexity, vowel utilization, and technical targeting, we have composed a set of vocalises meant specifically to address the training and conditioning of adult musical theatre voices. These vocalises target many pedagogical tenets of the musical theatre genre, including but not limited to thyroarytenoid-dominant production, twang resonance, lateral vowel formation, and “belt-mix.” By implementing these vocalises in the musical theatre voice studio, pedagogues can efficiently communicate proper musical theatre vocal posture and kinesthetic connection to their students, regardless of age or level of experience. The composition of these vocalises serves MT pedagogues on both a technical and a sociological level. MT is a relative newcomer on the collegiate stage, and the academization of musical theatre methodologies has been a slow and arduous process. The conflation of classical and MT techniques and training methods has long plagued the world of voice pedagogy, and teachers often find themselves in positions of “cross-training,” that is, teaching students of both genres in one combined voice studio. As MT continues to establish itself on academic platforms worldwide, genre-specific literature and focused studies are both rare and invaluable. To ensure that modern students receive exacting and definitive training in their chosen fields, it becomes increasingly necessary for genres such as musical theatre to boast specified literature, and a collection of musical theatre-specific vocalises only aids in this effort.
This collection of musical theatre vocalises is the first of its kind and provides genre-specific studios with a basis upon which to grow healthy, balanced voices built for the harsh conditions of the modern theatre stage.

Keywords: voice pedagogy, targeted methodology, musical theatre, singing

Procedia PDF Downloads 132
117 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered over a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles start their routes from a depot and visit the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that are delivered or collected. In the present work, we present a specific capacitated stochastic vehicle routing problem which has many realistic applications. We develop and analyze a mathematical model for a specific vehicle routing problem in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products. We name these products product 1 and product 2. Each customer possesses items of either product 1 or product 2 with known probabilities. The number of items of product 1 or product 2 that each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer’s site. It is assumed that the vehicle has two compartments. We name these compartments compartment 1 and compartment 2. It is assumed that compartment 1 is suitable for loading product 1 and compartment 2 is suitable for loading product 2. However, it is permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1. These actions incur costs that are due to extra labor. The vehicle is allowed during its route to return to the depot to unload the items of both products. The travel costs between consecutive customers and the travel costs between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e. the routing strategy that minimizes the total expected cost among all possible strategies for servicing all customers. It is possible to develop a suitable dynamic programming algorithm for the determination of the optimal routing strategy. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure. Specifically, it is shown that for each customer the optimal actions are characterized by some critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed if N is smaller than or equal to eight.
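To illustrate the dynamic-programming idea in miniature, the sketch below solves a heavily simplified single-product, single-compartment version of the problem (fixed customer order, random demand revealed on arrival, optional preventive return to the depot, forced return on a route failure); the demand distribution and distance matrix are assumed placeholders, and the two-product, two-compartment recursion of the paper follows the same pattern with a larger state.

import functools

# Simplified sketch of the DP for a stochastic VRP with ordered customers:
# one product, one compartment of capacity Q, customers 1..N visited in fixed order.
# After serving customer j with residual capacity q, the vehicle either proceeds
# directly to customer j+1 or first returns to the depot (node 0) to unload.
Q = 5
N = 4
demand_pmf = {1: 0.3, 2: 0.4, 3: 0.3}          # assumed demand distribution, same for all customers
dist = [[0, 4, 5, 6, 7],                        # assumed symmetric distances, node 0 = depot
        [4, 0, 2, 3, 5],
        [5, 2, 0, 2, 4],
        [6, 3, 2, 0, 2],
        [7, 5, 4, 2, 0]]

@functools.lru_cache(maxsize=None)
def cost_to_go(j, q):
    """Expected future cost when the vehicle sits at customer j with residual capacity q."""
    if j == N:                                   # all customers served: return to the depot
        return dist[j][0]
    nxt = j + 1
    direct = dist[j][nxt] + expected_service(nxt, q)                    # go straight to the next customer
    via_depot = dist[j][0] + dist[0][nxt] + expected_service(nxt, Q)    # preventive unloading at the depot
    return min(direct, via_depot)

def expected_service(j, q):
    exp = 0.0
    for d, p in demand_pmf.items():
        if d <= q:
            exp += p * cost_to_go(j, q - d)
        else:                                    # route failure: forced round trip to the depot to unload
            exp += p * (2 * dist[0][j] + cost_to_go(j, Q - (d - q)))
    return exp

# Expected optimal cost of the whole route, starting full at the depot:
print(dist[0][1] + expected_service(1, Q))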

Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 226
116 A 'Systematic Literature Review' of Specific Types of Inventory Faced by the Management of Firms

Authors: Rui Brito

Abstract:

This contribution presents a literature review of inventory management, a relevant topic for firms due to its intensive use of capital, with implications for a firm’s profitability in an increasingly competitive and globalized world. Firms look for small inventories in order to reduce holding costs, namely opportunity cost, warehousing and handling costs, deterioration and obsolescence, but larger inventories are required for several reasons, such as customer service, ordering cost, transportation cost, payment terms with suppliers to reduce unit costs or to take advantage of a price increase in the near future, and equipment setup cost. Thus, management must address a trade-off between small and large inventories. This literature review concerns three types of inventory (spare parts, safety stock, and vendor-managed inventory) whose management is usually beyond the scope of logistics. The applied methodology consisted of an online search of databases of scientific documents in English, namely Elsevier, Springer, Emerald, Wiley, and Taylor & Francis, excluding books unless edited, using search engines such as Google Scholar and B-on. The search was based on three keywords/strings (themes) which had to appear in the article title, suggesting that the themes were very relevant to the researchers. The whole search period was between 2009 and 2018, with the aim of collecting between twenty and forty studies considered relevant within each of the keywords/strings specified. Documents were sorted by relevance and, to prevent the exclusion of the more recent articles (whose lower citation counts are partially due to having had less time to be cited in new research articles), the search period was divided into two sub-periods (2009-2015 and 2016-2018). The number of surveyed articles per theme varied from 40 to 200, and the number of citations of those articles showed a wider variation, from 3 to 216. Selected articles from the three themes were analyzed, and the seven most-cited articles of the first sub-period and the three most-cited of the second sub-period were read in full in order to make a synopsis of each article. Overall, the findings show that the majority of the articles propose models, mainly mathematical ones, although with different sub-types for each theme. Almost all articles suggest further studies, some mentioning them for their own author(s), which widens the diversity of the previous research. Identified research gaps concern the use of surveys to find out which models are more used by firms, the reasons for not using the models with the best performance and accuracy, and the levels of satisfaction with the outcomes of inventory management and its effect on the improvement of the firm's overall performance. The review ends with the limitations and contributions of the study.

Keywords: inventory management, safety stock, spare parts inventory, vendor managed inventory

Procedia PDF Downloads 74
115 Complementing Assessment Processes with Standardized Tests: A Work in Progress

Authors: Amparo Camacho

Abstract:

ABET-accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, and they are usually designed “in house.” This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education to be implemented throughout the six engineering programs to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement improvement actions based on the results of this assessment. The model is called the “Assessment Process Model” and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: “hard skills” and “professional skills” (soft skills). The first includes abilities such as applying knowledge of mathematics, science, and engineering, and designing and conducting experiments, as well as analyzing and interpreting data. The second category, “professional skills”, includes communicating effectively and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs related to both “hard” and “soft” skills. The assessment tools designed included rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the Engineering College decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam was designed and implemented, based on the curriculum of each program. Even though this exam was administered during various academic periods, it is not currently considered standardized. In 2017, the Engineering College included three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to three sample groups of students from the six different engineering programs. Students in the sample groups were from the first, fifth, and tenth semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with statisticians to produce a more in-depth and detailed analysis of the sample students’ achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify more precisely the least developed SOs in order to define and implement the educational strategies necessary for students to achieve them in each engineering program.

Keywords: assessment, hard skills, soft skills, standardized tests

Procedia PDF Downloads 253
114 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates the direct oxidation of ammoniacal nitrogen under anaerobic conditions, with nitrite as the electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an anammox hybrid reactor (AHR) combining the dual advantages of suspended and attached growth media for the biodegradation of ammoniacal nitrogen in wastewater. The experimental unit consisted of four AHRs of 5 L capacity, inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/l. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with the nitrogen loading rate (NLR) increasing from 0.4 to 4.8 kg N/m3d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. The filter media in the AHR contributed an additional 27.2% ammonium removal, in addition to a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for the successful operation of high-rate bioreactors. The effluent nitrate concentration, which is one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). The process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d-1. Model validation revealed that the Grau second-order model was more precise and predicted the effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict the N2 gas produced in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R2=0.986) and predicted N2 gas with the least error of precision (0.12±8.49%). An SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2-1.5 µm. Owing to the enhanced nitrogen removal efficiency, coupled with the meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be the most competitive reactor configuration for dealing with nitrogen-laden wastewater.
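For reference, the two kinetic models named above are commonly written in the following forms (standard formulations from the wastewater kinetics literature, not equations quoted from this paper; the first-order form shown assumes a completely mixed reactor at steady state):

\[
\text{first order:}\quad \frac{S_0 - S_e}{\theta} = k_1 S_e
\;\;\Longrightarrow\;\;
S_e = \frac{S_0}{1 + k_1\theta},
\qquad
\text{Grau second order:}\quad \frac{S_0\,\theta}{S_0 - S_e} = a + b\,\theta
\;\;\Longrightarrow\;\;
S_e = S_0\left(1 - \frac{\theta}{a + b\,\theta}\right),
\]

where \(S_0\) and \(S_e\) are the influent and effluent substrate (total nitrogen) concentrations, \(\theta\) is the HRT, \(k_1\) is the first-order rate constant (13.0 d\(^{-1}\) in this study), and \(a\) and \(b\) are the Grau model constants fitted from the experimental data.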

Keywords: anammox, filter media, kinetics, nitrogen removal

Procedia PDF Downloads 357
113 Sampling and Chemical Characterization of Particulate Matter in a Platinum Mine

Authors: Juergen Orasche, Vesta Kohlmeier, George C. Dragan, Gert Jakobi, Patricia Forbes, Ralf Zimmermann

Abstract:

Underground mining poses a difficult environment for both man and machine. At more than 1000 meters beneath the surface of the earth, ores and other mineral resources are still extracted by conventional and motorised mining. In addition to the hazards caused by blasting and stone-chipping, the working conditions are best described by high temperatures of 35-40°C and high humidity, at low air exchange rates. Separate ventilation shafts lead fresh air into a mine, and others lead spent air back to the surface. This is essential for humans and machines working deep underground. Nevertheless, mines are widely ramified, so the air flow rate at the far end of a tunnel is perceived to be close to zero. In recent years, conventional mining has been supplemented by mining with heavy diesel machines. These very flat machines, called Load Haul Dump (LHD) vehicles, accelerate and ease work in areas favourable for heavy machines. On the other hand, they emit unfiltered diesel exhaust, which constitutes an occupational hazard for the miners. Combined with a low air exchange rate, high humidity, and inorganic dust from the mining, this leads to 'black smog' underneath the earth. This work focuses on the air quality in mines employing LHDs. Therefore, we performed personal sampling (samplers worn by miners during their work), stationary sampling and aethalometer (Microaeth MA200, Aethlabs) measurements in a platinum mine at around 1000 meters under the earth’s surface. We compared areas of high diesel exhaust emission with areas of conventional mining where no diesel machines were operated. For a better assessment of the health risks caused by air pollution, we applied a separated gas-/particle-sampling system, with a first denuder section collecting intermediate-volatility organic compounds (IVOCs). These multi-channel silicone rubber denuders are able to trap IVOCs while allowing particles ranging from 10 nm to 1 µm in diameter to be transmitted with an efficiency of nearly 100%. The second section is a quartz fibre filter collecting particles and adsorbed semi-volatile organic compounds (SVOCs). The third part is a graphitized carbon black adsorber, collecting the SVOCs that evaporate from the filter. The compounds collected on these three sections were analyzed in our labs with different thermal desorption techniques coupled with gas chromatography and mass spectrometry (GC-MS). VOCs and IVOCs were measured with a Shimadzu Thermal Desorption Unit (TD20, Shimadzu, Japan) coupled to a GCMS-System QP 2010 Ultra with a quadrupole mass spectrometer (Shimadzu). The GC was equipped with a 30 m BP-20 wax column (0.25 mm ID, 0.25 µm film) from SGE (Australia). Filters were analyzed with in-situ derivatization thermal desorption gas chromatography time-of-flight mass spectrometry (IDTD-GC-TOF-MS). The IDTD unit is a modified GL Sciences Optic 3 system (GL Sciences, Netherlands). The results showed black carbon concentrations, measured with the portable aethalometers, of up to several mg per m³. The organic chemistry was dominated by very high concentrations of alkanes. Typical diesel engine exhaust markers such as alkylated polycyclic aromatic hydrocarbons were detected, as well as typical lubrication oil markers such as hopanes.

Keywords: diesel emission, personal sampling, aethalometer, mining

Procedia PDF Downloads 128
112 Adding a Degree of Freedom to Opinion Dynamics Models

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to the exploration of two 'degrees of freedom' and how they impact the model’s properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. This can be used to change a model's output by up to 100% of its initial value, or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people’s opinions can be measured. Even for abstract models (i.e., models not intended for fitting real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if this is not the case, how the model dynamics change when different representations are used. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most of the liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet. Indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models. We analyze this effect by using mathematical modeling and then validating our analysis with agent-based simulations. Firstly, we study the case of perfect scales. In this way, we show that scale transformations affect the model’s dynamics up to a qualitative level. This means that if two researchers use the same opinion dynamics model, and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions even on the same population. This effect may be as strong as producing an uncertainty of 100% on the simulation’s output (i.e., all results are possible). Still, by using perfect scales, we show that scale transformations can be used to perfectly transform one model into another. We test this using two models from the standard literature. Finally, we test the effect of scale transformations in the case of finite precision, using a 7-point Likert scale. In this way, we show how a relatively small scale transformation introduces changes both at the qualitative level (i.e., in the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
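A minimal sketch of the kind of experiment described above: the same bounded-confidence (Deffuant-style) dynamics is run on opinions expressed on a raw scale and on a nonlinearly rescaled version of that scale; the update rule, the confidence bound, and the square-root rescaling are assumptions of this illustration, not the specific models used in the paper.

import numpy as np

# The same bounded-confidence dynamics applied to a raw opinion scale and to a
# nonlinearly rescaled version of the same opinions can end up with a different
# number of clusters, illustrating the "third degree of freedom" discussed above.
rng = np.random.default_rng(0)

def run(opinions, eps=0.2, mu=0.5, steps=100000):
    x = opinions.copy()
    n = len(x)
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        if abs(x[i] - x[j]) < eps:               # agents interact only if their opinions are close
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
    return x

def n_clusters(x, tol=0.05):
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > tol))

raw = rng.random(200)                            # opinions measured on one scale, in [0, 1]
rescaled = np.sqrt(raw)                          # the same opinions measured on a rescaled scale

print("clusters on the raw scale:     ", n_clusters(run(raw)))
print("clusters on the rescaled scale:", n_clusters(run(rescaled)))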

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 103
111 Comparison of Titanium and Aluminum Functions as Spoilers for Dose Uniformity Achievement in Abutting Oblique Electron Fields: A Monte Carlo Simulation Study

Authors: Faranak Felfeliyan, Parvaneh Shokrani, Maryam Atarod

Abstract:

Introduction: The use of electron beams is widespread in radiotherapy. The main criterion in radiation therapy is to irradiate the tumor volume with the maximum prescribed dose while keeping the dose to the surrounding vital organs to a minimum. Using abutting fields is common in radiotherapy. The main problem in using abutting fields is dose inhomogeneity in the junction region. Electron beam divergence and lateral scattering may lead to hot and cold spots in the junction region. One solution to this problem is the use of a spoiler to broaden the penumbra and make the dose uniform in the junction region. The goal of this research was to compare the effects of titanium and aluminum as spoilers for dose uniformity achievement in the junction region of oblique electron fields, using Monte Carlo simulation. Dose uniformity in the junction region depends on the density, scattering power and thickness of the spoiler and on the angle between the two fields. Materials and Methods: In this study, a Monte Carlo model of a Siemens Primus linear accelerator was simulated for a 5 MeV nominal energy electron beam using manufacturer-provided specifications. The BEAMnrc and EGSnrc user codes were used to simulate the treatment head in electron mode (simulation of the beam model). The resulting phase space file was used as a source for dose calculations for a 10×10 cm2 field size at SSD=100 cm in a 30×30×45 cm3 water phantom using the DOSXYZnrc user code (dose calculations). An automatic MP3-M water phantom tank, the MEPHYSTO mc2 software platform and a Semi-Flex Chamber-31010 with a sensitive volume of 0.125 cm3 (PTW, Freiburg, Germany) were used for the dose distribution measurements. Moreover, the electron field size was 10×10 cm2 and SSD=100 cm. Validation of the developed beam model was done by comparing the measured and calculated depth and lateral dose distributions (verification of the electron beam model). Simulation of the spoilers (using the SLAB component module), placed at the end of the electron applicator, was done using the previously validated phase space file for a 5 MeV nominal energy and 10×10 cm2 field size (simulation of the spoiler). An in-house routine was developed in order to calculate the combined isodose curves resulting from the two simulated abutting fields (calculation of the dose distribution in abutting electron fields). Results: Verification of the developed 5.9 MeV electron beam model was done by comparing the calculated and measured dose distributions. The maximum percentage difference between the calculated and measured PDD was 1%, except for the build-up region, in which the difference was 2%. The difference between the calculated and measured profiles was 2% at the edges of the field and less than 1% in other regions. The effect of PMMA, aluminum, titanium and chromium on dose uniformity achievement in abutting normal electron fields, with thicknesses equivalent to 5 mm of PMMA, was evaluated. Comparing the R90 and uniformity index of the different materials, aluminum was chosen as the optimum spoiler. Titanium gave the maximum surface dose. Thus, aluminum and titanium were chosen for dose uniformity achievement in oblique electron fields. Using the optimum (aluminum) spoiler, the junction dose decreased from 160% to 110% for 15 degrees, from 180% to 120% for 30 degrees, from 160% to 120% for 45 degrees and from 180% to 100% for 60 degrees oblique abutting fields. Using the titanium spoiler, the junction dose decreased from 160% to 120% for 15 degrees, from 180% to 120% for 30 degrees, from 160% to 120% for 45 degrees and from 180% to 110% for 60 degrees.
In addition, the penumbra width at the surface for 15 degrees increased from 10 mm without a spoiler to 15.5 mm with the titanium spoiler; for 30 degrees it increased from 9 mm to 15 mm, for 45 degrees from 4 mm to 6 mm, and for 60 degrees from 5 mm to 8 mm. Conclusion: Using spoilers, the penumbra width at the surface increased, the size and depth of the hot spots decreased, and the dose homogeneity improved at the junction of the abutting electron fields. The dose at the junction region of abutting oblique fields was improved significantly by using a spoiler. The maximum dose at the junction region for 15°, 30°, 45° and 60° was decreased by about 40%, 60%, 40% and 70%, respectively, for titanium and by about 50%, 60%, 40% and 80% for aluminum. Despite the significant decrease in the maximum dose obtained with the titanium spoiler, the dose in the junction region could not be reduced below 110%.
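The final step described above (combining the dose distributions of the two abutting fields) can be illustrated with a toy model in which each field edge is an error-function penumbra and the combined dose is the simple sum of the two fields; the geometry, penumbra widths, and 1 mm overlap below are assumptions of this sketch, not the in-house routine or the Monte Carlo data of the study.

import numpy as np
from math import erf

# Toy illustration of why a broadened penumbra lowers the hot spot at the junction
# of two abutting fields; all widths and the overlap are assumed values.
def edge_profile(x, edge, sigma, sign):
    # relative dose of one field whose edge sits at `edge`; sign=+1 covers x<edge, -1 covers x>edge
    return 0.5 * (1.0 + np.array([erf(sign * (edge - xi) / (sigma * np.sqrt(2.0))) for xi in x]))

x = np.linspace(-30.0, 30.0, 601)          # mm across the junction
for sigma in (2.0, 5.0):                   # narrow penumbra (no spoiler) vs broadened (with spoiler)
    field_a = edge_profile(x, edge=1.0, sigma=sigma, sign=+1)    # 1 mm overlap assumed
    field_b = edge_profile(x, edge=-1.0, sigma=sigma, sign=-1)
    combined = 100.0 * (field_a + field_b)
    print(f"sigma = {sigma} mm -> maximum combined dose at the junction: {combined.max():.0f}%")
# The broader (spoiler-like) penumbra smears the overlap and reduces the junction hot spot.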

Keywords: abutting fields, electron beam, radiation therapy, spoilers

Procedia PDF Downloads 144
110 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object - a column-type object with connections forming an adaptive 3D surface - using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms, or swarm intelligence, each of which has limitations that make it applicable only in certain cases. In the paper, the design process stages and the shape grammar type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived so that its generating logic can be applied to different input geometries. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers varied conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
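A minimal sketch of what "shaping a form by varying parameter values" can look like in practice is given below: a column-like surface whose radius is a parametric function of height and angle; the specific functional form and parameter names are assumptions of this illustration, not the algorithm developed in the article.

import numpy as np

# Minimal parametric form-finding sketch: a column-like surface of revolution whose
# cross-section radius varies with height z and angle theta. Changing the parameters
# (taper, number of lobes, twist) regenerates a whole family of free-form columns.
def column_surface(taper=0.3, lobes=6, lobe_depth=0.08, twist=0.5,
                   height=4.0, n_z=40, n_theta=72):
    z = np.linspace(0.0, height, n_z)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    Z, T = np.meshgrid(z, theta, indexing="ij")
    base = 1.0 - taper * Z / height                                 # tapering of the shaft
    r = base * (1.0 + lobe_depth * np.cos(lobes * T + twist * Z))   # lobed, twisting profile
    X, Y = r * np.cos(T), r * np.sin(T)
    return X, Y, Z                                                  # point grid ready for meshing/fabrication

# Two variants of the same parametric object, obtained only by changing parameter values:
slender = column_surface(taper=0.5, lobes=8)
stocky = column_surface(taper=0.1, lobes=4, lobe_depth=0.15)
print(slender[0].shape)   # (40, 72) grid of x-coordinates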

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 349
109 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft and the onboard processing capabilities. The space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed in terms of the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design are discussed: the prediction model, the constraints formulation and the objective cost function (a minimal sketch combining these elements is given below). The prediction models can be linear time invariant or time varying depending on the geometry of the orbit, whether it is circular or elliptic. The constraints can be given as linear inequalities on the inputs or outputs, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometric constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding in real time the solution to constrained optimization problems, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements that sensors and actuators place on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
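
The following Python sketch illustrates the generic constrained MPC formulation summarized above for a simple case: a linear time-invariant double-integrator model of in-plane spacecraft translation, box constraints on thrust, and a quadratic cost, solved with the cvxpy modelling library. The horizon, weights, limits and sample time are illustrative assumptions, not values from the paper.

```python
# Minimal MPC sketch (assumed parameters): double-integrator prediction model,
# input box constraints, quadratic tracking cost, solved as a QP each step.
import numpy as np
import cvxpy as cp

dt = 1.0                                # [s] sample time (assumed)
A = np.array([[1, dt], [0, 1]])         # position/velocity dynamics
B = np.array([[0.5 * dt**2], [dt]])     # acceleration input
N = 20                                  # prediction horizon (assumed)
Q = np.diag([10.0, 1.0])                # state weight
R = np.array([[0.1]])                   # input weight
u_max = 0.05                            # [m/s^2] thrust acceleration limit (assumed)

def mpc_step(x0, x_ref):
    """Solve one finite-horizon problem and return the first control move."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constraints = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k] - x_ref, Q) + cp.quad_form(u[:, k], R)
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.abs(u[:, k]) <= u_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[:, 0]

# Example: drive from a 100 m offset (zero velocity) toward the origin
print(mpc_step(np.array([100.0, 0.0]), np.zeros(2)))
```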

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 88
108 Harnessing the Power of Artificial Intelligence: Advancements and Ethical Considerations in Psychological and Behavioral Sciences

Authors: Nayer Mofidtabatabaei

Abstract:

Advancements in artificial intelligence (AI) have transformed various fields, including psychology and behavioral sciences. This paper explores the diverse ways in which AI is applied to enhance research, diagnosis, therapy, and understanding of human behavior and mental health. We discuss the potential benefits and challenges associated with AI in these fields, emphasizing the ethical considerations and the need for collaboration between AI researchers and psychological and behavioral science experts. Artificial Intelligence (AI) has gained prominence in recent years, revolutionizing multiple industries, including healthcare, finance, and entertainment. One area where AI holds significant promise is the field of psychology and behavioral sciences. AI applications in this domain range from improving the accuracy of diagnosis and treatment to understanding complex human behavior patterns. This paper aims to provide an overview of the various AI applications in psychological and behavioral sciences, highlighting their potential impact, challenges, and ethical considerations. Mental Health Diagnosis: AI-driven tools, such as natural language processing and sentiment analysis, can analyze large datasets of text and speech to detect signs of mental health issues. For example, chatbots and virtual therapists can provide initial assessments and support to individuals suffering from anxiety or depression. Autism Spectrum Disorder (ASD) Diagnosis: AI algorithms can assist in early ASD diagnosis by analyzing video and audio recordings of children's behavior. These tools help identify subtle behavioral markers, enabling earlier intervention and treatment. Personalized Therapy: AI-based therapy platforms use personalized algorithms to adapt therapeutic interventions based on an individual's progress and needs. These platforms can provide continuous support and resources for patients, making therapy more accessible and effective. Virtual Reality Therapy: Virtual reality (VR) combined with AI can create immersive therapeutic environments for treating phobias, PTSD, and social anxiety. AI algorithms can adapt VR scenarios in real time to suit the patient's progress and comfort level. Data Analysis: AI aids researchers in processing vast amounts of data, including survey responses, brain imaging, and genetic information. Privacy Concerns: Collecting and analyzing personal data for AI applications in psychology and behavioral sciences raises significant privacy concerns. Researchers must ensure the ethical use and protection of sensitive information. Bias and Fairness: AI algorithms can inherit biases present in training data, potentially leading to biased assessments or recommendations. Efforts to mitigate bias and ensure fairness in AI applications are crucial. Transparency and Accountability: AI-driven decisions in psychology and behavioral sciences should be transparent and subject to accountability. Patients and practitioners should understand how AI algorithms operate and make decisions. AI applications in psychological and behavioral sciences have the potential to transform the field by enhancing diagnosis, therapy, and research. However, these advancements come with ethical challenges that require careful consideration. Collaboration between AI researchers and psychological and behavioral science experts is essential to harness AI's full potential while upholding ethical standards and privacy protections. The future of AI in psychology and behavioral sciences holds great promise, but it must be navigated with caution and responsibility.

Keywords: artificial intelligence, psychological sciences, behavioral sciences, diagnosis and therapy, ethical considerations

Procedia PDF Downloads 38
107 Investigation of a Single Feedstock Particle during Pyrolysis in Fluidized Bed Reactors via X-Ray Imaging Technique

Authors: Stefano Iannello, Massimiliano Materazzi

Abstract:

Fluidized bed reactor technologies are one of the most valuable pathways for thermochemical conversion of biogenic fuels due to their good operating flexibility. Nevertheless, there are still issues related to the mixing and separation of heterogeneous phases during operation with highly volatile feedstocks, including biomass and waste. At high temperatures, the volatile content of the feedstock is released in the form of so-called endogenous bubbles, which generally exert a "lift" effect on the particle itself by dragging it up to the bed surface. This phenomenon leads to a high release of volatile matter into the freeboard and limited mass and heat transfer with particles of the bed inventory. The aim of this work is to gain a better understanding of the behaviour of a single reacting particle in a hot fluidized bed reactor during the devolatilization stage. The analysis was undertaken at different fluidization regimes and temperatures to closely mirror the operating conditions of waste-to-energy processes. Beechwood and polypropylene particles were used to resemble the biomass and plastic fractions present in waste materials, respectively. The non-invasive X-ray technique was coupled with particle-tracking algorithms to characterize, with high resolution, the motion of a single feedstock particle during devolatilization (a minimal tracking sketch is given below). A high-energy X-ray beam passes through the vessel, where absorption occurs depending on the distribution and amount of solids and fluids along the beam path. A high-speed video camera is synchronised to the beam and provides frame-by-frame imaging of the flow patterns of fluids and solids within the fluidized bed at up to 72 frames per second (fps). A comprehensive mathematical model has been developed in order to validate the experimental results. Beechwood and polypropylene particles have shown very different dynamic behaviour during the pyrolysis stage. When the feedstock is fed from the bottom, the plastic material tends to spend more time within the bed than the biomass. This behaviour can be attributed to the presence of the endogenous bubbles, whose drag effect is more pronounced during the devolatilization of biomass, resulting in a lower residence time of the particle within the bed. At the typical operating temperatures of thermochemical conversion, the synthetic polymer softens and melts, and the bed particles attach to its outer surface, generating a wet plastic-sand agglomerate. Consequently, this additional layer of sand may hinder the rapid evolution of volatiles in the form of endogenous bubbles, resulting in a weaker drag effect acting on the feedstock itself. Information about the mixing and segregation of solid feedstock is of prime importance for the design and development of more efficient industrial-scale operations.
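
As a rough illustration of the frame-by-frame tracking described above (the authors' actual algorithms are not reproduced here), the sketch below thresholds each X-ray frame, takes the centroid of the strongly absorbing region as the particle position, and converts the tracked path into a vertical velocity; the threshold value and pixel size are assumed, while the 72 fps frame rate matches the abstract.

```python
# Minimal sketch (assumed pipeline): centroid tracking of a single absorbing
# particle across normalised X-ray frames.
import numpy as np

def track_particle(frames, threshold=0.4):
    """frames: iterable of 2-D arrays normalised to [0, 1]; returns (row, col) per frame."""
    path = []
    for f in frames:
        mask = f < threshold               # particle absorbs more -> darker pixels
        if not mask.any():
            path.append((np.nan, np.nan))  # particle not visible in this frame
            continue
        rows, cols = np.nonzero(mask)
        path.append((rows.mean(), cols.mean()))
    return np.array(path)

def vertical_velocity(path, fps=72, mm_per_px=0.2):
    """Vertical velocity (mm/s) from the tracked path; pixel size is an assumed value."""
    dy = np.diff(path[:, 0]) * mm_per_px   # mm per frame (image rows ~ vertical axis)
    return dy * fps
```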

Keywords: fluidized bed, pyrolysis, waste feedstock, X-ray

Procedia PDF Downloads 149
106 Modulation of Receptor-Activation Due to Hydrogen Bond Formation

Authors: Sourav Ray, Christoph Stein, Marcus Weber

Abstract:

A new class of drug candidates, initially derived from mathematical modeling of ligand-receptor interactions, activates the μ-opioid receptor (MOR) preferentially at acidic extracellular pH levels, as present in injured tissues. This is of commercial interest because it may preclude the adverse effects of conventional MOR agonists like fentanyl, which include but are not limited to addiction, constipation, sedation, and apnea. Animal studies indicate the importance of taking the pH value of the chemical environment of the MOR into account when designing new drugs. Hydrogen bonds (HBs) play a crucial role in stabilizing protein secondary structure and molecular interactions, such as ligand-protein interactions. These bonds may depend on the pH value of the chemical environment. For the MOR, the antagonist naloxone and the agonist [D-Ala2,N-Me-Phe4,Gly5-ol]-enkephalin (DAMGO) form HBs with the ionizable residue HIS 297 at physiological pH to modulate signaling. However, such interactions were markedly reduced at acidic pH. Although fentanyl-induced signaling is also diminished at acidic pH, HBs with the HIS 297 residue are not observed at either acidic or physiological pH for this strong agonist of the MOR. Molecular dynamics (MD) simulations can provide greater insight into the interaction between the ligand of interest and the HIS 297 residue. Amino acid protonation states are adjusted to model the difference in system acidity. Unbiased and unrestrained MD simulations were performed, with the ligand in the proximity of the HIS 297 residue. Ligand-receptor complexes were embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) bilayer to mimic the membrane environment. The occurrence of HBs between the different ligands and the HIS 297 residue of the MOR at acidic and physiological pH values was tracked across the various simulation trajectories (a minimal sketch of a geometric hydrogen-bond criterion is given below). No HB formation was observed between fentanyl and the HIS 297 residue at either acidic or physiological pH. Naloxone formed some HBs with HIS 297 at pH 5, but no such HBs were noted at pH 7. Interestingly, DAMGO displayed an opposite yet more pronounced HB formation trend compared to naloxone: whereas a marginal number of HBs could be observed even at pH 5, HBs with HIS 297 were more stable and widely present at pH 7. HB formation thus plays no role in the interaction of fentanyl, and only a marginal role in the interaction of naloxone, with the HIS 297 residue of the MOR. However, HBs play a significant role in the DAMGO-HIS 297 interaction. Post DAMGO administration, these HBs might be crucial for the remediation of opioid tolerance and the restoration of opioid sensitivity. Although experimental studies concur with our observations regarding the influence of HB formation on the fentanyl and DAMGO interactions with HIS 297, the same could not be conclusively stated for naloxone. Therefore, some other supplementary interactions might be responsible for the modulation of MOR activity by naloxone binding at pH 7 but not at pH 5. Further elucidation of the mechanism of naloxone action on the MOR could assist in the formulation of cost-effective naloxone-based treatments of opioid overdose or opioid-induced side effects.
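
A common way to track hydrogen bonds across MD trajectories, as done above, is a geometric criterion on the donor-acceptor distance and the donor-hydrogen-acceptor angle. The sketch below is a generic version of such a criterion and is not taken from the authors' analysis; the 3.5 Å and 150° cutoffs are conventional assumed values.

```python
# Minimal sketch (assumed cutoffs): geometric hydrogen-bond criterion and
# per-trajectory occupancy of a donor-hydrogen-acceptor triple.
import numpy as np

def is_hbond(d, h, a, d_a_cutoff=3.5, angle_cutoff=150.0):
    """d, h, a: 3-vectors (angstrom) of donor, hydrogen and acceptor positions."""
    if np.linalg.norm(a - d) > d_a_cutoff:
        return False
    v1, v2 = d - h, a - h
    cos_ang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    return angle > angle_cutoff

def hbond_occupancy(trajectory):
    """trajectory: iterable of (donor, hydrogen, acceptor) coordinate triples, one per frame."""
    hits = [is_hbond(d, h, a) for d, h, a in trajectory]
    return np.mean(hits) if hits else 0.0
```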

Keywords: effect of system acidity, hydrogen bond formation, opioid action, receptor activation

Procedia PDF Downloads 155
105 The Analysis of Gizmos Online Program as Mathematics Diagnostic Program: A Story from an Indonesian Private School

Authors: Shofiayuningtyas Luftiani

Abstract:

Some private schools in Indonesia have started integrating the online program Gizmos into the teaching-learning process. Gizmos was developed to supplement the existing curriculum by being integrated into the instructional programs. The program offers inquiry-based simulations, in which students explore using a worksheet while teachers use the teacher guidelines to direct and assess students' performance. In this study, the discussion of Gizmos highlights its features as an assessment medium for mathematics learning for secondary school students. The discussion is based on a case study and a literature review from the Indonesian context. The purpose of applying Gizmos as an assessment medium refers to diagnostic assessment. As part of the diagnostic assessment, the teachers review the student exploration sheets, analyze the students' difficulties in particular, and take the findings into account when planning the future learning process. This assessment becomes important since the teacher needs data about students' persistent weaknesses. Additionally, the program also helps to build students' understanding through its interactive simulations. Currently, the assessment over-emphasizes the students' answers in the worksheet, judged against the provided answer keys, while students actually exercise several skills: translating the question, running the simulation and answering the question. The assessment should instead involve multiple perspectives and sources of students' performance, since the teacher should adjust the instructional programs to the complexity of students' learning needs and styles. Consequently, an approach to improving the assessment components is selected to challenge the current assessment. The purpose of this challenge is to involve not only cognitive diagnosis but also the analysis of skills and errors. For this diagnostic assessment, which combines cognitive diagnosis, skills analysis and error analysis, the teachers should create an assessment rubric. The rubric plays an important role as a guide providing a set of criteria for the assessment. Without a precise rubric, the teacher is likely to document and follow up the data about students at risk of failure ineffectively. Furthermore, teachers who employ Gizmos for diagnostic assessment may encounter some obstacles. In the selected setting, these obstacles involve time constraints, reluctance to take on a higher teaching burden, and students' behavior. Consequently, the teacher who chooses Gizmos with these approaches has to plan, implement and evaluate the assessment. The main point of this assessment is not the result in the students' worksheet; rather, the diagnostic assessment is a two-stage process: prompting and then effectively following up both individual weaknesses and those of the learning process. Ultimately, the discussion of Gizmos as a medium for diagnostic assessment refers to the effort to improve the mathematics learning process.

Keywords: diagnostic assessment, error analysis, Gizmos online program, skills analysis

Procedia PDF Downloads 156
104 Results concerning the University-Industry Partnership for a Research Project Implementation (MUROS) in the Romanian Program Star

Authors: Loretta Ichim, Dan Popescu, Grigore Stamatescu

Abstract:

The paper reports the collaboration between a top university from Romania and three companies for the implementation of a research project in a multidisciplinary domain, focusing on the impact and benefits for both education and industry. The joint activities were developed under the Space Technology and Advanced Research Program (STAR), funded by the Romanian Space Agency (ROSA) for university-industry partnerships. The context was defined by linking the European Space Agency optional programs with the development and promotion of national research and with the educational and industrial capabilities in aeronautics, security and related areas, by increasing the collaboration between academic and industrial entities and by realizing high-level scientific production. The project, Multisensory Robotic System for Aerial Monitoring of Critical Infrastructure Systems (MUROS), was carried out in 2013-2016. The project included the University POLITEHNICA of Bucharest (coordinator) and three companies which manufacture and market unmanned aerial systems. The project had as its main objective the development of an integrated system combining ground wireless sensor networks and UAV monitoring in various application scenarios for critical infrastructure surveillance. This included specific activities related to fundamental and applied research, technology transfer, prototype implementation and result dissemination. The core contributions lay in distributed data processing and communication mechanisms, advanced image processing and embedded system development. Special focus is given in the paper to analyzing the impact of the project implementation on the educational process, directly or indirectly, through the faculty members (professors and students) involved in the research team. Three main directions are discussed: a) enabling students to carry out internships at the partner companies, b) handling advanced topics and industry requirements at the master's level, c) experiments and concept validation for doctoral theses. The impact of the research work (as the educational component) developed by the faculty members on improving the performance of the companies' products is highlighted. The collaboration between the university and the companies was well balanced in both contributions and results. The paper also presents the outcomes of the project, which reveal an efficient collaboration between higher education and industry: master theses, doctoral theses, conference papers, journal papers, technical documentation for technology transfer, a prototype, and a patent. The experience can provide useful practices for blending research and education within an academia-industry cooperation framework, while the lessons learned represent a starting point in debating the new role of advanced research and development performing companies in association with higher education. This partnership, promoted at EU level, has a broad impact beyond the constrained scope of a single project and can develop into long-lasting collaboration benefiting all stakeholders: students, universities and the surrounding knowledge-based economic and industrial ecosystem. Due to the exchange of experiences between the university (UPB) and the manufacturing company (AFT Design), a new project, SIMUL, under the Bridge Grant Program (Romanian executive agency UEFISCDI), was started (2016-2017). This project will continue the educational research for innovation in master and doctoral studies on the MUROS thematic area (collaborative multi-UAV applications for flood detection).

Keywords: education process, multisensory robotic system, research and innovation project, technology transfer, university-industry partnership

Procedia PDF Downloads 211
103 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet

Authors: Justin Woulfe

Abstract:

Siloed logistics and supply chain management systems throughout the Department of Defense (DoD) have led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and "optimal" solutions that are good for one organization but have dramatic negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse traditionally disparate data into a centrally hosted source of truth and leverage it. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions guiding program success via optimized operational readiness and improved mission success. One of the current challenges is to leverage the terabytes of data generated by monitored systems to provide actionable information for all levels of users. A cloud-based application that analyzes data transactions, learns and predicts future states from current and past states in real time, and communicates those anticipated states is an appropriate solution for reducing latency and improving confidence in decisions. Decisions made with an ML and AI application combined with advanced optimization algorithms will improve the mission success and performance of systems, which will improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin, with the ability to aggregate results across a system's lifecycle and across logical and operational groupings of systems. This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support, detachable and deployable logistics services, and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, business process workflows, and early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness, in alignment with the real-world scenarios a warfighter may experience, is an objective yet to be realized to date. By developing and solidifying an organic capability to directly relate dollars to readiness and to inform the digital twin, the decision-maker is empowered with valuable insight and traceability (a toy sketch of such a cost-versus-readiness trade is given below). This type of educated decision-making provides an advantage over adversaries who struggle to maintain system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and operations and support cost, resulting in the ability to simultaneously optimize readiness and cost. This will allow the stakeholders to make data-driven decisions when trading cost and readiness throughout the life of the program. Finally, sponsors are able to validate product deliverables efficiently and with much higher accuracy than in previous years.
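
As a toy illustration of the cost-versus-readiness trade described above (not Systecon's model or data), the following sketch estimates fleet operational availability as a function of the number of spares purchased; all failure, repair and cost figures are invented for the example.

```python
# Toy sketch (invented figures): Monte Carlo estimate of fleet availability
# for different spares investments, relating dollars to readiness.
import random

def fleet_availability(n_spares, fleet=10, mtbf=500.0, swap=24.0,
                       depot=400.0, horizon=8760.0, runs=500, seed=1):
    """Average fraction of the horizon (hours) each system is mission capable."""
    rng = random.Random(seed)
    total_up = 0.0
    for _ in range(runs):
        spares = n_spares
        for _ in range(fleet):
            t, up = 0.0, 0.0
            while t < horizon:
                run = min(rng.expovariate(1.0 / mtbf), horizon - t)  # time to next failure
                up += run
                t += run
                if t >= horizon:
                    break
                if spares > 0:              # quick swap from the spares pool
                    spares -= 1
                    t += swap
                else:                       # wait for depot repair
                    t += depot
            total_up += up
    return total_up / (runs * fleet * horizon)

for spares, cost in [(0, 0.0), (2, 1.0), (5, 2.5), (10, 5.0)]:  # cost in $M (assumed)
    print(f"spares={spares:2d}  cost=${cost:.1f}M  Ao={fleet_availability(spares):.2%}")
```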

Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics

Procedia PDF Downloads 131
102 Metal-Organic Frameworks-Based Materials for Volatile Organic Compounds Sensing Applications: Strategies to Improve Sensing Performances

Authors: Claudio Clemente, Valentina Gargiulo, Alessio Occhicone, Giovanni Piero Pepe, Giovanni Ausanio, Michela Alfè

Abstract:

Volatile organic compound (VOC) emissions represent a serious risk to human health and the integrity of ecosystems, especially at high concentrations. For this reason, it is very important to continuously monitor environmental quality and to develop fast and reliable portable sensors that allow on-site analysis. Chemiresistors have become promising candidates for VOC sensing because of their ease of fabrication, the variety of suitable sensitive materials, and their simple sensing readout. A chemoresistive gas sensor is a transducer that measures the concentration of an analyte in the gas phase, because the change in resistance is proportional to the amount of analyte present (a minimal sketch of this readout is given below). The selection of the sensitive material, which interacts with the target analyte, is very important for sensor performance. The most used VOC detection materials are metal oxides (MOx), owing to their rapid recovery, high sensitivity to various gas molecules, and easy fabrication. Their sensing performance can still be improved in terms of operating temperature, selectivity, and detection limit. Metal-organic frameworks (MOFs) have attracted a lot of attention in gas sensing as well, due to their high porosity, high surface area, tunable morphologies and structural variety. MOFs are generated by the self-assembly of multidentate organic ligands connecting adjacent multivalent metal nodes via strong coordination interactions, producing stable and highly ordered crystalline porous materials with well-designed structures. However, most MOFs intrinsically exhibit low electrical conductivity. To improve this property, MOFs can be combined with organic and inorganic materials in a hybrid fashion to produce composite materials, or can be transformed into more stable structures. MOFs, indeed, can be employed as precursors of metal oxides with well-designed architectures via calcination. The MOF-derived MOx partially preserves the original structure, with a high surface area and intrinsic open pores which act as trapping centers for gas molecules, and shows higher electrical conductivity. Core-shell heterostructures, in which the surface of a metal oxide core is completely coated by a MOF shell, forming a junction at the core-shell heterointerface, can also be synthesized. Nanocomposites in which MOF structures are intercalated with graphene-related materials can be produced as well; their conductivity increases thanks to the high electron mobility of the carbon materials. As MOF structures, zinc-based MOFs belonging to the ZIF family were selected in this work. Several Zn-based materials based on and/or derived from MOFs were produced, structurally characterized, and arranged in a chemoresistive architecture, also exploring the potential of different sensing-layer deposition approaches based on PLD (pulsed laser deposition) and, in the case of thermally labile materials, MAPLE (Matrix-Assisted Pulsed Laser Evaporation) to enhance adhesion to the support. The sensors were tested in a controlled-humidity chamber that allows the concentration of ethanol, a typical analyte chosen among the VOCs for a first survey, to be varied. The effect of heating the chemiresistor to improve sensing performance was also explored. Future research will focus on exploring new manufacturing processes for MOF-based gas sensors with the aim of improving sensitivity and selectivity and reducing operating temperatures.
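
The chemoresistive readout described above can be summarized in a few lines: the relative response S = (R_gas - R_air)/R_air is computed for each exposure and fitted against analyte concentration. The sketch below uses hypothetical ethanol calibration points and a power-law fit purely for illustration; it is not the authors' data pipeline.

```python
# Minimal sketch (hypothetical data): chemiresistor response and a simple
# power-law calibration S = a * C**b fitted in log-log space.
import numpy as np

def response(r_gas, r_air):
    """Relative resistance change upon exposure to the analyte."""
    return (r_gas - r_air) / r_air

conc = np.array([10.0, 50.0, 100.0, 250.0, 500.0])      # ethanol concentration, ppm (assumed)
r_air = 1.2e5                                            # baseline resistance, ohm (assumed)
r_gas = np.array([1.35e5, 1.7e5, 2.0e5, 2.6e5, 3.3e5])  # resistance under exposure (assumed)

s = response(r_gas, r_air)
b, log_a = np.polyfit(np.log(conc), np.log(s), 1)       # linear fit of ln S vs ln C
print(f"S ~= {np.exp(log_a):.3f} * C^{b:.2f}")
```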

Keywords: chemiresistors, gas sensors, graphene related materials, laser deposition, MAPLE, metal-organic frameworks, metal oxides, nanocomposites, sensing performance, transduction mechanism, volatile organic compounds

Procedia PDF Downloads 26
101 Design of DNA Origami Structures Using LAMP Products as a Combined System for the Detection of Extended Spectrum B-Lactamases

Authors: Kalaumari Mayoral-Peña, Ana I. Montejano-Montelongo, Josué Reyes-Muñoz, Gonzalo A. Ortiz-Mancilla, Mayrin Rodríguez-Cruz, Víctor Hernández-Villalobos, Jesús A. Guzmán-López, Santiago García-Jacobo, Iván Licona-Vázquez, Grisel Fierros-Romero, Rosario Flores-Vallejo

Abstract:

The group of β-lactam antibiotics includes some of the most frequently used small drug molecules against bacterial infections. Nevertheless, an alarming decrease in their efficacy has been reported due to the emergence of antibiotic-resistant bacteria. Infections caused by bacteria expressing extended-spectrum β-lactamases (ESBLs) are difficult to treat and account for higher morbidity and mortality rates, delayed recovery, and a high economic burden. According to the Global Report on Antimicrobial Resistance Surveillance, it is estimated that mortality due to resistant bacteria will rise to 10 million cases per year worldwide. These facts highlight the importance of developing low-cost and readily accessible detection methods for drug-resistant ESBL-producing bacteria to prevent their spread and promote accurate and fast diagnosis. Bacterial detection is commonly done using molecular diagnostic techniques, among which PCR stands out for its high performance. However, this technique requires specialized equipment not available everywhere, is time-consuming, and has a high cost. Loop-Mediated Isothermal Amplification (LAMP) is an alternative technique that works at a constant temperature, significantly decreasing the equipment cost. It yields double-stranded DNA of several lengths, with repetitions of the target DNA sequence, as a product. Although positive and negative LAMP results can be discriminated by colorimetry, fluorescence, and turbidity, there is still large room for improvement in point-of-care implementation. DNA origami is a technique that allows the formation of 3D nanometric structures by folding a long single-stranded DNA (scaffold) into a determined shape with the help of short DNA sequences (staples), which hybridize with the scaffold (a minimal sketch of this scaffold-staple pairing is given below). This research aimed to generate DNA origami structures using LAMP products as scaffolds to improve the sensitivity of ESBL detection in point-of-care diagnosis. For this study, the coding sequence of the CTX-M-15 ESBL of E. coli was used to generate the LAMP products. The set of LAMP primers was designed using PrimerExplorerV5. As a result, a target sequence of 200 nucleotides from the CTX-M-15 ESBL was obtained. Afterward, eight different DNA origami structures were designed using the target sequence in SDCadnano and analyzed with CanDo to evaluate the stability of the 3D structures. The designs were constructed by minimizing the total number of staples, to reduce costs and complexity for point-of-care applications. After analyzing the DNA origami designs, two structures were selected: the first was a zig-zag flat structure, while the second was a wall-like shape. Given the sequence repetitions in the scaffold, both could be assembled with only six different staples each, ranging from 18 to 80 nucleotides. Simulations of both structures were performed using scaffolds of different sizes, yielding stable structures in all cases. The generation of the LAMP products was tested by colorimetry and electrophoresis. The formation of the DNA structures was analyzed using electrophoresis and colorimetry. The modeling of novel detection methods through bioinformatics tools allows reliable control and prediction of results. To our knowledge, this is the first study that uses LAMP products and DNA origami in combination to detect ESBL-producing bacterial strains, which represents a promising methodology for point-of-care diagnosis.
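
The scaffold-staple hybridization at the heart of DNA origami can be illustrated with a few lines of code: a staple binds where its reverse complement occurs on the scaffold. The sequences below are hypothetical stand-ins (not the CTX-M-15 LAMP product or the authors' staples) used only to show the operation.

```python
# Minimal sketch (hypothetical sequences): find where a candidate staple would
# base-pair on a scaffold by searching for the staple's reverse complement.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def binding_sites(scaffold, staple):
    """Return start positions on the scaffold where the staple can hybridize."""
    target = reverse_complement(staple)
    return [i for i in range(len(scaffold) - len(target) + 1)
            if scaffold[i:i + len(target)] == target]

# Hypothetical 60-nt scaffold fragment and an 18-nt staple
scaffold = "ATGGTTAAAGTGGTTAAAGCAGATAGTGCGGTACAGCAACGGTTACTCGGATGCGGTAAA"
staple = "TTGCTGTACCGCACTATC"
print(binding_sites(scaffold, staple))   # positions where the staple binds
```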

Keywords: beta-lactamases, antibiotic resistance, DNA origami, isothermal amplification, LAMP technique, molecular diagnosis

Procedia PDF Downloads 189
100 Sustainable Antimicrobial Biopolymeric Food & Biomedical Film Engineering Using Bioactive AMP-Ag+ Formulations

Authors: Eduardo Lanzagorta Garcia, Chaitra Venkatesh, Romina Pezzoli, Laura Gabriela Rodriguez Barroso, Declan Devine, Margaret E. Brennan Fournet

Abstract:

New antimicrobial interventions are urgently required to combat rising global health and medical infection challenges. Here, an innovative antimicrobial technology is presented, providing price-competitive alternatives to antibiotics and readily integrable with current technological systems. Two cutting-edge antimicrobial materials, antimicrobial peptides (AMPs) and triangular silver nanoplate (TSNP) reservoirs providing uncompromised, sustained Ag+ action, are merged for versatile, effective antimicrobial action where current approaches fail. Antimicrobial peptides (AMPs) exist widely in nature and have recently been demonstrated to have a broad spectrum of activity against bacteria, viruses, and fungi. TSNPs are highly discrete, homogeneous and readily functionalisable Ag+ nanoreservoirs with proven amenability for operation in a wide range of bio-based settings. In a design for advanced antimicrobial sustainable plastics, antimicrobial TSNPs are formulated for processing within biodegradable biopolymers. The histone H5 AMP was selected for its reported strong antimicrobial action and functionalized onto the TSNPs (AMP-TSNP) in a similar fashion to previously reported TSNP biofunctionalisation methods. A synergy between the propensity of biopolymers for degradation, Ag+ release and AMP activity provides a novel mechanism for the sustained antimicrobial action of biopolymeric thin films. Nanoplates are transferred from the aqueous phase to an organic solvent in order to facilitate integration within hydrophobic polymers. Extrusion is used in combination with calendering rolls to create thin polymeric films with the nanoplates embedded on the surface. The resulting antibacterial functional films are suitable to be adapted for food packaging and biomedical applications. TSNPs were synthesized by adapting a previously reported seed-mediated approach. TSNP synthesis was scaled up for litre-scale batch production and subsequently concentrated to 43 ppm using thermally controlled H2O removal. The phase transfer was accomplished by functionalizing the TSNPs with thiol-terminated polyethylene glycol and using centrifugal force to transfer them to chloroform. Polycaprolactone (PCL) and polylactic acid (PLA) were individually processed through extrusion, and TSNP and AMP-TSNP solutions were sprayed onto the polymer immediately after exiting the die. Calendering rolls were used to disperse and incorporate TSNP and AMP-TSNP onto the surface of the extruded films. Observation of the characteristic blue colour confirms the integrity of the TSNPs within the films. Antimicrobial tests were performed by incubating Gram-positive and Gram-negative strains with treated and non-treated films, to evaluate whether bacterial growth was reduced due to the presence of the TSNPs. The resulting films successfully incorporated TSNP and AMP-TSNP. Reduced bacterial growth was observed for both Gram-positive and Gram-negative strains with both TSNP and AMP-TSNP films compared with untreated films, indicating antimicrobial action. The largest growth reduction was observed for AMP-TSNP-treated films, demonstrating the additional antimicrobial activity due to the presence of the AMPs. The potential of this technology to impede bacterial activity on food industry and medical surfaces will forge new confidence in the battle against antibiotic-resistant bacteria, serving to greatly inhibit infections and facilitate patient recovery.

Keywords: antimicrobial, biodegradable, peptide, polymer, nanoparticle

Procedia PDF Downloads 92
99 A Research Review on the Presence of Pesticide Residues in Apples Carried out in Poland in the Years 1980-2015

Authors: Bartosz Piechowicz, Stanislaw Sadlo, Przemyslaw Grodzicki, Magdalena Podbielska

Abstract:

Apples are popular fruits, eaten fresh and/or after processing. For instance, Golden Delicious is an apple variety commonly used in the production of foods for babies and toddlers. It is no wonder that comprehensive analyses of pesticide residue levels in these fruits have been carried out since the 1980s and have continued to the present. The results obtained were presented, usually as joint work, at the scientific sessions organised by the Institute of Plant Protection-National Research Institute (IOR) in Poznań and published in the Scientific Works of the Institute (now Progress in Plant Protection/Postępy w Ochronie Roślin) or the Journal of Plant Protection Research, and in many non-periodical publications. These reports included studies carried out by the IOR laboratories in Poznań, Sośnicowice, Rzeszów and Bialystok. The first detailed studies on the presence of pesticide residues in apple fruits by the laboratory in Rzeszów were published in 1991 in the article entitled 'The presence of pesticides in apples of late varieties from the area of south-eastern Poland in the years 1986-1989', in the Annals of the National Institute of Hygiene in Warsaw. These surveys gave the scientific basis for business contacts between the Polish company Alima and the American company Gerber. At the beginning of the 21st century, systematic and complex studies on the deposition of pesticide residues in apples were initiated in Poland. First of all, the levels of active ingredients of plant protection products applied against storage diseases 2-3 weeks before harvest were determined, since these substances usually generate the highest residue levels. The fungicide residues in apples were also assessed during storage in a controlled atmosphere and during processing. Taking into account the need to update the Maximum Residue Levels (MRLs) of pesticides in force in Poland and other European countries, and to rationalise the ways they are determined, numerous field tests on the behaviour of the more important fungicides on mature fruits just before harvest were carried out. The rate of their disappearance and a mathematical equation relating the residue level of a given substance to the applied dose were determined (a minimal sketch of such a disappearance fit is given below). These two parameters made it possible to evaluate the MRLs of pesticides in force at that time and to propose a coherent model for determining them for new substances. The obtained results were assessed in terms of the health risk for adult consumers and children, and used to determine treatment timings such that mature apples could meet the rigorous level of 0.01 mg/kg.
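
A common way to express the disappearance rate mentioned above is a first-order decay R(t) = R0·exp(-kt); fitting ln R against time gives k, from which the pre-harvest interval needed to reach a target residue follows. The sketch below uses invented sampling data purely for illustration and is not the authors' dataset or model.

```python
# Minimal sketch (invented data): first-order residue disappearance fit and the
# time needed to fall below a 0.01 mg/kg target.
import numpy as np

days = np.array([0, 3, 7, 14, 21])                  # hypothetical sampling days
residue = np.array([0.80, 0.55, 0.34, 0.15, 0.06])  # hypothetical residue, mg/kg

slope, log_r0 = np.polyfit(days, np.log(residue), 1)  # linear fit of ln R vs t
k = -slope                                            # disappearance rate constant, 1/day
r0 = np.exp(log_r0)                                   # fitted initial residue, mg/kg

target = 0.01                                         # mg/kg
t_needed = np.log(r0 / target) / k
print(f"k = {k:.3f} 1/day, R0 = {r0:.2f} mg/kg, days to reach {target} mg/kg: {t_needed:.1f}")
```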

Keywords: apple, disappearance, health risk, MRL, pesticide residue, research

Procedia PDF Downloads 244
98 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make the pharmacovigilance monitoring process tedious and time-consuming. Objective: To develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI from the sponsor's safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs based on Hy's law criteria and the pattern of injury (R-value); a minimal sketch of these standard criteria is given below. These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of the criteria were verified by comparing a manual review of all monthly cases with the system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours required by a cognitively laborious, manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases while heightening the vigilance of the drug safety department. Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). The algorithm also carries the potential for universal application, due to its product-agnostic data and keyword mining features. Plans for the tool include developing it into a fully automated application, thereby completely eliminating the manual screening process.
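
For reference, the standard Hy's law and R-value definitions on which the selection criteria are based can be expressed compactly as below. This is a generic sketch, not the sponsor's SQL/Spotfire implementation; the upper-limit-of-normal (ULN) values are typical assumed laboratory limits.

```python
# Minimal sketch (assumed ULN values): R-value pattern of injury and Hy's law flag.
# R = (ALT / ULN_ALT) / (ALP / ULN_ALP)
def r_value(alt, alp, uln_alt=40.0, uln_alp=120.0):
    return (alt / uln_alt) / (alp / uln_alp)

def injury_pattern(alt, alp):
    r = r_value(alt, alp)
    if r >= 5:
        return "hepatocellular"
    if r <= 2:
        return "cholestatic"
    return "mixed"

def meets_hys_law(alt, alp, bilirubin, uln_alt=40.0, uln_bili=1.2):
    """ALT >= 3x ULN with total bilirubin >= 2x ULN and a hepatocellular pattern."""
    return (alt >= 3 * uln_alt and bilirubin >= 2 * uln_bili
            and injury_pattern(alt, alp) == "hepatocellular")

print(meets_hys_law(alt=250.0, alp=130.0, bilirubin=3.1))   # example case
```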

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 126
97 Quantum Mechanics as A Limiting Case of Relativistic Mechanics

Authors: Ahmad Almajid

Abstract:

The idea of unifying quantum mechanics with general relativity is still a dream for many researchers, as physics offers only two paths, no more: Einstein's path, which is mainly based on particle mechanics, and the path of Paul Dirac and others, which is based on wave mechanics. The incompatibility of the two approaches is due to the radical difference in the initial assumptions and the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems: in quantum mechanics, despite its success, the measurement problem and the interpretation of the wave function are still obscure; in special relativity, despite the success of the equivalence of rest mass and energy, the fact that the energy becomes infinite at the speed of light is contrary to logic, because the speed of light is not infinite and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light, and in order to solve these problems one must understand well how to move from relativistic mechanics to quantum mechanics, or rather how to unify them in a way different from Dirac's method, in order to go along with God or Nature, since, as Einstein said, "God doesn't play dice." From De Broglie's hypothesis about wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, new equations in quantum mechanics were derived. In this paper, the two problems in modern physics mentioned above are solved; it can be said that this new approach to quantum mechanics will enable us to unify it with general relativity quite simply. If experiments prove the validity of the results of this research, we will be able in the future to transport matter at speeds close to the speed of light. This research yielded three important results: 1) the Lorentz quantum factor; 2) Planck energy as a limiting case of Einstein energy; 3) a real quantum mechanics, in which new equations for quantum mechanics match and exceed Dirac's equations; these equations were reached in a completely different way from Dirac's method and show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax: Bohr suggested that if particles are not observed, they are in a probabilistic state, whereupon Einstein made his famous claim ("God does not play dice"). Thus, Einstein was right, especially in not accepting the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of our research indicate that God really does not play dice: when the electron disappears, it turns into amicable particles or an elastic medium, according to the equations above. Likewise, Bohr was also right when he indicated that there must be a science like quantum mechanics to monitor and study the motion of subatomic particles, but the picture in front of him was blurry and unclear, so he resorted to the probabilistic interpretation.

Keywords: lorentz quantum factor, planck’s energy as a limiting case of einstein’s energy, real quantum mechanics, new equations for quantum mechanics

Procedia PDF Downloads 57