Search results for: structurational model of technology (Orlikowski)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22514

12434 Comparison of Titanium and Aluminum Functions as Spoilers for Dose Uniformity Achievement in Abutting Oblique Electron Fields: A Monte Carlo Simulation Study

Authors: Faranak Felfeliyan, Parvaneh Shokrani, Maryam Atarod

Abstract:

Introduction: The use of electron beams is widespread in radiotherapy. The main criterion in radiation therapy is to irradiate the tumor volume with the maximum prescribed dose while delivering the minimum dose to the surrounding vital organs. Abutting fields are commonly used in radiotherapy, and the main problem with abutting fields is dose inhomogeneity in the junction region: electron beam divergence and lateral scattering may lead to hot and cold spots there. One solution to this problem is the use of a spoiler to broaden the penumbra and make the dose uniform in the junction region. The goal of this research was to compare the effects of titanium and aluminum as spoilers for achieving dose uniformity in the junction region of oblique electron fields using Monte Carlo simulation. Dose uniformity in the junction region depends on the density, scattering power, and thickness of the spoiler, and on the angle between the two fields. Materials and Methods: In this study, a Monte Carlo model of a Siemens Primus linear accelerator was simulated for a 5 MeV nominal energy electron beam using manufacturer-provided specifications. The BEAMnrc and EGSnrc user codes were used to simulate the treatment head in electron mode (simulation of the beam model). The resulting phase space file was used as a source for dose calculations for a 10×10 cm2 field size at SSD = 100 cm in a 30×30×45 cm3 water phantom using the DOSXYZnrc user code (dose calculations). An automatic MP3-M water phantom tank, the MEPHYSTO mc2 software platform, and a Semi-Flex Chamber-31010 with a sensitive volume of 0.125 cm3 (PTW, Freiburg, Germany) were used for dose distribution measurements; the electron field size was 10×10 cm2 and SSD = 100 cm. Validation of the developed beam model was done by comparing the measured and calculated depth and lateral dose distributions (verification of the electron beam model). Spoilers (modeled with the SLAB component module) placed at the end of the electron applicator were simulated using the previously validated phase space file for a 5 MeV nominal energy and 10×10 cm2 field size (simulation of the spoiler). An in-house routine was developed to calculate the combined isodose curves resulting from the two simulated abutting fields (calculation of dose distribution in abutting electron fields). Results: Verification of the developed 5.9 MeV electron beam model was done by comparing the calculated and measured dose distributions. The maximum percentage difference between calculated and measured PDD was 1%, except for the build-up region, where the difference was 2%. The difference between calculated and measured profiles was 2% at the edges of the field and less than 1% in other regions. The effect of PMMA, aluminum, titanium, and chromium spoilers with thicknesses equivalent to 5 mm of PMMA on dose uniformity in abutting normal electron fields was evaluated. Comparing R90 and the uniformity index of the different materials, aluminum was chosen as the optimum spoiler; titanium gave the maximum surface dose. Thus, aluminum and titanium were chosen for achieving dose uniformity in oblique electron fields. Using the optimum beam spoiler, the junction dose decreased from 160% to 110% for 15 degrees, from 180% to 120% for 30 degrees, from 160% to 120% for 45 degrees, and from 180% to 100% for 60 degrees oblique abutting fields. Using the titanium spoiler, the junction dose decreased from 160% to 120% for 15 degrees, from 180% to 120% for 30 degrees, from 160% to 120% for 45 degrees, and from 180% to 110% for 60 degrees. In addition, the penumbra width at the surface for 15 degrees increased from 10 mm without a spoiler to 15.5 mm with the titanium spoiler; for 30 degrees it increased from 9 mm to 15 mm, for 45 degrees from 4 mm to 6 mm, and for 60 degrees from 5 mm to 8 mm. Conclusion: Using spoilers, the penumbra width at the surface increased, the size and depth of hot spots decreased, and dose homogeneity improved at the junction of abutting electron fields. The dose at the junction region of abutting oblique fields was improved significantly by using a spoiler. The maximum dose at the junction region for 15⁰, 30⁰, 45⁰ and 60⁰ decreased by about 40%, 60%, 40% and 70% respectively for titanium, and by about 50%, 60%, 40% and 80% for aluminum. Although the titanium spoiler significantly decreased the maximum dose, the dose in the junction region could not be reduced below 110%.
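As a minimal illustration of the kind of post-processing used to compare calculated and measured dose curves and to quantify junction homogeneity, the sketch below computes the maximum percentage difference between calculated and measured PDD and a simple junction uniformity metric. The NumPy arrays and their values are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical calculated and measured percentage depth dose (PDD) curves,
# sampled at the same depths and normalized to 100% at dose maximum.
pdd_calc = np.array([78.0, 92.0, 100.0, 96.0, 85.0, 70.0])
pdd_meas = np.array([80.0, 93.0, 100.0, 95.5, 84.5, 69.5])

# Maximum local percentage difference between calculation and measurement.
max_pdd_diff = np.max(np.abs(pdd_calc - pdd_meas))
print(f"max PDD difference: {max_pdd_diff:.1f}%")

# Hypothetical combined lateral profile across the junction of two abutting
# fields (in % of the prescribed dose); the hot spot is the profile maximum.
junction_profile = np.array([95.0, 100.0, 120.0, 118.0, 102.0, 96.0])
print(f"junction hot spot: {junction_profile.max():.0f}% of prescribed dose")

# A simple uniformity index: ratio of maximum to minimum dose in the
# junction region (closer to 1.0 means a more homogeneous junction).
uniformity_index = junction_profile.max() / junction_profile.min()
print(f"uniformity index: {uniformity_index:.2f}")
```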

Keywords: abutting fields, electron beam, radiation therapy, spoilers

Procedia PDF Downloads 155
12433 Development of Method for Recovery of Nickel from Aqueous Solution Using 2-Hydroxy-5-Nonyl- Acetophenone Oxime Impregnated on Activated Charcoal

Authors: A. O. Adebayo, G. A. Idowu, F. Odegbemi

Abstract:

Investigations on the recovery of nickel from aqueous solution using 2-hydroxy-5-nonyl-acetophenone oxime (LIX-84I) impregnated on activated charcoal were carried out. The LIX-84I was impregnated onto the pores of dried activated charcoal by the dry method, and optimum conditions for different equilibrium parameters (pH, adsorbent dosage, extractant concentration, agitation time and temperature) were determined using a simulated nickel solution. Kinetics and adsorption isotherm studies were also carried out. It was observed that the efficiency of recovery with LIX-84I impregnated on charcoal depended on the pH of the aqueous solution, as there was little or no recovery at pH below 4. However, as the pH was raised, the percentage recovery increased and peaked at pH 5.0. The recovery was found to increase with temperature up to 60ºC. It was also observed that nickel adsorbed onto the loaded charcoal best at a lower extractant concentration (0.1 M) when compared with higher concentrations. Similarly, a moderately low dosage (1 g) of the adsorbent showed better recovery than larger dosages. These optimum conditions were used to recover nickel from the leachate of Ni-MH batteries dissolved in sulphuric acid, and a 99.6% recovery was attained. Adsorption isotherm studies showed that the equilibrium data fitted best to the Temkin model, with a negative value of the constant b (-1.017 J/mol) and a high correlation coefficient, R², of 0.9913. Kinetic studies showed that the adsorption process followed a pseudo-second-order model. Thermodynamic parameter values (∆G⁰, ∆H⁰, and ∆S⁰) showed that the adsorption was endothermic and spontaneous. The impregnated charcoal appreciably recovered nickel using a relatively smaller volume of extractant than is required in solvent extraction. Desorption studies showed that the loaded charcoal is reusable three times, and so might be economical for nickel recovery from waste batteries.
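A minimal sketch of how equilibrium data can be fitted to the Temkin isotherm in its linear form, qe = B·ln(A_T) + B·ln(Ce) with B = RT/b_T, is shown below. The concentration and uptake values are hypothetical illustrations, not the study's measurements.

```python
import numpy as np

# Hypothetical equilibrium data: Ce = residual Ni(II) concentration (mg/L),
# qe = amount adsorbed at equilibrium (mg/g). Not the study's actual data.
Ce = np.array([5.0, 12.0, 25.0, 40.0, 60.0])
qe = np.array([8.1, 11.0, 13.4, 14.9, 16.2])

# Temkin isotherm in linear form: qe = B*ln(A_T) + B*ln(Ce), with B = RT/b_T.
slope_B, intercept = np.polyfit(np.log(Ce), qe, 1)
A_T = np.exp(intercept / slope_B)     # Temkin equilibrium binding constant (L/mg)

R, T = 8.314, 298.15                  # gas constant J/(mol*K), temperature K
b_T = R * T / slope_B                 # Temkin constant related to heat of adsorption (J/mol)

# Correlation coefficient of the linear fit, analogous to the reported R².
pred = slope_B * np.log(Ce) + intercept
r2 = 1 - np.sum((qe - pred) ** 2) / np.sum((qe - qe.mean()) ** 2)

print(f"B = {slope_B:.3f} mg/g, A_T = {A_T:.3f} L/mg, b_T = {b_T:.1f} J/mol, R² = {r2:.4f}")
```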

Keywords: charcoal, impregnated, LIX-84I, nickel, recovery

Procedia PDF Downloads 137
12432 Development of Gamma Configuration Stirling Engine Using Polymeric and Metallic Additive Manufacturing for Education

Authors: J. Otegui, M. Agirre, M. A. Cestau, H. Erauskin

Abstract:

The increasing accessibility of mid-priced additive manufacturing (AM) systems offers a chance to incorporate this technology into engineering instruction. Furthermore, AM facilitates the creation of manufacturing designs, enhancing the efficiency of various machines. One example of these machines is the Stirling cycle engine. It encompasses complex thermodynamic machinery, revealing various aspects of mechanical engineering expertise upon closer inspection. In this publication, the application of Stirling Engines fabricated via additive manufacturing techniques will be showcased for the purpose of instructive design and product enhancement. The performance of a Stirling engine's conventional displacer and piston is contrasted. The outcomes of utilizing this instructional tool in teaching are demonstrated.

Keywords: 3D printing, additive manufacturing, mechanical design, stirling engine.

Procedia PDF Downloads 39
12431 The Strategies to Improve the Pedestrian System in the Context of Old Aging

Authors: Yuxiao Jiang, Dong Ma, Mengyu Zhan, Yingxia Yun

Abstract:

China is now entering an aging-society phase, and the pace of aging is accelerating. The proportion of elderly citizens in urban areas is growing larger. Traveling on foot is one of the main travel modes for the elderly, but a poor walking environment and an unsystematic pedestrian network cause inconvenience to older people who travel on foot. The paper analyzes the behavioral characteristics and spatial preferences of the elderly group as well as their new travel demands, finding that several problems exist in the current pedestrian system. Thus, the paper proposes strategies in the areas of planning and design and engineering technology so as to improve the traffic environment and perfect the pedestrian system for the elderly.

Keywords: old aging, pedestrian system, perfection strategies, travel characteristics, future demand

Procedia PDF Downloads 381
12430 Simulation, Design, and 3D Print of Novel Highly Integrated TEG Device with Improved Thermal Energy Harvest Efficiency

Authors: Jaden Lu, Olivia Lu

Abstract:

Despite the remarkable advancement of solar cell technology, the challenge of optimizing total solar energy harvest efficiency persists, primarily due to significant heat loss. This excess heat not only diminishes solar panel output efficiency but also curtails its operational lifespan. A promising approach to address this issue is the conversion of surplus heat into electricity. In recent years, there is growing interest in the use of thermoelectric generators (TEG) as a potential solution. The integration of efficient TEG devices holds the promise of augmenting overall energy harvest efficiency while prolonging the longevity of solar panels. While certain research groups have proposed the integration of solar cells and TEG devices, a substantial gap between conceptualization and practical implementation remains, largely attributed to low thermal energy conversion efficiency of TEG devices. To bridge this gap and meet the requisites of practical application, a feasible strategy involves the incorporation of a substantial number of p-n junctions within a confined unit volume. However, the manufacturing of high-density TEG p-n junctions presents a formidable challenge. The prevalent solution often leads to large device sizes to accommodate enough p-n junctions, consequently complicating integration with solar cells. Recently, the adoption of 3D printing technology has emerged as a promising solution to address this challenge by fabricating high-density p-n arrays. Despite this, further developmental efforts are necessary. Presently, the primary focus is on the 3D printing of vertically layered TEG devices, wherein p-n junction density remains constrained by spatial limitations and the constraints of 3D printing techniques. This study proposes a novel device configuration featuring horizontally arrayed p-n junctions of Bi2Te3. The structural design of the device is subjected to simulation through the Finite Element Method (FEM) within COMSOL Multiphysics software. Various device configurations are simulated to identify optimal device structure. Based on the simulation results, a new TEG device is fabricated utilizing 3D Selective laser melting (SLM) printing technology. Fusion 360 facilitates the translation of the COMSOL device structure into a 3D print file. The horizontal design offers a unique advantage, enabling the fabrication of densely packed, three-dimensional p-n junction arrays. The fabrication process entails printing a singular row of horizontal p-n junctions using the 3D SLM printing technique in a single layer. Subsequently, successive rows of p-n junction arrays are printed within the same layer, interconnected by thermally conductive copper. This sequence is replicated across multiple layers, separated by thermal insulating glass. This integration created in a highly compact three-dimensional TEG device with high density p-n junctions. The fabricated TEG device is then attached to the bottom of the solar cell using thermal glue. The whole device is characterized, with output data closely matching with COMSOL simulation results. Future research endeavors will encompass the refinement of thermoelectric materials. This includes the advancement of high-resolution 3D printing techniques tailored to diverse thermoelectric materials, along with the optimization of material microstructures such as porosity and doping. The objective is to achieve an optimal and highly integrated PV-TEG device that can substantially increase the solar energy harvest efficiency.

Keywords: thermoelectric, finite element method, 3d print, energy conversion

Procedia PDF Downloads 56
12429 Contactless Electromagnetic Detection of Stress Fluctuations in Steel Elements

Authors: M. A. García, J. Vinolas, A. Hernando

Abstract:

Steel is nowadays one of the most important structural materials because of its outstanding mechanical properties. Therefore, in order to pursue a sustainable economic model and to optimize the use of extensive resources, new methods to monitor and prevent failure of steel-based facilities are required. The classical mechanical tests, as for instance building testing, are invasive and destructive. Moreover, for facilities where the steel element is embedded (as in reinforced concrete), these techniques are not directly applicable. Hence, non-invasive monitoring techniques that prevent failure without altering the structural properties of the elements are required. Among them, electromagnetic methods are particularly suitable for non-invasive inspection of the mechanical state of steel-based elements. Magnetoelastic coupling effects induce a modification of the electromagnetic properties of an element under applied stress. Since most steels are ferromagnetic because of their large Fe content, it is possible to inspect their structure and state in a non-invasive way. We present here a distinct electromagnetic method for contactless evaluation of internal stress in steel-based elements. In particular, this method relies on measuring the magnetic induction between two coils with the steel specimen placed between them. We found that the stress-induced alteration of the electromagnetic properties of the steel specimen produced changes in the induction that allowed us to detect stress well below half of the elastic limit of the material. Hence, it represents an outstanding non-invasive method to prevent failure in steel-based facilities. We describe the theoretical model, present experimental results to validate it, and finally show a practical application for the detection of stress and inhomogeneities in train railways.

Keywords: magnetoelastic, magnetic induction, mechanical stress, steel

Procedia PDF Downloads 30
12428 The Amorphousness of the Exposure Sphere

Authors: Nipun Ansal

Abstract:

People guard their beliefs and opinions with their lives. These are beliefs they have formed over a period of time, and they can go to any lengths to defy, desist from, resist and negate any outward stimulus that has the potential to shake them. Cognitive dissonance is the term used to describe this in theory, and every human being, in order to defend himself from cognitive dissonance, applies four rings of defense, viz. Selective Exposure, Selective Perception, Selective Attention, and Selective Retention. This paper is a discursive analysis of how the onslaught of social media, complete with its intrusive weaponry, has amorphized the external ring of defense: selective exposure. The stimulus-response model of communication is one of the most fundamental models, encompassing the communication behaviours of children and the elderly, individuals and masses, humans and animals alike. The paper deliberates on how information bombardment through the uncontrollable channels of social media, Facebook and Twitter in particular, has dismantled our outer sphere of exposure, leading users online to a state of constant dissonance and thus feeding impulsive action-taking. It applies the case study method, citing an example to corroborate how knowledge generation has given in to information overload, and examines the effect this has on decision making. With stimuli increasing in number of encounters, opinion formation precedes knowledge because of the increased demand for participation and the decrease in time for information to permeate from the outer sphere of exposure to the sphere of retention, which, of course, happens through perception and attention. This paper discusses the challenge posed by this fleeting, stimulus-rich, peer-dominated media to the traditional models of communication and meaning-generation.

Keywords: communication, discretion, exposure, social media, stimulus

Procedia PDF Downloads 399
12427 The Basketball Show in the North of France: When the NBA Globalized Culture Meets the Local Carnival Culture

Authors: David Sudre

Abstract:

Today, the National Basketball Association (NBA) is the cultural model of reference for most of the French basketball community stakeholders (players, coaches, team and league managers). In addition to the strong impact it has on how this sport is played and perceived, the NBA also influences the ways professional basketball shows are organized in France (within the Jeep Elite league). The objective of this research is to see how and to what extent the NBA show, as a globalized cultural product, disrupts Jeep Elite's professional basketball cultural codes in the organization of its shows. The article will aim at questioning the intercultural phenomenon at stake in sports cultures in France through the prism of the basketball match. This angle will shed some light on the underlying relationships between local and global elements. The results of this research come from a one-year survey conducted in a small town in northern France, Le Portel, where the Etoile Sportive Saint Michel (ESSM), a Jeep Elite's club, operates. An ethnographic approach was favored. It entailed many participating observations and semi-directive interviews with supporters of the ESSM Le Portel. Through this ethnographic work with the team's fan groups (before the games, during the games and after the games), it was possible for the researchers to understand better all the cultural and identity issues that play out in the "Cauldron," the basketball arena of the ESSM Le Portel. The results demonstrate, at first glance, that many basketball events organized in France are copied from the American model. It seems difficult not to try to imitate the American reference that the NBA represents, whether it be at the French All-Star Game or a Jeep Elite Game at Le Portel. In this case, an acculturation process seems to occur, not only in the way people play but also in the creation of the show (cheerleaders, animations, etc.). However, this American culture of globalized basketball, although re-appropriated, is also being modified by the members of ESSM Le Portel within their locality. Indeed, they juggle between their culture of origin and their culture of reference to build their basketball show within their sociocultural environment. In this way, Le Portel managers and supporters introduce elements that are characteristic of their local culture into the show, such as carnival customs and celebrations, two ingredients that fully contribute to the creation of their identity. Ultimately, in this context of "glocalization," this research will ascertain, on the one hand, that the identity of French basketball becomes harder to outline, and, on the other hand, that the "Cauldron" turns out to be a place to preserve (fantasized) local identities, or even a place of (unconscious) resistance to the dominant model of American basketball culture.

Keywords: basketball, carnival, culture, globalization, identity, show, sport, supporters.

Procedia PDF Downloads 145
12426 Using the SMT Solver to Minimize the Latency and to Optimize the Number of Cores in an NoC-DSP Architectures

Authors: Imen Amari, Kaouther Gasmi, Asma Rebaya, Salem Hasnaoui

Abstract:

The problem of scheduling and mapping data flow applications on multi-core architectures is notoriously difficult. This difficulty is related to the rapid evolution of telecommunication and multimedia systems, accompanied by a rapid increase in user requirements in terms of latency, execution time, consumption, energy, etc. Obtaining an optimal schedule on multi-core DSP (Digital Signal Processor) platforms is a challenging task. In this context, we present a novel technique and algorithm for finding a valid schedule that optimizes the key performance metrics, particularly latency. Our contribution is based on Satisfiability Modulo Theories (SMT) solving technologies, which are strongly driven by industrial applications and needs. This paper describes a scheduling module integrated into our proposed workflow, which is intended to be a successful approach for programming applications based on NoC-DSP platforms. The workflow automatically transforms a Simulink model into a synchronous dataflow (SDF) model. The automatic transformation, followed by SMT solver scheduling, aims to minimize the final latency and other software/hardware metrics in terms of an optimal schedule, and also to find the optimal number of cores to be used. Our proposed workflow takes as its entry point a Simulink file (.mdl or .slx) derived from embedded Matlab functions. We use an approach based on the synchronous and hierarchical behavior of both Simulink and SDF. Hence, running the scheduler within the workflow mentioned above with our proposed SMT solver algorithm refinements produces the best possible schedule in terms of latency and number of cores.
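As a rough illustration of the SMT-based scheduling idea, and not the authors' actual formulation, the sketch below uses the Z3 SMT solver's Python bindings to map a handful of SDF actors with given execution times onto cores while minimizing the makespan. The actor names, execution times, dependencies, and core count are invented for the example.

```python
from z3 import Int, Optimize, And, Or, sat

# Hypothetical SDF actors with worst-case execution times (cycles) and dependencies.
exec_time = {"src": 4, "fir": 9, "fft": 12, "sink": 3}
deps = [("src", "fir"), ("fir", "fft"), ("fft", "sink")]
num_cores = 2

opt = Optimize()
core = {t: Int(f"core_{t}") for t in exec_time}    # core assignment per actor
start = {t: Int(f"start_{t}") for t in exec_time}  # start time per actor

for t in exec_time:
    opt.add(And(core[t] >= 0, core[t] < num_cores, start[t] >= 0))

# Precedence: a dependent actor starts after its predecessor finishes.
for a, b in deps:
    opt.add(start[b] >= start[a] + exec_time[a])

# No two actors may overlap in time on the same core.
tasks = list(exec_time)
for i in range(len(tasks)):
    for j in range(i + 1, len(tasks)):
        a, b = tasks[i], tasks[j]
        opt.add(Or(core[a] != core[b],
                   start[a] + exec_time[a] <= start[b],
                   start[b] + exec_time[b] <= start[a]))

# Minimize the latency (makespan).
makespan = Int("makespan")
for t in exec_time:
    opt.add(makespan >= start[t] + exec_time[t])
opt.minimize(makespan)

if opt.check() == sat:
    m = opt.model()
    print("latency:", m[makespan])
    for t in tasks:
        print(t, "-> core", m[core[t]], "start", m[start[t]])
```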

Keywords: multi-cores DSP, scheduling, SMT solver, workflow

Procedia PDF Downloads 278
12425 Modeling of Particle Reduction and Volatile Compounds Profile during Chocolate Conching by Electronic Nose and Genetic Programming (GP) Based System

Authors: Juzhong Tan, William Kerr

Abstract:

Conching is a critical procedure in chocolate processing, in which special flavors are developed and the smooth mouthfeel texture of the chocolate is achieved through particle size reduction of the cocoa mass and other additives. Therefore, determination of the particle size and volatile compound profile of the cocoa bean is important for chocolate manufacturers to ensure the quality of chocolate products. Currently, precise particle size measurement is usually done by laser scattering, which is expensive and inaccessible to small and medium-sized chocolate manufacturers. Other alternatives, such as micrometers and microscopy, cannot provide good measurements and yield little information. Volatile compound analysis of cocoa during conching has similar problems due to its high cost and limited accessibility. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was inserted into a conching machine and used to monitor the volatile compound profile of chocolate during conching. A model correlating the volatile compound profiles, along with factors including the cocoa content, sugar content, and temperature during conching, to the particle size of chocolate particles was established by genetic programming. The model was used to predict the particle size reduction of chocolates with different cocoa mass to sugar ratios (1:2, 1:1, 1.5:1, 2:1) at eight conching times (15 min, 30 min, 1 h, 1.5 h, 2 h, 4 h, 8 h, and 24 h), and the predictions were compared to laser scattering measurements of the same chocolate samples. 91.3% of the predictions were within ±5% of the laser scattering measurements, and 99.3% were within ±10%.
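A minimal sketch of the genetic programming step is shown below, assuming the gplearn library and a feature matrix of electronic nose sensor readings plus recipe and temperature variables. All data here is synthetic placeholder data, and the feature layout is an assumption, not the study's setup.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)

# Synthetic feature matrix: e-nose sensor responses, cocoa/sugar ratio,
# conching temperature and time. Target: median particle size (microns).
X = rng.random((200, 6))
y = 60 - 45 * X[:, 5] + 5 * X[:, 3] + rng.normal(0, 1, 200)  # toy relationship

# Evolve a symbolic expression mapping the features to particle size.
gp = SymbolicRegressor(population_size=1000,
                       generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.001,
                       random_state=0)
gp.fit(X[:150], y[:150])

pred = gp.predict(X[150:])
within_5pct = np.mean(np.abs(pred - y[150:]) / np.abs(y[150:]) < 0.05)
print("evolved expression:", gp._program)
print(f"fraction of predictions within ±5%: {within_5pct:.2%}")
```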

Keywords: cocoa bean, conching, electronic nose, genetic programming

Procedia PDF Downloads 244
12424 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning

Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath

Abstract:

The human brain, among the most complex and mysterious aspects of the body, harbors vast potential for extensive exploration. Unraveling these enigmas, especially within neural perception and cognition, delves into the realm of neural decoding. Harnessing advancements in generative AI, particularly in Visual Computing, seeks to elucidate how the brain comprehends visual stimuli observed by humans. The paper endeavors to reconstruct human-perceived visual stimuli using Functional Magnetic Resonance Imaging (fMRI). This fMRI data is then processed through pre-trained deep-learning models to recreate the stimuli. Introducing a new architecture named LatentNeuroNet, the aim is to achieve the utmost semantic fidelity in stimuli reconstruction. The approach employs a Latent Diffusion Model (LDM) - Stable Diffusion v1.5, emphasizing semantic accuracy and generating superior quality outputs. This addresses the limitations of prior methods, such as GANs, known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by extracting text from the brain's ventral visual cortex region. This extracted text undergoes processing through a Bootstrapping Language-Image Pre-training (BLIP) encoder before it is injected into the denoising process. In conclusion, a successful architecture is developed that reconstructs the visual stimuli perceived and finally, this research provides us with enough evidence to identify the most influential regions of the brain responsible for cognition and perception.
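To illustrate just the text-conditioned generation stage described above, and not the full LatentNeuroNet pipeline or the fMRI-to-text decoding that is the paper's own contribution, a sketch using the Hugging Face diffusers implementation of Stable Diffusion v1.5 might look like the following. The checkpoint name and the placeholder caption standing in for text decoded from ventral visual cortex activity are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion v1.5 (a latent diffusion model, LDM).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder for the caption decoded from ventral visual cortex fMRI signals;
# in the described architecture this text comes from a BLIP-based decoder.
decoded_caption = "a person walking a dog on a beach at sunset"

# Text conditioning steers the denoising process toward the decoded semantics.
image = pipe(decoded_caption, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("reconstructed_stimulus.png")
```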

Keywords: BLIP, fMRI, latent diffusion model, neural perception.

Procedia PDF Downloads 60
12423 Prediction of Covid-19 Cases and Current Situation of Italy and Its Different Regions Using Machine Learning Algorithm

Authors: Shafait Hussain Ali

Abstract:

Since its outbreak in China, the COVID-19 disease has been caused by the coronavirus SARS-CoV-2. Italy was the first Western country to be severely affected and the first country to take drastic measures to control the disease. In early December 2019, a sudden outbreak of the coronavirus disease, caused by a new coronavirus (SARS-CoV-2) associated with acute respiratory syndrome, occurred in the Chinese city of Wuhan. The World Health Organization declared the epidemic a public health emergency of international concern on January 30, 2020. By February 14, 2020, 49,053 laboratory-confirmed cases and 1,481 deaths had been reported worldwide. The threat of the disease has forced most governments to implement various control measures. It therefore becomes necessary to analyze the Italian data very carefully, in particular to investigate and present the current situation and the number of infected persons in the form of positive cases, deaths, hospitalizations, and other features of infected persons in a simple form. The model used should clearly show the real facts and figures and be understandable to every reader, who can gain real benefit after reading it. The model must include all the features (total positive cases, current positive cases, hospitalized patients, deaths, and recovery rates) that explain the wide range of facts in a very simple form, helpful to the administration of the country.
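As a rough sketch of the kind of classification described (the abstract names the Naive Bayes algorithm in RapidMiner; the version below uses scikit-learn instead), the snippet trains a Gaussian Naive Bayes model on per-region case features. The feature values and labels are entirely invented.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Invented per-region daily records: [current positive cases, hospitalized,
# intensive care, new deaths]; label 1 = "high-risk situation", 0 = otherwise.
X = np.array([
    [1200, 300, 45, 20], [150, 40, 5, 1], [4000, 900, 150, 90],
    [80, 10, 1, 0], [2500, 600, 80, 40], [300, 70, 9, 3],
    [5200, 1100, 200, 120], [60, 8, 1, 0],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GaussianNB()
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
print("predicted class for a new region:", model.predict([[900, 200, 30, 10]])[0])
```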

Keywords: machine learning tools and techniques, rapid miner tool, Naive-Bayes algorithm, predictions

Procedia PDF Downloads 98
12422 Advancing UAV Operations with Hybrid Mobile Network and LoRa Communications

Authors: Annika J. Meyer, Tom Piechotta

Abstract:

Unmanned Aerial Vehicles (UAVs) have increasingly become vital tools in various applications, including surveillance, search and rescue, and environmental monitoring. One common approach to ensure redundant communication systems when flying beyond visual line of sight is for UAVs to employ multiple mobile data modems by different providers. Although widely adopted, this approach suffers from several drawbacks, such as high costs, added weight and potential increases in signal interference. In light of these challenges, this paper proposes a communication framework intermeshing mobile networks and LoRa (Long Range) technology—a low-power, long-range communication protocol. LoRaWAN (Long Range Wide Area Network) is commonly used in Internet of Things applications, relying on stationary gateways and Internet connectivity. This paper, however, utilizes the underlying LoRa protocol, taking advantage of the protocol’s low power and long-range capabilities while ensuring efficiency and reliability. Conducted in collaboration with the Potsdam Fire Department, the implementation of mobile network technology in combination with the LoRa protocol in small UAVs (take-off weight < 0.4 kg), specifically designed for search and rescue and area monitoring missions, is explored. This research aims to test the viability of LoRa as an additional redundant communication system during UAV flights as well as its intermeshing with the primary, mobile network-based controller. The methodology focuses on direct UAV-to-UAV and UAV-to-ground communications, employing different spreading factors optimized for specific operational scenarios—short-range for UAV-to-UAV interactions and long-range for UAV-to-ground commands. This explored use case also dramatically reduces one of the major drawbacks of LoRa communication systems, as a line of sight between the modules is necessary for reliable data transfer. Something that UAVs are uniquely suited to provide, especially when deployed as a swarm. Additionally, swarm deployment may enable UAVs that have lost contact with their primary network to reestablish their connection through another, better-situated UAV. The experimental setup involves multiple phases of testing, starting with controlled environments to assess basic communication capabilities and gradually advancing to complex scenarios involving multiple UAVs. Such a staged approach allows for meticulous adjustment of parameters and optimization of the communication protocols to ensure reliability and effectiveness. Furthermore, due to the close partnership with the Fire Department, the real-world applicability of the communication system is assured. The expected outcomes of this paper include a detailed analysis of LoRa's performance as a communication tool for UAVs, focusing on aspects such as signal integrity, range, and reliability under different environmental conditions. Additionally, the paper seeks to demonstrate the cost-effectiveness and operational efficiency of using a single type of communication technology that reduces UAV payload and power consumption. By shifting from traditional cellular network communications to a more robust and versatile cellular and LoRa-based system, this research has the potential to significantly enhance UAV capabilities, especially in critical applications where reliability is paramount. The success of this paper could pave the way for broader adoption of LoRa in UAV communications, setting a new standard for UAV operational communication frameworks.
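To make the spreading-factor trade-off mentioned above concrete, the short sketch below computes LoRa symbol time, approximate bit rate, and time-on-air for a small telemetry payload at different spreading factors, using the standard LoRa (Semtech SX127x-style) modulation formulas. The bandwidth, coding rate, and payload size are example values, not the paper's configuration.

```python
import math

def lora_airtime(payload_bytes, sf, bw_hz=125_000, cr=1, preamble=8,
                 explicit_header=True, crc=True, low_dr_opt=None):
    """Approximate LoRa time-on-air; cr = 1..4 corresponds to coding rates 4/5..4/8."""
    t_sym = (2 ** sf) / bw_hz                       # symbol duration (s)
    if low_dr_opt is None:                          # low data rate optimization,
        low_dr_opt = t_sym > 0.016                  # typically enabled for SF11/12 at 125 kHz
    de = 1 if low_dr_opt else 0
    ih = 0 if explicit_header else 1
    payload_symbols = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble + 4.25) * t_sym
    return t_preamble + payload_symbols * t_sym

# Example: a 20-byte UAV telemetry frame at 125 kHz bandwidth, coding rate 4/5.
# Lower SF = faster, shorter range (UAV-to-UAV); higher SF = slower, longer range (UAV-to-ground).
for sf in range(7, 13):
    bitrate = sf * (4 / (4 + 1)) * (125_000 / 2 ** sf)   # approximate raw bit rate
    print(f"SF{sf}: ~{bitrate:7.0f} bit/s, airtime {lora_airtime(20, sf) * 1000:7.1f} ms")
```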

Keywords: LoRa communication protocol, mobile network communication, UAV communication systems, search and rescue operations

Procedia PDF Downloads 32
12421 Creating Futures: Using Fictive Scripting Methods for Institutional Strategic Planning

Authors: Christine Winberg, James Garraway

Abstract:

Many key university documents, such as vision and mission statements and strategic plans, are aspirational and future-oriented. There is a wide range of future-oriented methods that are used in planning applications, ranging from mathematical modelling to expert opinions. Many of these methods have limitations, and planners using these tools might, for example, make the technical-rational assumption that their plans will unfold in a logical and inevitable fashion, thus underestimating the many complex forces that are at play in planning for an unknown future. This is the issue that this study addresses. The overall project aim was to assist a new university of technology in developing appropriate responses to its social responsibility, graduate employability and research missions in its strategic plan. The specific research question guiding the research activities and approach was: how might the use of innovative future-oriented planning tools enable or constrain a strategic planning process? The research objective was to engage collaborating groups in the use of an innovative tool to develop and assess future scenarios, for the purpose of developing deeper understandings of possible futures and their challenges. The scenario planning tool chosen was ‘fictive scripting’, an analytical technique derived from Technology Forecasting and Innovation Studies. Fictive scripts are future projections that also take into account the present shape of the world and current developments. The process thus began with a critical diagnosis of the present, highlighting its tensions and frictions. The collaborative groups then developed fictive scripts, each group producing a future scenario that foregrounded different institutional missions, their implications and possible consequences. The scripts were analyzed with a view to identifying their potential contribution to the university’s strategic planning exercise. The unfolding fictive scripts revealed a number of insights in terms of unexpected benefits, unexpected challenges, and unexpected consequences. These insights were not evident in previous strategic planning exercises. The contribution that this study offers is to show how better choices can be made and potential pitfalls avoided through a systematic foresight exercise. When universities develop strategic planning documents, they are looking into the future. In this paper it is argued that the use of appropriate tools for future-oriented exercises, can help planners to understand more fully what achieving desired outcomes might entail, what challenges might be encountered, and what unexpected consequences might ensue.

Keywords: fictive scripts, scenarios, strategic planning, technological forecasting

Procedia PDF Downloads 116
12420 Artificial Intelligence in Bioscience: The Next Frontier

Authors: Parthiban Srinivasan

Abstract:

With recent advances in computational power and access to enough data in the biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule. Redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms; we found the Random Forest algorithm to be the better choice for this analysis. We captured non-linear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties. We expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss these techniques for their usefulness in biomedical and health informatics.
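A minimal sketch of the descriptor-generation and Random Forest step, assuming RDKit for computing molecular descriptors from SMILES and scikit-learn for the model, is shown below. The SMILES strings, descriptor selection, and activity labels are illustrative, not drawn from the 150,000-compound library.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

# Illustrative SMILES with toy activity labels (1 = active, 0 = inactive).
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC", "C1CCCCC1"]
labels = np.array([0, 0, 1, 1, 0])

def featurize(smi):
    """Compute a small descriptor (feature) vector for one molecule."""
    mol = Chem.MolFromSmiles(smi)
    return [Descriptors.MolWt(mol),
            Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol),
            Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol)]

X = np.array([featurize(s) for s in smiles])

# Random Forest averages many randomized decision trees to capture
# non-linear descriptor-activity relationships.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, labels)
print("predicted activity for aspirin-like SMILES:",
      model.predict([featurize("CC(=O)Oc1ccccc1C(=O)O")])[0])
```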

Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction

Procedia PDF Downloads 350
12419 Ultrasonic Degradation of Acephate: Effects of Operating Parameters

Authors: Naina Deshmukh

Abstract:

With the wide production, consumption, and disposal of pesticides in the world, the concerns over their human and environmental health impacts are rapidly growing. Among developing treatment technologies, Ultrasonication, as an emerging and promising technology for the removal of pesticides in the aqueous environment, has attracted the attention of many researchers in recent years. The degradation of acephate in aqueous solutions was investigated under the influence of ultrasound irradiation (20 kHz) in the presence of heterogeneous catalysts titanium dioxide (TiO2) and Zinc oxide (ZnO). The influence of various factors such as amount of catalyst (0.25, 0.5, 0.75, 1.0, 1.25 g/l), initial acephate concentration (100, 200, 300, 400 mg/l), and pH (3, 5, 7, 9, 11) were studied. The optimum catalyst dose was found to be 1 g/l of TiO2 and 1.25 g/l of ZnO for acephate at 100 mg/l, respectively. The maximum percentage degradation of acephate was observed at pH 11 for catalysts TiO2 and ZnO, respectively.

Keywords: ultrasonic degradation, acephate, TiO2, ZnO, heterogeneous catalyst

Procedia PDF Downloads 45
12418 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method

Authors: Dangut Maren David, Skaf Zakwan

Abstract:

Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses in industry is the significant cost associated with a delay in service delivery due to system downtime. Most of those businesses are interested in predicting those problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need for monitoring system activities and enhancing system-to-system or component-to-component interactions; this has resulted in a large generation of data known as big data. Analysis of big data is increasingly important; however, due to complexity inherent in the dataset, such as imbalanced classification problems, it becomes extremely difficult to build a model with accurate, high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large data generated from industrial processes inherently come with different degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem exists pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement is then developed to predict component replacement in advance by exploring aircraft historical data. The approach is based on a hybrid ensemble method which improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness, in terms of performance, of our approach using real-world aircraft operation and maintenance datasets spanning over 7 years. Our approach shows better performance compared to other similar approaches. We also validate our approach's strength for handling multiclass imbalanced datasets; our results show good performance compared to other baseline classifiers.
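A minimal sketch of one common way to combine minority-class resampling with an ensemble classifier is shown below, assuming the imbalanced-learn and scikit-learn libraries. This is a generic illustration of handling imbalance, not the paper's hybrid ensemble, and the synthetic data stands in for the proprietary aircraft maintenance records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

# Synthetic, highly imbalanced data: only ~2% of samples are "replacement needed".
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98, 0.02],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# Oversample the minority class during training only, then fit the ensemble.
clf = Pipeline([
    ("smote", SMOTE(random_state=42)),
    ("gbm", GradientBoostingClassifier(random_state=42)),
])
clf.fit(X_train, y_train)

# Per-class precision/recall shows how well the minority class is recovered.
print(classification_report(y_test, clf.predict(X_test), digits=3))
```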

Keywords: prognostics, data-driven, imbalance classification, deep learning

Procedia PDF Downloads 162
12417 The Effect of Dendrobium nobile Lindl. Alkaloids on the Blood Glucose and Amyloid Precursor Protein Metabolic Pathways in Db/Db Mice

Authors: Juan Huang, Nanqu Huang, Jingshan Shi, Yu Qiu

Abstract:

Objectives: There are pathophysiological connections between type 2 diabetes mellitus (T2DM) and Alzheimer's disease (AD), and research on drugs with hypoglycemic and beta-amyloid (Aβ)-clearing effects has great therapeutic potential for AD. Dendrobium nobile Lindl. alkaloids (DNLA) are among the active compounds of Dendrobium nobile Lindl. In this study, we attempted to verify the hypoglycemic effect of DNLA and investigate its effects on the amyloid precursor protein (APP) metabolic pathway in the hippocampus of db/db mice. Methods: Four-week-old male C57BL/KsJ mice served as the control group, and db/db mice of the same age and sex were assigned to the following groups: model, DNLA-L (20 mg/kg), DNLA-M (40 mg/kg), and DNLA-H (80 mg/kg). Mice were then treated with the different concentrations of DNLA for 17 weeks. Fasting blood glucose (FBG) was measured by the glucose oxidase assay every week from the 4th week to the last week. The protein expression of β-amyloid 1-42 (Aβ1-42), β-site amyloid precursor protein-cleaving enzyme 1 (BACE1), and APP was examined by Western blotting. Results: The concentration of FBG and the protein expression of Aβ1-42, BACE1, and APP were increased in the hippocampus of the model group. Moreover, DNLA not only significantly decreased the concentration of FBG but also reduced the protein expression of Aβ1-42, BACE1, and APP in the hippocampus of db/db mice in a dose-dependent manner. Conclusions: DNLA can decrease the protein expression of Aβ1-42 in the hippocampus of db/db mice, and the mechanism may involve the APP metabolic pathway.

Keywords: Alzheimer's disease, type 2 diabetes mellitus, β-site amyloid precursor protein-cleaving enzyme 1, traditional Chinese medicines, beta-amyloid

Procedia PDF Downloads 227
12416 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space

Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson

Abstract:

Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity and resulting MRI features of GBMs complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), we explored a multi-compartmental MRI signal equation that takes into account tissue compartments and their associated volumes, with input coming from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas were comprised of vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model to generate simulated T2/FLAIR MR images. Individual compartments' T2 values in the signal equation were either taken from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms – 9200 ms). T2 maps were calculated from the simulated images, and T2 values were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema. The ROI T2 values were compared to T2 values reported in the literature. The expanding extracellular space scheme gave T2 values similar to the literature values, whereas the static scheme gave much lower T2 values; no matter what T2 was assigned to edema, the intensities did not come close to literature values. Expanding the extracellular space is therefore necessary to achieve simulated edema intensities commensurate with acquired MRIs.
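A minimal sketch of a multi-compartmental T2-weighted signal calculation of the kind described is shown below, assuming a simple volume-fraction-weighted sum of mono-exponential compartments and a log-linear fit to recover an apparent T2. The compartment volume fractions and T2 values are illustrative placeholders, not the study's parameters.

```python
import numpy as np

# Echo times (ms) for a simulated multi-echo T2-weighted acquisition.
TE = np.array([20.0, 60.0, 100.0, 150.0, 200.0])

# Illustrative compartments in one edematous voxel: (volume fraction, T2 in ms).
compartments = {
    "tumor cells":          (0.30,   90.0),
    "normal tissue":        (0.20,   80.0),
    "extracellular edema":  (0.50, 1500.0),  # the compartment whose T2 is varied
}

# Multi-compartmental signal: S(TE) = sum_i v_i * exp(-TE / T2_i)
signal = sum(v * np.exp(-TE / t2) for v, t2 in compartments.values())

# Apparent (mono-exponential) T2 recovered from the simulated signal,
# analogous to computing a T2 map from the simulated images.
slope, _ = np.polyfit(TE, np.log(signal), 1)
print(f"apparent voxel T2 ≈ {-1.0 / slope:.0f} ms")
```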

Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling

Procedia PDF Downloads 226
12415 A Study of Cloud Computing Solution for Transportation Big Data Processing

Authors: Ilgin Gökaşar, Saman Ghaffarian

Abstract:

The need for fast processing of transportation big data, such as ridership data (e.g., smartcard data) and traffic operation data (e.g., traffic detector data), which requires a great deal of computational power, is incontrovertible in Intelligent Transportation Systems. Nowadays, cloud computing is one of the important subjects and a popular information technology solution for data processing. It enables users to process enormous amounts of data without having their own computing power. Thus, it can also be a good choice for transportation big data processing. This paper intends to examine how cloud computing can enhance transportation big data processing by contrasting its advantages and disadvantages and discussing cloud computing features.

Keywords: big data, cloud computing, Intelligent Transportation Systems, ITS, traffic data processing

Procedia PDF Downloads 453
12414 Securing Web Servers by the Intrusion Detection System (IDS)

Authors: Yousef Farhaoui

Abstract:

An IDS is a tool which is used to improve the level of security. We present in this paper different architectures of IDS. We will also discuss measures that define the effectiveness of IDS and the very recent works of standardization and homogenization of IDS. At the end, we propose a new model of IDS called BiIDS (IDS Based on the two principles of detection) for securing web servers and applications by the Intrusion Detection System (IDS).

Keywords: intrusion detection, architectures, characteristic, tools, security, web server

Procedia PDF Downloads 408
12413 Transitioning towards a Circular Economy in the Textile Industry: Approaches to Address Environmental Challenges

Authors: Mozhdeh Khalili Kordabadi

Abstract:

Textiles play a vital role in human life, particularly in the form of clothing. However, the alarming rate at which textiles end up in landfills presents a significant environmental risk. With approximately one garbage truck per second being filled with discarded textiles, urgent measures are required to mitigate this trend. Governments and responsible organizations are calling upon various stakeholders to shift from a linear economy to a circular economy model in the textile industry. This article highlights several key approaches that can be undertaken to address this pressing issue. These approaches include the creation of renewable raw material sources, rethinking production processes, maximizing the use and reuse of textile products, implementing reproduction and recycling strategies, exploring redistribution to new markets, and finding innovative means to extend the lifespan of textiles. By adopting these strategies, the textile industry can contribute to a more sustainable and environmentally friendly future.

Introduction: Textiles, particularly clothing, are essential to human existence. However, the rapid accumulation of textiles in landfills poses a significant threat to the environment. This article explores the urgent need for the textile industry to transition from a linear economy model to a circular economy model. The linear model, characterized by the creation, use, and disposal of textiles, is unsustainable in the long term. By adopting a circular economy approach, the industry can minimize waste, reduce environmental impact, and promote sustainable practices. This article outlines key approaches that can be undertaken to drive this transition.

Approaches to Address Environmental Challenges: Creation of renewable raw material sources: exploring and promoting the use of renewable and sustainable raw materials, such as organic cotton, hemp, and recycled fibers, can significantly reduce the environmental footprint of textile production. Rethinking production processes: implementing cleaner production techniques, optimizing resource utilization, and minimizing waste generation are crucial steps in reducing the environmental impact of textile manufacturing. Maximizing use and reuse of textile products: encouraging consumers to prolong the lifespan of textile products through proper care, maintenance, and repair services can reduce the frequency of disposal and promote a culture of sustainability. Reproduction and recycling strategies: investing in innovative technologies and infrastructure to enable efficient reproduction and recycling of textiles can close the loop and minimize waste generation. Redistribution of textiles to new markets: exploring opportunities to redistribute textiles to new and parallel markets, such as resale platforms, can extend their lifecycle and prevent premature disposal. Improvising means to extend textile lifespan: encouraging design practices that prioritize durability, versatility, and timeless aesthetics can contribute to prolonging the lifespan of textiles.

Conclusion: The textile industry must urgently transition from a linear economy to a circular economy model to mitigate the adverse environmental impact caused by textile waste. By implementing the outlined approaches, such as sourcing renewable raw materials, rethinking production processes, promoting reuse and recycling, exploring new markets, and extending the lifespan of textiles, stakeholders can work together to create a more sustainable and environmentally friendly textile industry. These measures require collective action and collaboration between governments, organizations, manufacturers, and consumers to drive positive change and safeguard the planet for future generations.

Keywords: textiles, circular economy, environmental challenges, renewable raw materials, production processes, reuse, recycling, redistribution, textile lifespan extension.

Procedia PDF Downloads 86
12412 Investigation of Fusion Zone Microstructures in Plasma Arc Welding of Austenitic Stainless Steel (SS-304L) with Low Carbon Steel (A-36) with or without Filler Alloy

Authors: Shan-e-Fatima, Mushtaq Khan, Syed Imran Hussian

Abstract:

Plasma arc welding technology was used for welding SS-304L to A-36. Two different optimized butt-welded joints were produced, one using the austenitic filler alloy E-309L and one by direct fusion, at 45 A and 2 mm/sec with a plasma gas flow rate of 0.5 LPM. Microstructure analysis of the weld bead was carried out. The results reveal a complex heterogeneous microstructure in the sample welded with the austenitic filler alloy, whereas full martensite was found in the directly fused sample.

Keywords: fusion zone microstructure, stainless steel, low carbon steel, plasma arc welding

Procedia PDF Downloads 565
12411 Diagnostics via Biophysical Resistotrons

Authors: Matt Vellkorn, Mara Sarinski

Abstract:

The field of advanced diagnostics is changing very rapidly. One new technology that has not yet been fully exploited is the resistotron. A resistotron is a physical device that is used to detect the presence of low-energy alpha particles, and it has been used for many years in nuclear physics as an alpha particle detector. Since resistotrons are used in nuclear physics, they have to be accurate and must be able to differentiate between alpha particles and other types of radiation. Resistotrons are primarily used for safety: they are employed in areas where people or animals can be exposed to radiation, a typical example being the treatment of nuclear waste. As with any nuclear physics instrument, a resistotron has to be very accurate and reliable. In the past, the instrument was very expensive because it was made out of copper; today, resistotrons are made out of brass, and the main difference is that brass is much less expensive than copper.

Keywords: biosensors, resistotrons, biophysics, diagnostics

Procedia PDF Downloads 112
12410 Evaluation of Soil Erosion Risk and Prioritization for Implementation of Management Strategies in Morocco

Authors: Lahcen Daoudi, Fatima Zahra Omdi, Abldelali Gourfi

Abstract:

In Morocco, as in most Mediterranean countries, water scarcity is a common situation because of low and unevenly distributed rainfall. The expansion of irrigated lands, as well as the growth of urban and industrial areas and tourist resorts, contributes to an increase in water demand. Therefore, in the 1960s Morocco embarked on an ambitious program to increase the number of dams to boost water retention capacity. However, the decrease in the capacity of these reservoirs caused by sedimentation is a major problem; it is estimated at 75 million m3/year. Dams and reservoirs become unusable for their intended purposes due to sedimentation in large rivers resulting from soil erosion. Soil erosion is an important driving force in the processes affecting the landscape and has become one of the most serious environmental problems, raising much interest throughout the world. Monitoring soil erosion risk is an important part of soil conservation practice, and the estimation of soil loss risk is the first step toward successful control of water erosion. The aim of this study is to estimate the soil loss risk and its spatial distribution across the different regions of Morocco and to prioritize areas for soil conservation interventions. The approach followed is the Revised Universal Soil Loss Equation (RUSLE) using remote sensing and GIS, which is the most popular empirically based model used globally for erosion prediction and control. This model has been tested in many agricultural watersheds around the world, particularly for large-scale basins, due to the simplicity of the model formulation and the easy availability of the dataset. The spatial distribution of the annual soil loss was elaborated by combining several factors: rainfall erosivity, soil erodibility, topography, and land cover. The average annual soil loss estimated in several watersheds of Morocco varies from 0 to 50 t/ha/year. Watersheds characterized by high erosion vulnerability are located in the north (Rif Mountains) and, more particularly, in the central part of Morocco (High Atlas Mountains). This variation in vulnerability is highly correlated with slope variation, which indicates that the topographic factor is the main agent of soil erosion within these catchments. These results could be helpful for planning natural resource management and for implementing the sustainable long-term management strategies that are necessary for soil conservation and for increasing the projected economic life of the dams implemented.
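A minimal sketch of the per-pixel RUSLE calculation, A = R · K · LS · C · P, applied to raster factor maps is given below. The tiny NumPy arrays stand in for the GIS raster layers, and all values are illustrative rather than taken from the study.

```python
import numpy as np

# Illustrative 3x3 raster layers for one small watershed (real inputs would be
# GIS rasters derived from rainfall, soil, DEM, and land-cover data).
R  = np.array([[80, 85, 90], [82, 88, 95], [84, 92, 100]], dtype=float)  # rainfall erosivity
K  = np.full((3, 3), 0.30)                                               # soil erodibility
LS = np.array([[0.5, 1.2, 3.0], [0.6, 1.5, 4.2], [0.8, 2.0, 5.5]])       # slope length/steepness
C  = np.full((3, 3), 0.25)                                               # cover management
P  = np.full((3, 3), 1.0)                                                # support practice

# RUSLE: average annual soil loss A (t/ha/year) per pixel.
A = R * K * LS * C * P
print(A.round(1))

# Simple prioritization: flag pixels above an intervention threshold.
threshold = 25.0
print("high-risk pixels:", int((A > threshold).sum()))
```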

Keywords: soil loss, RUSLE, GIS-remote sensing, watershed, Morocco

Procedia PDF Downloads 449
12409 Applying Failure Modes and Effect Analysis Concept in a Global Software Development Process

Authors: Camilo Souza, Lidia Melo, Fernanda Terra, Francisco Caio, Marcelo Reis

Abstract:

SIDIA is a research and development (R&D) institute that takes part in Samsung's global software development process. The SIDIA Model Team (MT) is part of Samsung's Mobile Division Area, which is responsible for the development of the Android releases embedded in Samsung mobile devices. Basically, in this software development process, the kickoff occurs in some strategic countries (e.g., South Korea), where some software requirements are applied and the initial software tests are performed. When the software achieves a more mature level, a new branch is derived, and development continues in subsidiaries in other strategic countries (e.g., SIDIA-Brazil). However, even in the newly created branches, there are many interactions between developers of different nationalities in order to fix bugs reported during test activities, apply specific requirements from partners, and develop new features. Although the GSD strategy contributes to improving software development, it also introduces some challenges. In this paper, we share the initial results of applying the failure modes and effects analysis (FMEA) concept in the software development process followed by the SIDIA Model Team. The main goal was to identify and mitigate potential process failures through the application of recommended actions. The initial results show that applying the FMEA concept allows us to identify the potential failures in our GSD process as well as to propose corrective actions to mitigate them. Finally, FMEA encouraged members of different teams to take actions that contribute to improving our GSD process.
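A minimal sketch of the core FMEA bookkeeping is shown below: it computes a risk priority number (RPN = severity × occurrence × detection) for each potential process failure and ranks them to decide where recommended actions should be applied first. The failure modes and rating values are invented examples, not SIDIA's actual findings.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (easily detected) .. 10 (hard to detect)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: severity x occurrence x detection.
        return self.severity * self.occurrence * self.detection

# Invented GSD process failure modes, for illustration only.
failure_modes = [
    FailureMode("Requirement misunderstood across sites", 8, 5, 6),
    FailureMode("Bug fix overwritten during branch merge", 7, 4, 3),
    FailureMode("Partner requirement applied to wrong branch", 9, 2, 7),
]

# Prioritize: the highest RPN drives which recommended actions to apply first.
for fm in sorted(failure_modes, key=lambda f: f.rpn, reverse=True):
    print(f"RPN {fm.rpn:4d}  {fm.description}")
```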

Keywords: global software development, potential failures, FMEA, recommended actions

Procedia PDF Downloads 215
12408 Humoral and Cytokine Responses to Major Human Cytomegalovirus Antigens in Mouse Model

Authors: Sahar Essa, Hussain A. Safar, Raj Raghupathy

Abstract:

Human cytomegalovirus (CMV) continues to be a source of severe complications in immunologically immature and immunocompromised hosts. Effective CMV vaccines that help diminish CMV disease in transplant patients and prevent congenital infection are therefore of great importance. Although the exact roles of the host defense mechanisms are not fully identified, virus-specific antibodies and cytokine responses are known to be involved in controlling CMV infections. The CMV envelope glycoprotein B (UL55/gB), the matrix proteins (UL83/pp65, UL99/pp28, UL32/pp150), and the assembly protein UL80a/pp38 are known targets of antiviral immune responses. We immunized mice intraperitoneally with these five commercially obtained CMV-related proteins and evaluated their ability to induce specific antibody responses (in-house immunoassay) and cytokine production (commercial assay) in a mouse model. We observed a significant CMV-antigen-specific antibody response to pp38 and pp65 (E/C > 2.0, p < 0.001). Mice immunized with pp38 had significantly higher concentrations of GM-CSF, IFN-α, IL-2, IL-4, IL-5, and IL-17A (p < 0.05). Mice immunized with pp65 showed significantly higher concentrations of GM-CSF, IFN-γ, IL-2, IL-4, IL-10, IL-12, IL-17A, and TNF-α. Th1-to-Th2 cytokine ratios revealed a Th1 cytokine bias in mice immunized with pp38, pp65, pp150, and gB. We suggest that stimulation with multiple CMV-related proteins, including the pp38, pp65, and gB antigens, would allow both humoral and cellular immune responses to be efficiently activated, making these proteins appropriate CMV antigens for future vaccines.
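
The Th1/Th2 bias reported above is commonly summarized as a ratio of representative Th1 to Th2 cytokine concentrations; the short Python sketch below shows such a computation with made-up values, purely for illustration and not with the study's data.

    # Illustrative Th1/Th2 ratio; cytokine selections and concentrations are hypothetical.
    th1 = {"IFN-gamma": 210.0, "IL-2": 95.0}            # pg/mL, example Th1 cytokines
    th2 = {"IL-4": 40.0, "IL-5": 25.0, "IL-10": 60.0}   # pg/mL, example Th2 cytokines

    mean_th1 = sum(th1.values()) / len(th1)
    mean_th2 = sum(th2.values()) / len(th2)
    ratio = mean_th1 / mean_th2
    print(f"Th1/Th2 ratio = {ratio:.2f} ({'Th1 bias' if ratio > 1 else 'Th2 bias'})")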

Keywords: cytomegalovirus, UL99/pp28, UL80a/pp38, UL83/pp65, UL32/pp150, UL55/gB, CMV-antigen-specific antibody, CMV antigen-specific cytokine responses

Procedia PDF Downloads 73
12407 The Internationalization of Capital Market Influencing Debt Sustainability's Impact on the Growth of the Nigerian Economy

Authors: Godwin Chigozie Okpara, Eugine Iheanacho

Abstract:

The paper set out to assess the sustainability of debt in the Nigerian economy. Specifically, it sought to determine the level of debt sustainability and its impact on the growth of the economy; to establish whether the internationalization of the capital market has positively influenced the impact of debt sustainability on economic growth; and to ascertain the direction of causality between external debt sustainability and GDP growth. In the light of these objectives, ratio analysis was employed to determine debt sustainability. Our findings revealed that 1986–1994 and 1999–2004 were periods of severely unsustainable borrowing. The unit root test showed that the variables of the growth model were integrated of order one, I(1), and the cointegration test provided evidence of long-run stability. To account for the onset of capital market internationalization, the researchers employed a structural break approach, applying the Chow breakpoint test to the vector error correction model (VECM). The VECM results showed that debt sustainability, measured by the debt-to-GDP ratio, exerts a negative and significant impact on the growth of the economy, while the debt burden, measured by the debt-to-export and debt service-to-export ratios, has a negative though insignificant effect on GDP growth. The Chow test result indicated that the internationalization of the capital market has no significant effect on the impact of debt overhang on the growth of the economy. The Granger causality test indicates a feedback effect from economic growth to the debt sustainability indicators. On the basis of these findings, the researchers made recommendations which, if followed, should go a long way toward reducing the debt burden and fostering economic growth.
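
For readers unfamiliar with the structural break procedure, the following Python sketch shows a textbook Chow breakpoint test on synthetic data: it compares the residual sum of squares of a pooled regression against regressions fitted separately before and after a candidate break. The variable names and data are hypothetical and do not reproduce the study's series or results.

    # Textbook Chow breakpoint test on synthetic data (illustrative only).
    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(0)
    n, break_idx = 60, 30                               # e.g., annual data, candidate break
    debt_gdp = rng.normal(50.0, 10.0, n)                # hypothetical debt-to-GDP ratio (%)
    growth = 0.05 * debt_gdp + rng.normal(0.0, 1.0, n)  # hypothetical GDP growth
    growth[break_idx:] -= 0.08 * debt_gdp[break_idx:]   # regime change after the break

    X = sm.add_constant(debt_gdp)
    k = X.shape[1]                                      # number of estimated parameters

    ssr_pooled = sm.OLS(growth, X).fit().ssr
    ssr_1 = sm.OLS(growth[:break_idx], X[:break_idx]).fit().ssr
    ssr_2 = sm.OLS(growth[break_idx:], X[break_idx:]).fit().ssr

    # Chow F statistic: restricted (pooled) vs. unrestricted (split-sample) fit.
    f_stat = ((ssr_pooled - (ssr_1 + ssr_2)) / k) / ((ssr_1 + ssr_2) / (n - 2 * k))
    p_value = 1.0 - stats.f.cdf(f_stat, k, n - 2 * k)
    print(f"Chow F = {f_stat:.2f}, p = {p_value:.4f}")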

Keywords: debt sustainability, internationalization, capital market, cointegration, Chow test

Procedia PDF Downloads 424
12406 A Low-Area Fully-Reconfigurable Hardware Design of Fast Fourier Transform System for 3GPP-LTE Standard

Authors: Xin-Yu Shih, Yue-Qu Liu, Hong-Ru Chou

Abstract:

This paper presents a low-area, fully-reconfigurable Fast Fourier Transform (FFT) hardware design for the 3GPP-LTE communication standard. It fully supports 32 different FFT sizes, up to 2048 FFT points. In addition, a special processing element is developed to enable reconfigurable computing, and a first-in first-out (FIFO) scheduling scheme design technique is proposed for hardware-friendly FIFO resource arrangement. In a chip realization synthesized with TSMC 40 nm CMOS technology, the hardware circuit occupies a core area of only 0.2325 mm2 and dissipates 233.5 mW at a maximum operating frequency of 250 MHz.
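
As a point of reference for the transform sizes such a design must cover, the short Python sketch below is a plain software model of a radix-2 decimation-in-time FFT for power-of-two lengths up to 2048. It is illustrative only and does not reflect the paper's single-path delay feedback hardware architecture or its FIFO scheduling scheme.

    # Radix-2 decimation-in-time FFT reference model (software illustration only).
    import cmath

    def fft(x):
        n = len(x)
        if n == 1:
            return list(x)
        assert n % 2 == 0, "length must be a power of two"
        even = fft(x[0::2])                 # FFT of even-indexed samples
        odd = fft(x[1::2])                  # FFT of odd-indexed samples
        out = [0j] * n
        for k in range(n // 2):
            tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
            out[k] = even[k] + tw
            out[k + n // 2] = even[k] - tw
        return out

    # Example: a 2048-point transform, the largest size mentioned for 3GPP-LTE.
    signal = [complex(i % 7, 0.0) for i in range(2048)]
    spectrum = fft(signal)
    print(abs(spectrum[0]))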

Keywords: reconfigurable, fast Fourier transform (FFT), single-path delay feedback (SDF), 3GPP-LTE

Procedia PDF Downloads 269
12405 Thermodynamics of Water Condensation on an Aqueous Organic-Coated Aerosol Aging via Chemical Mechanism

Authors: Yuri S. Djikaev

Abstract:

A large subset of aqueous aerosols can be coated, immediately upon formation, with various amphiphilic organic compounds whose hydrophilic moieties attach to the aqueous aerosol core while the hydrophobic moieties are exposed to the air, thus forming a hydrophobic coating. We study the thermodynamics of water condensation on such an aerosol whose hydrophobic organic coating is concomitantly processed by chemical reactions with reactive atmospheric species. Such processing (chemical aging) enables the initially inert aerosol to serve as a nucleating center for water condensation. The most probable pathway of such aging involves atmospheric hydroxyl radicals that abstract hydrogen atoms from the hydrophobic moieties of the surface organics (first step); the resulting radicals are quickly oxidized by ubiquitous atmospheric oxygen molecules to produce surface-bound peroxyl radicals (second step). Taking these two reactions into account, we derive an expression for the free energy of formation of an aqueous droplet on an organic-coated aerosol. The model is illustrated by numerical calculations. The results suggest that the formation of aqueous cloud droplets on such aerosols is most likely to occur via Kohler activation rather than via nucleation, and the model allows one to determine the threshold parameters necessary for Kohler activation. Numerical results also corroborate previous suggestions that one can neglect some details of aerosol chemical composition when investigating aerosol effects on climate.
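
For orientation, classical Kohler theory (the standard textbook form, not the authors' derived free-energy expression) relates the equilibrium saturation ratio S over a solution droplet of diameter D to a curvature (Kelvin) term and a solute (Raoult) term,

    \ln S(D) \approx \frac{4 \sigma_w M_w}{\rho_w R T D} - \frac{6 n_s M_w}{\pi \rho_w D^3},

where \sigma_w is the surface tension of water, M_w and \rho_w its molar mass and density, n_s the moles of dissolved solute, R the gas constant, and T the temperature; Kohler activation occurs when the ambient saturation ratio exceeds the maximum of S(D).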

Keywords: aqueous aerosols, organic coating, chemical aging, cloud condensation nuclei, Kohler activation, cloud droplets

Procedia PDF Downloads 384