Search results for: structural equation modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9005

1295 Long-Term Modal Changes in International Traffic - Modelling Exercise

Authors: Tomasz Komornicki

Abstract:

The primary aim of the presentation is to model border traffic and, at the same time, to explain which economic variables the intensity of border traffic depended on in the long term. For this purpose, long series of traffic data on the Polish borders were used. Models were estimated for three variants of explanatory variables: a) for total arrivals and departures (total movement of Poles and foreigners), b) for arrivals and departures of Poles, and c) for arrivals and departures of foreigners. Each of the defined explanatory variables entered the models as the natural logarithm of the number of persons. Data from 1994-2017 were used for modeling (for internal Schengen borders, the years 1994-2007). Information on the number of people arriving in and leaving Poland was collected for a total of 303 border crossings. On the basis of the analyses carried out, it was found that the main factors determining border traffic are differences in the level of economic development (GDP), the condition of the economy (level of unemployment), and the degree of border permeability. Differences in the prices of goods (fuels, tobacco, and alcohol products) and services (mainly basic ones, e.g., hairdressing services) are also statistically significant for border traffic. Such a relationship exists mainly on the eastern border (border traffic determined largely by differences in the prices of goods) and on the border with Germany (in the first analysed period, border traffic was determined mainly by the prices of goods, and later - after Poland's accession to the EU and the Schengen area - also by the prices of services). The models also confirmed differences in the set of factors shaping the volume and structure of border traffic on the Polish borders resulting from general geopolitical conditions, with the year 2007 being an important caesura, after which the classical population mobility factors became visible.
The results obtained were additionally related to the changes in traffic that occurred as a result of the COVID-19 pandemic and the Russian aggression against Ukraine.
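The log-linear specification described in the abstract can be sketched as follows; the series, coefficient values, and variable names here are hypothetical illustrations, not the study's actual border-traffic data.

```python
import numpy as np

# Hypothetical yearly data (NOT the study's actual series): log arrivals
# regressed on a GDP gap, an unemployment gap, and a permeability index.
rng = np.random.default_rng(0)
n_years = 24  # 1994-2017
gdp_gap = rng.normal(0.5, 0.1, n_years)        # log GDP-per-capita difference
unemp_gap = rng.normal(-0.2, 0.05, n_years)    # unemployment-rate difference
permeability = np.linspace(0.3, 1.0, n_years)  # open crossings / capacity

# Simulated "true" relationship plus noise; the response is the natural
# logarithm of the number of persons crossing, as in the models above.
log_traffic = (10 + 2.0 * gdp_gap - 1.5 * unemp_gap
               + 0.8 * permeability + rng.normal(0, 0.05, n_years))

# Ordinary least squares via a design matrix with an intercept column
X = np.column_stack([np.ones(n_years), gdp_gap, unemp_gap, permeability])
beta, *_ = np.linalg.lstsq(X, log_traffic, rcond=None)
print(beta)  # [intercept, GDP effect, unemployment effect, permeability effect]
```

In a log-linear model of this kind, each coefficient is read as an approximate percentage change in traffic per unit change in the explanatory variable.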

Keywords: border, modal structure, transport, Ukraine

Procedia PDF Downloads 115
1294 A Proper Continuum-Based Reformulation of Current Problems in Finite Strain Plasticity

Authors: Ladislav Écsi, Roland Jančo

Abstract:

Contemporary multiplicative plasticity models assume that the body's intermediate configuration consists of an assembly of locally unloaded neighbourhoods of material particles that cannot be reassembled together to give the overall stress-free intermediate configuration since the neighbourhoods are not necessarily compatible with each other. As a result, the plastic deformation gradient, an inelastic component in the multiplicative split of the deformation gradient, cannot be integrated, and the material particle moves from the initial configuration to the intermediate configuration without a position vector and a plastic displacement field when plastic flow occurs. Such behaviour is incompatible with the continuum theory and the continuum physics of elastoplastic deformations, and the related material models can hardly be denoted as truly continuum-based. The paper presents a proper continuum-based reformulation of current problems in finite strain plasticity. It will be shown that the incompatible neighbourhoods in real material are modelled by the product of the plastic multiplier and the yield surface normal when the plastic flow is defined in the current configuration. The incompatible plastic factor can also model the neighbourhoods as the solution of the system of differential equations whose coefficient matrix is the above product when the plastic flow is defined in the intermediate configuration. The incompatible tensors replace the compatible spatial plastic velocity gradient in the former case or the compatible plastic deformation gradient in the latter case in the definition of the plastic flow rule. They act as local imperfections but have the same position vector as the compatible plastic velocity gradient or the compatible plastic deformation gradient in the definitions of the related plastic flow rules. 
The unstressed intermediate configuration, the unloaded configuration after the plastic flow, where the residual stresses have been removed, can always be calculated by integrating either the compatible plastic velocity gradient or the compatible plastic deformation gradient. However, the corresponding plastic displacement field becomes permanent with both elastic and plastic components. The residual strains and stresses originate from the difference between the compatible plastic/permanent displacement field gradient and the prescribed incompatible second-order tensor characterizing the plastic flow in the definition of the plastic flow rule, which becomes an assignment statement rather than an equilibrium equation. The above also means that the elastic and plastic factors in the multiplicative split of the deformation gradient are, in reality, gradients and that there is no problem with the continuum physics of elastoplastic deformations. The formulation is demonstrated in a numerical example using the regularized Mooney-Rivlin material model and modified equilibrium statements where the intermediate configuration is calculated, whose analysis results are compared with the identical material model using the current equilibrium statements. The advantages and disadvantages of each formulation, including their relationship with multiplicative plasticity, are also discussed.
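In standard continuum-plasticity notation (our sketch, not taken verbatim from the paper), the multiplicative split and a flow rule of the kind discussed above read:

```latex
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{p}, \qquad
\mathbf{l}^{p} = \dot{\mathbf{F}}^{p}\bigl(\mathbf{F}^{p}\bigr)^{-1}, \qquad
\mathbf{d}^{p} = \dot{\lambda}\,\frac{\partial f}{\partial \boldsymbol{\sigma}}
```

where $\mathbf{F}^{e}$ and $\mathbf{F}^{p}$ are the elastic and plastic factors of the deformation gradient, $\dot{\lambda}$ is the plastic multiplier, and $\partial f/\partial\boldsymbol{\sigma}$ is the yield-surface normal; the product $\dot{\lambda}\,\partial f/\partial\boldsymbol{\sigma}$ is the term the abstract identifies as modelling the incompatible neighbourhoods when the flow is defined in the current configuration.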

Keywords: finite strain plasticity, continuum formulation, regularized Mooney-Rivlin material model, compatibility

Procedia PDF Downloads 123
1293 Physical Characterization of SnO₂ Films Prepared by the Rheotaxial Growth and Thermal Oxidation (RGTO) Method

Authors: A. Kabir, D. Boulainine, I. Bouanane, N. Benslim, B. Boudjema, C. Sedrati

Abstract:

SnO₂ is an n-type semiconductor with a direct gap of about 3.6 eV. It is widely used in several domains, such as nanocrystalline photovoltaic cells. Owing to its interesting physico-chemical properties, this material has been elaborated in thin-film form using different deposition techniques, and SnO₂ properties were found to depend directly on the deposition method parameters. In this work, the RGTO method (Rheotaxial Growth and Thermal Oxidation) was used to prepare SnO₂ thin films. This technique consists of the thermal oxidation of Sn films deposited onto a substrate heated to a temperature close to the Sn melting point (232 °C). Such a process allows the preparation of high-porosity tin oxide films, which are very suitable for gas sensing. The structural, morphological, and optical properties of the films before and after thermal oxidation were studied using X-ray diffraction (XRD), scanning electron microscopy (SEM), UV-Visible spectroscopy, and Fourier transform infrared spectroscopy (FTIR). XRD patterns showed a polycrystalline structure of the cassiterite phase of SnO₂. The grain growth was found to be affected by the oxidation temperature, and this grain-size evolution was compared against existing grain growth models in order to understand the growth mechanism. SEM images showed that the as-deposited Sn film was formed of spherical agglomerations of different diameters. As a function of the oxidation temperature, the shape of these agglomerations changed due to the introduction of oxygen ions, and the deformed spheres started to interconnect by forming bridges between them. The volume porosity, determined from the UV-Visible reflection spectra, changes as a function of the oxidation temperature. The variation of the crystalline fraction, determined from FTIR spectra, correlated with the variation of both the grain size and the volume porosity.

Keywords: tin oxide, RGTO, grain growth, volume porosity, crystalline fraction

Procedia PDF Downloads 258
1292 Swift Rising Pattern of Emerging Construction Technology Trends in the Construction Management

Authors: Gayatri Mahajan

Abstract:

Modern Construction Technology (CT) spans a broad range of advanced techniques and practices covering recent developments in material technology, design methods, quantity surveying, facility management, services, structural analysis and design, and other areas of management education. Adopting recent digital transformation technology is a present-day need to speed up business and is also the basis of construction improvement. Incorporating and practicing technologies such as cloud-based communication and collaboration solutions, mobile apps and 5G, 3D printing, BIM and digital twins, CAD/CAM, AR/VR, big data, IoT, wearables, blockchain, modular construction, offsite manufacturing, prefabrication, robotics, drones, and GPS-controlled equipment expedites progress in the construction industry (CI). The resources used are journal research articles, web searches, books, theses, reports/surveys, magazines, etc. The outline of the research organization for this study is framed at four distinct levels: conceptualization, resources, innovative and emerging trends in CI, and better methods for the completion of construction projects. The present study, conducted during 2020-2022, reveals that implementing these technologies improves standards, planning, security, well-being, sustainability, and economics. Application uses, benefits, impact, advantages/disadvantages, limitations and challenges, and policies are dealt with to provide information to architects and builders for the smooth completion of projects. Results show that construction technology trends vary from 4 to 15 for the CI and eventually reach 27 for Civil Engineering (CE). The perspective of the most recent innovations, trends, tools, challenges, and solutions is highly embraced in the field of construction.
The incorporation of the above technologies during the COVID-19 pandemic and the post-pandemic period might lead to a focus on finding effective ways to adopt new-age technologies for the CI.

Keywords: BIM, drones, GPS, mobile apps, 5G, modular construction, robotics, 3D printing

Procedia PDF Downloads 105
1291 Streamlining Cybersecurity Risk Assessment for Industrial Control and Automation Systems: Leveraging the National Institute of Standards and Technology’s Risk Management Framework (RMF) Using Model-Based Systems Engineering (MBSE)

Authors: Gampel Alexander, Mazzuchi Thomas, Sarkani Shahram

Abstract:

The cybersecurity landscape is constantly evolving, and organizations must adapt to the changing threat environment to protect their assets. The implementation of the NIST Risk Management Framework (RMF) has become critical in ensuring the security and safety of industrial control and automation systems. However, cybersecurity professionals are facing challenges in implementing RMF, leading to systems operating without authorization and being non-compliant with regulations. The current approach to RMF implementation based on business practices is limited and insufficient, leaving organizations vulnerable to cyberattacks resulting in the loss of personal consumer data and critical infrastructure details. To address these challenges, this research proposes a Model-Based Systems Engineering (MBSE) approach to implementing cybersecurity controls and assessing risk through the RMF process. The study emphasizes the need to shift to a modeling approach, which can streamline the RMF process and eliminate bloated structures that make it difficult to receive an Authorization-To-Operate (ATO). The study focuses on the practical application of MBSE in industrial control and automation systems to improve the security and safety of operations. It is concluded that MBSE can be used to solve the implementation challenges of the NIST RMF process and improve the security of industrial control and automation systems. The research suggests that MBSE provides a more effective and efficient method for implementing cybersecurity controls and assessing risk through the RMF process. The future work for this research involves exploring the broader applicability of MBSE in different industries and domains. The study suggests that the MBSE approach can be applied to other domains beyond industrial control and automation systems.

Keywords: authorization-to-operate (ATO), industrial control systems (ICS), model-based systems engineering (MBSE), risk management framework (RMF)

Procedia PDF Downloads 95
1290 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for both cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an impressive accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
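The paper's within-a-threshold evaluation can be sketched as follows; the synthetic data, feature names, and plain least-squares fit are our own stand-ins, not the paper's IPL dataset or tuned models.

```python
import numpy as np

# Illustrative sketch: fit a multiple-regression score predictor and report
# the share of predictions within +/- 10 runs of the true total, mirroring
# the threshold-based precision metric described in the abstract.
rng = np.random.default_rng(42)
n = 500
overs = rng.uniform(5, 20, n)        # overs completed so far (hypothetical)
run_rate = rng.uniform(6, 11, n)     # current run rate (hypothetical)
wickets = rng.integers(0, 10, n)     # wickets fallen (hypothetical)
# Synthetic final score: extrapolated runs, penalised by wickets lost
score = run_rate * 20 - 3 * wickets + rng.normal(0, 8, n)

X = np.column_stack([np.ones(n), overs, run_rate, wickets])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
pred = X @ beta

within_10 = np.mean(np.abs(pred - score) <= 10)
print(f"predictions within +/-10 runs: {within_10:.2%}")
```

In practice one would fit on historical matches and evaluate on a held-out season rather than on the training data as this sketch does.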

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 54
1289 Manganese Imidazole Complexes: Electrocatalytic Hydrogen Production

Authors: Vishakha Kaim, Mookan Natarajan, Sandeep Kaur-Ghumaan

Abstract:

Hydrogen is one of the most abundant elements in the earth’s crust and is considered the simplest element in existence. It is not found naturally as a gas on earth and thus has to be manufactured. Hydrogen can be produced from a variety of sources, i.e., water, fossil fuels, or biomass, and it is a byproduct of many chemical processes. It is also considered a secondary source of energy, commonly referred to as an energy carrier. Though hydrogen is not widely used as a fuel, it still has the potential for greater use in the future as a clean and renewable source of energy. Electrocatalysis is one of the important routes for the production of hydrogen that could contribute to this prominent challenge. Metals such as platinum and palladium are considered efficient for hydrogen production but have limited applications. As a result, a wide variety of metal complexes of earth-abundant elements with varied ligand environments have been explored for the electrochemical production of hydrogen. In nature, the [FeFe] hydrogenase enzyme present in Desulfovibrio desulfuricans and Clostridium pasteurianum catalyses the reversible interconversion of protons and electrons into dihydrogen. Since the first structure for the enzyme was reported in the 1990s, a range of iron complexes has been synthesized as structural and functional mimics of the enzyme active site. Mn is one of the most desirable elements for sustainable catalytic transformations, immediately behind Fe and Ti. Only a limited number of manganese complexes have been reported in the last two decades as catalysts for proton reduction. Furthermore, redox reactions can be carried out in a facile manner due to the capability of manganese complexes to be stable in different oxidation states. Herein, four µ2-thiolate-bridged manganese complexes, [Mn₂(CO)₆(μ-S₂N₄C₁₄H₁₀)] 1, [Mn₂(CO)₇(μ-S₂N₄C₁₄H₁₀)] 2, [Mn₂(CO)₆(μ-S₄N₂C₁₄H₁₀)] 3, and [Mn₂(CO)(μ-S₄N₂C₁₄H₁₀)] 4, are reported, which have been synthesized and characterized.
The cyclic voltammograms of the complexes displayed irreversible reduction peaks in the range -0.9 to -1.3 V (vs. Fc⁺/Fc in acetonitrile at 0.1 V s⁻¹). The complexes were catalytically active towards proton reduction in the presence of trifluoroacetic acid, as seen from electrochemical investigations.

Keywords: earth abundant, electrocatalytic, hydrogen, manganese

Procedia PDF Downloads 173
1288 Comparison of Steel and Composite Analysis of a Multi-Storey Building

Authors: Çiğdem Avcı Karataş

Abstract:

Mitigation of structural damage caused by earthquakes and reduction of fatalities are among the main concerns of engineers in seismically prone zones of the world. To achieve this aim, many technologies have been developed in the last decades and applied in the construction and retrofit of structures. On the one hand, Turkey is well known as a country with a high level of seismicity; on the other hand, steel-composite structures appear competitive in this country today in comparison with other types of structures, for example, steel-only or concrete structures. Composite construction is the dominant form of construction for the multi-storey building sector. The reason why composite construction is often so good can be expressed in one simple way - concrete is good in compression and steel is good in tension. By joining the two materials together structurally, these strengths can be exploited to result in a highly efficient design. The reduced self-weight of composite elements has a knock-on effect by reducing the forces in the elements supporting them, including the foundations. The floor depth reductions that can be achieved using composite construction can also provide significant benefits in terms of the costs of services and the building envelope. The scope of this paper covers the analysis, materials take-off, cost analysis, and economic comparison of a multi-storey building with composite and steel frames. The aim of this work is to show that designing load-carrying systems as composite is more economical than designing them as steel. The design of the nine-storey building under consideration is carried out according to the 2007 Turkish Earthquake Code using static and dynamic analysis methods. For the analyses of the steel and composite systems, plastic analysis methods have been used, with the steel system checked in compliance with EC3 and the composite system checked in compliance with EC4.
At the end of the comparison, it is revealed that the composite load-carrying system is more economical than the steel load-carrying system, considering both the materials to be used in the load-carrying system and the workmanship to be spent on the job.

Keywords: composite analysis, earthquake, steel, multi-storey building

Procedia PDF Downloads 571
1287 Analysis and Identification of Trends in Electric Vehicle Crash Data

Authors: Cody Stolle, Mojdeh Asadollahipajouh, Khaleb Pafford, Jada Iwuoha, Samantha White, Becky Mueller

Abstract:

Battery-electric vehicles (BEVs) are growing in sales and popularity in the United States as an alternative to traditional internal combustion engine vehicles (ICEVs). BEVs are generally heavier than corresponding models of ICEVs, with large battery packs located beneath the vehicle floorpan in a “skateboard” chassis, and have front and rear crush space available in the trunk and “frunk,” or front trunk. The geometrical and frame differences between the vehicles may lead to incompatibilities with gasoline vehicles during vehicle-to-vehicle crashes, as well as in run-off-road crashes with roadside barriers, which were designed for lighter ICEVs with higher centers of mass and dedicated structural chassis. Crash data were collected from 10 states spanning a five-year period between 2017 and 2021. Vehicle Identification Number (VIN) codes were processed with the National Highway Traffic Safety Administration (NHTSA) VIN decoder to distinguish BEV models from ICEV models. Crashes were filtered to isolate only vehicles produced between 2010 and 2021, and the crash circumstances (weather, time of day, maximum injury) were compared between BEVs and ICEVs. In Washington, 436,613 crashes were identified that satisfied the selection criteria, and 3,371 of these crashes (0.77%) involved a BEV. The numbers of crashes noting a fire were comparable between BEVs and ICEVs of similar model years (0.3% and 0.33%, respectively), and no differences were discernible for time of day, weather conditions, road geometry, or other prevailing factors (e.g., run-off-road). However, crashes involving BEVs rose rapidly; 31% of all BEV crashes occurred in 2021 alone. Results indicate that BEVs are performing comparably to ICEVs, and events surrounding BEV crashes are statistically indistinguishable from ICEV crashes.
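The screening steps described above (VIN-based classification, model-year filtering, fire tallies) might look like the following minimal sketch; the records and the prefix lookup table are invented stand-ins for the NHTSA VIN decoder and the state crash files.

```python
from collections import Counter

# Hypothetical crash records; a real pipeline would decode full 17-character
# VINs via the NHTSA decoder rather than match toy prefixes.
crashes = [
    {"vin_prefix": "5YJ3", "model_year": 2020, "fire": False},  # BEV-like
    {"vin_prefix": "1FTF", "model_year": 2015, "fire": False},  # ICEV-like
    {"vin_prefix": "5YJ3", "model_year": 2021, "fire": True},   # BEV-like
    {"vin_prefix": "1HGC", "model_year": 2008, "fire": False},  # too old, dropped
]
BEV_PREFIXES = {"5YJ3"}  # stand-in for a real VIN-decoder lookup table

# Keep 2010-2021 model years, then split BEV vs ICEV and tally fire crashes
kept = [c for c in crashes if 2010 <= c["model_year"] <= 2021]
counts = Counter(("BEV" if c["vin_prefix"] in BEV_PREFIXES else "ICEV",
                  c["fire"]) for c in kept)

bev_total = counts[("BEV", True)] + counts[("BEV", False)]
print(bev_total, counts[("BEV", True)])  # BEV crashes, BEV fire crashes
```

The same tallies, computed per model year and state, would feed the fire-rate and circumstance comparisons reported in the abstract.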

Keywords: battery-electric vehicles, transportation safety, infrastructure crashworthiness, run-off-road crashes, ev crash data analysis

Procedia PDF Downloads 89
1286 Numerical Study of Laminar Separation Bubble Over an Airfoil Using γ-ReθT SST Turbulence Model on Moderate Reynolds Number

Authors: Younes El Khchine, Mohammed Sriti

Abstract:

A parametric study has been conducted to analyse the flow around the S809 wind turbine airfoil in order to better understand the characteristics and effects of the laminar separation bubble (LSB) on aerodynamic design for maximizing wind turbine efficiency. Numerical simulations were performed at low Reynolds number by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations on a C-type structured mesh using the γ-Reθt turbulence model. A two-dimensional study was conducted for a chord Reynolds number of 1×10⁵ and angles of attack (AoA) between 0 and 20.15 degrees. The simulation results obtained for the aerodynamic coefficients at various AoA were compared with XFoil results. A sensitivity study was performed to examine the effects of the Reynolds number and free-stream turbulence intensity on the location and length of the laminar separation bubble and the aerodynamic performance of the wind turbine. The results show that increasing the Reynolds number delays laminar separation on the upper surface of the airfoil. The increase in Reynolds number also accelerates the transition process, and the turbulent reattachment point moves closer to the leading edge owing to an earlier reattachment of the turbulent shear layer; this leads to a considerable reduction in the length of the separation bubble as the Reynolds number is increased. Increasing the level of free-stream turbulence intensity decreases the separation bubble length and increases the lift coefficient while having a negligible effect on the stall angle. As the AoA increased, the bubble on the suction surface of the airfoil was found to move upstream toward the leading edge, causing earlier laminar separation.

Keywords: laminar separation bubble, turbulence intensity, S809 airfoil, transition model, Reynolds number

Procedia PDF Downloads 85
1285 Neo-liberalism and Theoretical Explanation of Poverty in Africa: The Nigerian Perspective

Authors: Omotoyosi Bilikies Ilori, Adekunle Saheed Ajisebiyawo

Abstract:

After the Second World War, there was an emergence of a new stage of capitalist globalization with its Neo-liberal ideology. Global economic and political restructurings affected third-world countries like Nigeria. Neo-liberalism is the driving force of globalization, which is the latest manifestation of imperialism and engenders endemic poverty in Nigeria. Poverty is severe and widespread in Nigeria. Poverty entails a situation where a person lives on less than one dollar per day and has no access to the basic necessities of life; it is inhuman and a breach of human rights. The Nigerian government initiated some strategies in the past to help in poverty reduction. Neo-liberalism manifested in Third World countries such as Nigeria through the privatization of public enterprises, trade liberalization, and the rollback of state investment in providing important social services. These main ideas of Neo-liberalism produced poverty in Nigeria and also encouraged the abandonment of the social contract between the government and the people. There is thus a gap in the provision of social services and subsidies for the masses, all of which Neo-liberal ideological positions contradict. This paper is a qualitative study that draws data from secondary sources. The theoretical framework is anchored in the market theory of capitalist globalization and public choice theory. The objectives of this study are to (i) examine the impacts of Neo-liberalism on poverty in Nigeria as a typical example of a Third World country and (ii) find out the effects of Neo-liberalism on the provision of social services, subsidies, and employment. The findings from this study reveal that (i) the adoption of the Neo-liberal ideology by the Nigerian government has led to increased poverty and poor provision of social services and employment in Nigeria, and (ii) there is an increase in foreign debts, which compounds the poverty situation in Nigeria.
This study makes the following recommendations: (i) Government should adopt strategies that are pro-poor to eradicate poverty; (ii) The Trade Unions and the masses should develop strategies to challenge Neo-liberalism and reject Neo-liberal ideology.

Keywords: neo-liberalism, poverty, employment, poverty reduction, structural adjustment programme

Procedia PDF Downloads 86
1284 Adopting a Comparative Cultural Studies Approach to Teaching Writing in the Global Classroom

Authors: Madhura Bandyopadhyay

Abstract:

Teaching writing within multicultural and multiethnic communities poses many unique challenges, not the least of which is that of intercultural communication. When the writing is in English, pedagogical imperatives often encounter the universalizing tendencies of standardization of both language use and structural parameters, which are often at odds with maintaining local practices that preserve cultural pluralism. English often becomes the contact zone within which the individual identities of students play out against the standardization imperatives of the larger world. Writing classes can become instruments of assimilation of ethnic minorities to a larger globalizing or nationalistic agenda. Hence, for those outside the standard practices of writing English, adaptability towards a mastery of the practices valued as standard becomes the focus of teaching, taking away from the diversity of local English use and other modes of critical thinking. In a very multicultural and multiethnic context such as the US or Singapore, these dynamics become very important. This paper will argue that multiethnic writing classrooms can greatly benefit from taking up a cultural studies approach whereby the students’ lived environments and experiences are analyzed as cultural texts to produce writing. Such an approach eliminates the limitations both of using literary texts as foci of discussion, as in traditional approaches to teaching writing, and of the current trend of teaching composition without using texts at all. By bringing students’ lived experiences into the classroom and analyzing them as cultural compositions stressing the ability to communicate across cultures, cultural competency is valued rather than adaptability, and pluralistic experiences are privileged as valuable even as universal shared experiences are found.
Specifically, while teaching writing in English in a multicultural classroom, a cultural studies approach makes both teacher and student aware of the diversity of the English language as it exists in our global context in the students’ experience while making space for diversity in critical thinking, structure and organization of writing effective in an intercultural context.

Keywords: English, multicultural, teaching, writing

Procedia PDF Downloads 509
1283 KTiPO4F: The Negative Electrode Material for Potassium Batteries

Authors: Vahid Ramezankhani, Keith J. Stevenson, Stanislav. S. Fedotov

Abstract:

Lithium-ion batteries (LIBs) play a pivotal role in achieving the key objective of “zero-carbon emission” as countries agreed to a 1.5 °C global warming target under the Paris Agreement. Nowadays, due to the tremendous mobile and stationary consumption of small- and large-format LIBs, the demand for, and consequently the price of, such energy storage devices has risen. These challenges originate from the scarcity of the major critical materials applied in these batteries, such as cobalt (Co), nickel (Ni), lithium (Li), graphite (G), and manganese (Mn). Therefore, it is imperative to consider alternative elements to address the limitation of resources around the globe. Potassium (K) is considered an effective alternative to Li since K is a more abundant element, has a higher operating potential, a faster diffusion rate, and the lowest Stokes radius in comparison to its closest neighbors in the periodic table (Li and Na). Among all reported materials for metal-ion batteries, those with the general formula AMXO₄L [A = Li, Na, K; M = Fe, Ti, V; X = P, S, Si; L = O, F, OH] have the potential to be applied both as anode and cathode, enabling researchers to investigate them in a full symmetric battery format. KTiPO₄F (a KTP-type structural material) has been previously reported by our group as a promising cathode with decent electronic properties. Herein, we report the synthesis, crystal structure characterization, morphology, and K-ion storage properties of KTiPO₄F. Our investigation reveals that KTiPO₄F delivers a discharge capacity > 150 mAh/g at 26.6 mA/g (C/5 current rate) in the potential window of 0.001-3 V. Surprisingly, the cycling performance of a C-KTiPO₄F//K cell is stable for 1000 cycles at 130 mA/g (C current rate), retaining a capacity > 130 mAh/g.
More interestingly, we succeeded in assembling full symmetric batteries in which carbon-coated KTiPO₄F serves as both the negative and positive electrode, delivering > 70 mAh/g in the potential range of 0.001-4.2 V.
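As a rough cross-check of the reported capacities (our own back-of-the-envelope estimate, not a figure from the abstract), Faraday's law gives the one-electron theoretical capacity of KTiPO4F:

```python
# Theoretical gravimetric capacity: Q = n * F / (3.6 * M) in mAh/g
F_CONST = 96485.0  # C/mol, Faraday constant
# Approximate molar mass of KTiPO4F in g/mol (K + Ti + P + 4*O + F)
M = 39.10 + 47.87 + 30.97 + 4 * 16.00 + 19.00
n_electrons = 1  # a single Ti-based redox event assumed

capacity_mAh_g = n_electrons * F_CONST / (3.6 * M)
print(f"{capacity_mAh_g:.0f} mAh/g")
```

The resulting value of roughly 133 mAh/g sits close to the reported 130 mAh/g cycling capacity; under these assumptions, the >150 mAh/g initial discharge would imply slightly more than one electron per formula unit.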

Keywords: anode material, potassium battery, chemical characterization, electrochemical properties

Procedia PDF Downloads 221
1282 Suitable Models and Methods for the Steady-State Analysis of Multi-Energy Networks

Authors: Juan José Mesas, Luis Sainz

Abstract:

The motivation for the development of this paper lies in the need for energy networks to reduce losses, improve performance, optimize their operation, and benefit from the interconnection capacity with other networks enabled for other energy carriers. These interconnections generate interdependencies between energy networks, which requires suitable models and methods for their analysis. Traditionally, the modeling and study of energy networks have been carried out independently for each energy carrier. Thus, there are well-established models and methods for the steady-state analysis of electrical networks, gas networks, and thermal networks separately. The aim is to extend and combine them adequately so that the steady-state analysis of networks with multiple energy carriers can be addressed in an integrated way. Firstly, the added value of multi-energy networks, their operation, and the basic principles that characterize them are explained. In addition, two current aspects of great relevance are presented: the storage technologies and the coupling elements used to interconnect one energy network with another. Secondly, the characteristic equations of the different energy networks necessary to carry out the steady-state analysis are detailed. The electrical network, the natural gas network, and the thermal network of heat and cold are considered in this paper. After the presentation of the equations, a particular case of the steady-state analysis of a specific multi-energy network is studied. This network is represented graphically, the interconnections between the different energy carriers are described, their technical data are presented, and the equations previously introduced theoretically are formulated and developed. Finally, the two iterative numerical resolution methods considered in this paper are presented, as well as the resolution procedure and the results obtained.
The pros and cons of the application of both methods are explained. It is verified that the results obtained for the electrical network (voltages in modulus and angle), the natural gas network (pressures), and the thermal network (mass flows and temperatures) are correct since they comply with the distribution, operation, consumption and technical characteristics of the multi-energy network under study.
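To make the flavor of such an iterative resolution concrete, here is a minimal Newton-Raphson sketch on an invented two-equation coupling of one electric bus and one gas node through a gas-fired generator; the residual functions, coefficients, and per-unit values are illustrative assumptions, not the network or methods studied in the paper.

```python
# Toy sketch (not the paper's case study): an electric bus with voltage
# magnitude v, coupled to a gas node with pressure p through a gas-fired
# generator whose fuel demand grows with electric output.
def residuals(x):
    v, p = x
    f1 = v * v - 0.9                 # electric power balance vs. 0.9 p.u. load
    q = 0.4 * v * v                  # generator fuel demand (invented coupling)
    f2 = 1.0 - p * p - q * q         # Weymouth-type pipeline: p_src^2 - p^2 = K q^2
    return f1, f2

def newton(x, tol=1e-10, h=1e-7):
    """Newton-Raphson on the 2x2 system with a finite-difference Jacobian."""
    for _ in range(50):
        f1, f2 = residuals(x)
        if abs(f1) < tol and abs(f2) < tol:
            break
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            xp = list(x)
            xp[j] += h
            g1, g2 = residuals(xp)
            J[0][j] = (g1 - f1) / h
            J[1][j] = (g2 - f2) / h
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        # Cramer's rule for J * dx = -f
        dx0 = (-f1 * J[1][1] + f2 * J[0][1]) / det
        dx1 = (-f2 * J[0][0] + f1 * J[1][0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

v, p = newton([1.0, 0.9])  # converges to v^2 = 0.9, p^2 = 1 - (0.4*0.9)^2
```

The same fixed-point structure scales to realistic multi-energy networks: one residual per nodal balance, with the coupling elements appearing as terms that tie one carrier's residual to another carrier's state variable.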

Keywords: coupling elements, energy carriers, multi-energy networks, steady-state analysis

Procedia PDF Downloads 79
1281 Research of the Load Bearing Capacity of Inserts Embedded in CFRP under Different Loading Conditions

Authors: F. Pottmeyer, M. Weispfenning, K. A. Weidenmann

Abstract:

Continuous carbon fiber reinforced plastics (CFRP) exhibit high application potential for lightweight structures due to their outstanding specific mechanical properties. Embedded metal elements, so-called inserts, can be used to join structural CFRP parts. Using inserts avoids drilling the components to be joined, so no bearing stress is anticipated; this is a distinctive benefit of embedded inserts, since continuous CFRP has low shear and bearing strength. This paper investigates the load-bearing capacity after damage pre-induced by impact tests and thermal cycling. In addition, the mechanical properties were characterized in dynamic high-speed pull-out tests at different loading velocities. It was shown that the load-bearing capacity increases by up to 100% at very high velocities (15 m/s) in comparison with quasi-static loading conditions (1.5 mm/min). Residual strength measurements identified the influence of thermal loading and pre-induced mechanical damage; for both, the residual strength was evaluated afterwards by quasi-static pull-out tests. In tests according to DIN EN 6038, a large drop in force occurs at an impact energy of 16 J, with significant damage to the laminate. Lower impact energies of 6 J, 9 J, and 12 J do not decrease the measured residual strength, although the laminate is visibly damaged, as evidenced by cracks on the rear side. To evaluate the influence of thermal loading, the specimens were placed in a climate chamber and exposed to various numbers of temperature cycles; one cycle took 1.5 hours, from -40 °C to +80 °C. It was shown that as few as 10 temperature cycles decrease the load-bearing capacity by up to 20%. No further reduction of the residual strength with an increasing number of thermal cycles was observed, implying that the maximum damage to the composite is already induced after 10 temperature cycles.

Keywords: composite, joining, inserts, dynamic loading, thermal loading, residual strength, impact

Procedia PDF Downloads 280
1280 Computation and Validation of the Stress Distribution around a Circular Hole in a Slab Undergoing Plastic Deformation

Authors: Sherif D. El Wakil, John Rice

Abstract:

The aim of the current work was to employ the finite element method to model a slab, with a small hole across its width, undergoing plastic plane strain deformation. The computational model had, however, to be validated by comparing its results with those obtained experimentally. Since they were in good agreement, the finite element method can therefore be considered a reliable tool that can help gain better understanding of the mechanism of ductile failure in structural members having stress raisers. The finite element software used was ANSYS, and the PLANE183 element was utilized. It is a higher order 2-D, 8-node or 6-node element with quadratic displacement behavior. A bilinear stress-strain relationship was used to define the material properties, with constants similar to those of the material used in the experimental study. The model was run for several tensile loads in order to observe the progression of the plastic deformation region, and the stress concentration factor was determined in each case. The experimental study involved employing the visioplasticity technique, where a circular mesh (each circle was 0.5 mm in diameter, with 0.05 mm line thickness) was initially printed on the side of an aluminum slab having a small hole across its width. Tensile loading was then applied to produce a small increment of plastic deformation. Circles in the plastic region became ellipses, where the directions of the principal strains and stresses coincided with the major and minor axes of the ellipses. Next, we were able to determine the directions of the maximum and minimum shear stresses at the center of each ellipse, and the slip-line field was then constructed. We were then able to determine the stress at any point in the plastic deformation zone, and hence the stress concentration factor. The experimental results were found to be in good agreement with the analytical ones.
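For context, the classical linear-elastic baseline for this geometry is the Kirsch solution, which predicts a stress concentration factor of 3 at the hole edge before any plastic flow occurs; the short sketch below evaluates it. This is standard elasticity theory used as a point of comparison, not the paper's elastoplastic FE model.

```python
import math

# Kirsch hoop stress around a circular hole of radius a in an infinite plate
# under remote uniaxial tension sigma (linear elasticity); theta is measured
# from the loading axis.
def kirsch_hoop_stress(sigma, a, r, theta):
    k2 = (a / r) ** 2
    k4 = k2 * k2
    return (sigma / 2.0) * (1.0 + k2) \
        - (sigma / 2.0) * (1.0 + 3.0 * k4) * math.cos(2.0 * theta)

# At the hole edge, 90 degrees from the load axis, the classical stress
# concentration factor of 3 is recovered; along the load axis the hoop
# stress is compressive (-sigma).
scf = kirsch_hoop_stress(1.0, 1.0, 1.0, math.pi / 2.0)
edge_axial = kirsch_hoop_stress(1.0, 1.0, 1.0, 0.0)
```

Plastic flow redistributes stress around the hole, which is why the measured (visioplastic) and computed elastoplastic concentration factors in the study differ from this elastic value.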

Keywords: finite element method to model a slab, slab undergoing plastic deformation, stress distribution around a circular hole, visioplasticity

Procedia PDF Downloads 319
1279 Reliability and Maintainability Optimization for Aircraft’s Repairable Components Based on Cost Modeling Approach

Authors: Adel A. Ghobbar

Abstract:

The airline industry continuously faces the challenge of safely increasing the service life of aircraft with limited maintenance budgets. Operators are looking for the most qualified maintenance providers of aircraft components, offering the finest customer service. The component owner and maintenance provider offer an Abacus agreement (Aircraft Component Leasing) to increase the efficiency and productivity of customer service. To improve customer service, the current focus on No Fault Found (NFF) units must shift to a focus on Early Failure (EF) units. Since EF units have a significant impact on customer satisfaction, their reliability needs to be increased at minimal cost, which is the goal of this paper. By analyzing the reliability of EF units relative to NFF units, in particular through root cause analysis combined with an integrated cost analysis of EF units using a failure mode analysis tool and a cost model, a set of EF maintenance improvements is derived. The data used for the investigation of the EF units were obtained from the Pentagon system, an Enterprise Resource Planning (ERP) system used by Fokker Services. The Pentagon system monitors components needing repair from Fokker aircraft owners, the Abacus exchange pool, and commercial customers. The data were selected on several criteria: time span, failure rate, and cost driver. Once the selected data had been acquired, the failure mode and root cause analysis of EF units was initiated. The failure analysis approach tool was implemented, resulting in the proposed failure solution for EF. This leads to specific EF maintenance improvements, which can be set up to decrease the number of EF units and, as a result, increase reliability. The EFs investigated over a ten-year period showed a significant reliability impact of 32% on a total of 23,339 unscheduled failures, as EFs comprise almost one-third of the entire population.
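As a toy illustration of the kind of EF screening implied above, the following sketch flags removals as early failures when the time since the last shop visit falls below a fraction of the fleet-average interval. The threshold rule, the 25% fraction, and the interval data are all invented for illustration; they are not Fokker Services' actual criteria.

```python
# Invented removal intervals (hours since previous shop visit); the actual EF
# classification rule used in the study is not given in this abstract.
def ef_share(intervals_hours, ef_fraction=0.25):
    """Flag a removal as an Early Failure (EF) when its interval is below a
    fraction of the fleet-average interval; return (EF share, threshold)."""
    mean_interval = sum(intervals_hours) / len(intervals_hours)
    threshold = ef_fraction * mean_interval
    ef = [t for t in intervals_hours if t < threshold]
    return len(ef) / len(intervals_hours), threshold

share, threshold = ef_share([120, 2400, 2600, 90, 2500, 150, 2700, 2300])
# Three of the eight removals fall well below the threshold, giving an EF
# share comparable in spirit to the one-third population reported above.
```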

Keywords: supportability, no fault found, FMEA, early failure, availability, operational reliability, predictive model

Procedia PDF Downloads 127
1278 Using ANN in Emergency Reconstruction Projects Post Disaster

Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir

Abstract:

Purpose: The purpose of this study is to avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts for project management in the construction industry that deal with a range of global and local trials. This study aims to identify the factors that cause delays in construction projects in Iraq, affecting time, cost, and quality, and to find the best solutions to address these delays by setting parameters to restore balance. Thirty projects in different areas of construction were selected as a sample for this study. Design/methodology/approach: This study discusses reconstruction strategies and the delays in time and cost caused by different delay factors in selected projects in Iraq (Baghdad as a case study). A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Participants from the case projects provided data through a data collection instrument distributed as a survey. A mixed approach and methods were applied. Mathematical data analysis was used to construct models that predict delays in project time and cost before projects start, with artificial neural networks (ANNs) selected as the mathematical approach. These models are mainly intended to help decision makers in construction projects find solutions to delays before they cause any inefficiency in the project being implemented, and to remove obstacles thoroughly so as to develop this industry in Iraq. This approach was applied using the data collected through the survey and questionnaire. Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesign of plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are in line with findings from similar studies in other countries and regions, but some are unique to the Iraqi project sample, such as security issues and low-price bid selection. Originality/value: ANN analysis was selected because ANNs have rarely been used in project management and had never been used in Iraq to find solutions to problems in the construction industry. This methodology can also be used for complicated problems that have no ready interpretation or solution. In some cases, statistical analysis was conducted, and where the problem did not follow a linear equation or the correlation was weak, we suggested using ANNs, since they handle nonlinear problems by finding the relationship between input and output data, which proved very supportive.
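A minimal version of the ANN approach described, a one-hidden-layer network trained by stochastic gradient descent, can be sketched as follows. The architecture, the two input "risk scores", and the training pairs are illustrative assumptions invented for this sketch; they are not the study's surveyed data or trained models.

```python
import math
import random

random.seed(0)

def train(data, hidden=4, lr=0.1, epochs=3000):
    """Fit a tiny one-hidden-layer tanh network by stochastic gradient descent."""
    n_in = len(data[0][0])
    w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
                 for j in range(hidden)]
            out = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = out - y                       # squared-error gradient signal
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] * h[j])  # tanh derivative
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
             for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict

# Invented pairs: (contractor-risk score, design-change score) -> delay ratio.
data = [([0.1, 0.1], 0.05), ([0.9, 0.2], 0.60),
        ([0.2, 0.9], 0.55), ([0.9, 0.9], 0.90)]
predict = train(data)
```

The nonlinear tanh layer is what lets such a model capture relationships that defeat linear regression, which is exactly the situation the authors describe (weak linear correlation between delay factors and outcomes).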

Keywords: construction projects, delay factors, emergency reconstruction, innovation ANN, post disasters, project management

Procedia PDF Downloads 165
1277 The Effect of Artificial Intelligence on Digital Factory

Authors: Sherif Fayez Lewis Ghaly

Abstract:

Factory planning has the mission of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring is becoming more important to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity and Ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuild measures and therefore an essential tool. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements; the tight time schedules require up-to-date planning models. Due to the high variation rate of factories described above, a methodology for rescheduling factories on the basis of a current digital factory twin is conceived and designed for practical application in restructuring projects. The focus is on rebuild processes. The aim is to preserve the planning basis (the digital factory model) for conversions within a factory. This requires the application of a methodology that reduces the deficits of existing techniques. The goal is to show how a digital factory model can be kept up to date during ongoing factory operation. A method based on photogrammetry technology is presented, with the focus on developing a simple and cost-effective way to track the numerous changes that occur in a factory building in the course of operation. The method is preceded by a hardware and software assessment to identify the most cost-effective and fastest variant.

Keywords: building information modeling, digital factory model, factory planning, maintenance digital factory model, photogrammetry, restructuring

Procedia PDF Downloads 29
1276 Application of Sentinel-2 Data to Evaluate the Role of Mangrove Conservation and Restoration on Aboveground Biomass

Authors: Raheleh Farzanmanesh, Christopher J. Weston

Abstract:

Mangroves are forest ecosystems located in the inter-tidal regions of tropical and subtropical coastlines that provide many valuable economic and ecological benefits for millions of people, such as preventing coastal erosion, providing breeding and feeding grounds, improving water quality, and supporting the well-being of local communities. In addition, mangroves capture and store large amounts of carbon in biomass and soils, playing an important role in combating climate change. The decline in mangrove area has prompted government and private-sector interest in mangrove conservation and restoration projects to achieve multiple Sustainable Development Goals, from reducing poverty to improving life on land. Mangrove aboveground biomass (AGB) plays an essential role in the global carbon cycle and in climate change mitigation and adaptation by reducing CO2 emissions. However, little information is available about the effectiveness of sustainable mangrove management on changes in mangrove area and AGB. Here, we propose a method for mapping, modeling, and assessing mangrove area and AGB in two Global Environment Facility (GEF) blue forests projects based on Sentinel-2 Level 1C imagery over their conservation lifetime. A support vector regression (SVR) model was used to estimate AGB in the Tahiry Honko project in Madagascar and the Abu Dhabi Blue Carbon Demonstration Project (Abu Dhabi, United Arab Emirates). The results showed that mangrove forest area and AGB declined in the Tahiry Honko project, while in the Abu Dhabi project they increased after the conservation initiative was established. The results provide important information on the impact of mangrove conservation activities and contribute to the development of remote sensing applications for mapping and assessing mangrove forests in blue carbon initiatives.
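The paper's AGB estimation uses an SVR on Sentinel-2 imagery; as a deliberately simplified, self-contained stand-in, the sketch below computes an NDVI-like index from near-infrared (band 8) and red (band 4) reflectances and fits an ordinary least-squares line to invented plot data, just to show the index-to-biomass regression idea. The reflectances and AGB values are placeholders, not the study's data, and OLS replaces the SVR purely for simplicity.

```python
# Invented NIR (band 8) and red (band 4) reflectances with plot AGB values
# (t/ha); illustration only.
def ndvi(nir, red):
    """Normalized difference vegetation index from two band reflectances."""
    return (nir - red) / (nir + red)

def ols(xs, ys):
    """Ordinary least squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

plots = [((0.45, 0.08), 140.0), ((0.35, 0.10), 95.0),
         ((0.50, 0.07), 160.0), ((0.30, 0.12), 70.0)]
xs = [ndvi(nir, red) for (nir, red), _ in plots]
ys = [agb for _, agb in plots]
intercept, slope = ols(xs, ys)   # denser canopies map to higher AGB
```

An SVR replaces the straight line with a kernel-based regressor, which better captures the saturation of optical indices in dense mangrove canopies.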

Keywords: blue carbon, mangrove forest, REDD+, aboveground biomass, Sentinel-2

Procedia PDF Downloads 73
1275 Investigating the Flow Physics within Vortex-Shockwave Interactions

Authors: Frederick Ferguson, Dehua Feng, Yang Gao

Abstract:

Current CFD tools undoubtedly have many technical limitations, and active research is being done to overcome them. Areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as direct numerical simulation (DNS). This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, known as the Kolmogorov scale, which must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks, and at this stage of its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equations, analyzing their solutions over a considerable length of time and thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem for low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in an effort to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effect of the Mach number on the intensity of vortex-shockwave interactions.
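The viscous Burgers equation mentioned as a test case can be integrated with a few lines of an explicit finite-difference scheme (upwind convection plus central diffusion). This is a standard textbook discretization used here only as a self-contained stand-in; it is not the IDS, and the grid, viscosity, and initial pulse are illustrative choices.

```python
# Explicit FTCS/upwind march of u_t + u u_x = nu u_xx on [0, 2] with a unit
# square pulse as initial data; parameters chosen inside the stability limits.
def burgers(nx=101, nt=400, nu=0.07, length=2.0):
    dx = length / (nx - 1)
    dt = 0.4 * dx * dx / nu   # safely inside the explicit diffusive limit
    u = [1.0 if 0.5 <= i * dx <= 1.0 else 0.0 for i in range(nx)]
    for _ in range(nt):
        un = u[:]
        for i in range(1, nx - 1):
            conv = un[i] * (un[i] - un[i - 1]) / dx               # upwind, u >= 0
            diff = nu * (un[i + 1] - 2.0 * un[i] + un[i - 1]) / (dx * dx)
            u[i] = un[i] + dt * (diff - conv)
        u[0] = u[-1] = 0.0   # fixed (zero) boundary values
    return u

u = burgers()  # the pulse advects rightward while diffusion smooths its peak
```

Watching the leading edge steepen under convection while viscosity limits the gradient is exactly the kind of long-time unsteady behavior the IDS demonstrations target.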

Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme

Procedia PDF Downloads 137
1274 Development of PPy-M Composites Materials for Sensor Application

Authors: Yatimah Alias, Tilagam Marimuthu, M. R. Mahmoudian, Sharifah Mohamad

Abstract:

The rapid growth of science and technology in energy and environmental fields has enlightened the substantial importance of the conducting polymer and metal composite materials engineered at nano-scale. In this study, polypyrrole-cobalt composites (PPy-Co Cs) and polypyrrole-nickel oxide composites (PPy-NiO Cs) were prepared by a simple and facile chemical polymerization method with an aqueous solution of pyrrole monomer in the presence of metal salt. These composites then fabricated into non-enzymatic hydrogen peroxide (H2O2) and glucose sensor. The morphology and composition of the composites are characterized by the Field Emission Scanning Electron Microscope, Fourier Transform Infrared Spectrum and X-ray Powder Diffraction. The obtained results were compared with the pure PPy and metal oxide particles. The structural and morphology properties of synthesized composites are different from those of pure PPy and metal oxide particles, which were attributed to the strong interaction between the PPy and the metal particles. Besides, a favorable micro-environment for the electrochemical oxidation of H2O2 and glucose was achieved on the modified glassy carbon electrode (GCE) coated with PPy-Co Cs and PPy-NiO Cs respectively, resulting in an enhanced amperometric response. Both PPy-Co/GCE and PPy-NiO/GCE give high response towards target analyte at optimum condition of 500 μl pyrrole monomer content. Furthermore, the presence of pyrrole monomer greatly increases the sensitivity of the respective modified electrode. The PPy-Co/GCE could detect H2O2 in a linear range of 20 μM to 80 mM with two linear segments (low and high concentration of H2O2) and the detection limit for both ranges is 2.05 μM and 19.64 μM, respectively. Besides, PPy-NiO/GCE exhibited good electrocatalytic behavior towards glucose oxidation in alkaline medium and could detect glucose in linear ranges of 0.01 mM to 0.50 mM and 1 mM to 20 mM with detection limit of 0.33 and 5.77 μM, respectively. 
The ease of modification and the long-term stability of these sensors make them superior to enzymatic sensors, which must be kept in a carefully controlled environment.

Keywords: metal oxide, composite, non-enzymatic sensor, polypyrrole

Procedia PDF Downloads 266
1273 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurements. The results showed good agreement between the framework’s predictions and real-life observations, with an overall model accuracy of 92%. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
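The central ensembling step, averaging the outputs of the top-performing models so that no single model's bias dominates, can be sketched as follows. The per-day probabilities and the 0.5 decision threshold are placeholders for illustration, not the study's fitted models or tuned cutoff.

```python
# Placeholder per-day probabilities of an "unhealthy air quality" class from
# three hypothetical models; names and numbers are illustrative only.
def ensemble_predict(prob_lists):
    """Average the predicted probabilities of several models, day by day."""
    n_days = len(prob_lists[0])
    return [sum(p[i] for p in prob_lists) / len(prob_lists)
            for i in range(n_days)]

logreg = [0.80, 0.30, 0.55]   # logistic regression
forest = [0.70, 0.20, 0.65]   # random forest
net    = [0.90, 0.10, 0.60]   # neural network

avg = ensemble_predict([logreg, forest, net])
labels = [p >= 0.5 for p in avg]   # simple 0.5 decision threshold
```

Averaging calibrated probabilities (rather than majority-voting hard labels) preserves each model's confidence, which is what lets the combined model outperform its best individual member.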

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 127
1272 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Spatial diversity is one efficient signaling scheme in that it improves network throughput; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, where virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate in an asynchronous way, and thus the overall performance of the GNSS network can degrade severely. To tackle this problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement because they require signal decoding at the relay nodes. Although the implementation at the relay nodes can be simplified to some degree by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to implement the operations of the relay nodes at the source node, which has more resources than the relay nodes. So, in this paper, we propose a novel cooperative signaling scheme, where the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time-reversal, and conjugation at the relay nodes.
The numerical results confirm that the proposed scheme provides the same performance in the cooperative diversity and the bit error rate (BER) as the conventional scheme, while reducing the complexity at the relay nodes significantly. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.
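For background, cooperative virtual-MIMO schemes of this kind build on classical space-time block coding; the sketch below shows the two-branch Alamouti encode-and-combine step (noise omitted for clarity). This is standard textbook theory, not the specific combining rule proposed in this paper, and the channel and symbol values are invented.

```python
# Classical two-branch Alamouti code: two symbols are sent over two slots and
# recovered by linear combining; channels h1, h2 are assumed constant over
# both slots, and receiver noise is omitted to show the algebra cleanly.
def alamouti_combine(s1, s2, h1, h2):
    r1 = h1 * s1 + h2 * s2                                  # slot 1
    r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()         # slot 2
    g = abs(h1) ** 2 + abs(h2) ** 2                         # diversity gain
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1_hat, s2_hat

# Two QPSK-like symbols pass through unchanged (up to rounding):
s1_hat, s2_hat = alamouti_combine(1 + 1j, -1 + 1j, 0.8 - 0.3j, 0.2 + 0.9j)
```

The combining step is where asynchronous relays cause trouble: if the two "slots" are misaligned in time, the cross terms no longer cancel, which is the degradation the proposed source-side combining is designed to avoid.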

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes

Procedia PDF Downloads 280
1271 Development and Validation of a Turbidimetric Bioassay to Determine the Potency of Ertapenem Sodium

Authors: Tahisa M. Pedroso, Hérida R. N. Salgado

Abstract:

The microbiological turbidimetric assay allows the potency of the drug to be determined by measuring the turbidity (absorbance) caused by the inhibition of microorganisms by ertapenem sodium. Ertapenem sodium (ERTM), a synthetic antimicrobial agent of the carbapenem class, is active against Gram-negative, Gram-positive, aerobic, and anaerobic microorganisms. Turbidimetric assays are described in the literature for some antibiotics, but no such method is described for ertapenem. The objective of the present study was to develop and validate a simple, sensitive, precise, and accurate microbiological assay by turbidimetry to quantify injectable ertapenem sodium as an alternative to the physicochemical methods described in the literature. Several preliminary tests were performed to choose the following parameters: Staphylococcus aureus ATCC 25923, IAL 1851; 8% inoculum; BHI culture medium; and an aqueous solution of ertapenem sodium. 10.0 mL of sterile BHI culture medium was distributed into 20 tubes. 0.2 mL of the standard and test solutions was added to tubes S1, S2, and S3 and T1, T2, and T3, respectively, and 0.8 mL of inoculated culture medium was transferred to each tube, following the 3 × 3 parallel-line design. The tubes were incubated in a Marconi MA 420 shaker at a temperature of 35.0 °C ± 2.0 °C for 4 hours. After this period, the growth of microorganisms was inhibited by the addition of 0.5 mL of 12% formaldehyde solution to each tube. The absorbance was determined with a Quimis Q-798DRM spectrophotometer at a wavelength of 530 nm. An analytical curve was constructed to obtain the equation of the line by the least-squares method, and linearity and parallelism were verified by ANOVA. The specificity of the method was proven by comparing the responses obtained for the standard and the finished product. The precision was checked by determining ertapenem sodium on three days. The accuracy was determined by a recovery test.
The robustness was determined by comparing the results obtained when varying the wavelength, the brand of culture medium, and the volume of culture medium in the tubes. Statistical analysis showed no deviation from linearity in the analytical curves of the standard and test samples. The correlation coefficients were 0.9996 and 0.9998 for the standard and test samples, respectively. The specificity was confirmed by comparing the absorbance of the reference substance and the test samples. The values obtained for intraday, interday, and between-analyst precision were 1.25%, 0.26%, and 0.15%, respectively. The amount of ertapenem sodium present in the samples analyzed, 99.87%, is consistent. The accuracy was proven by the recovery test, with a value of 98.20%. The varied parameters did not affect the analysis of ertapenem sodium, confirming the robustness of the method. The turbidimetric assay is more versatile, faster, and easier to apply than the agar diffusion assay. The method is simple, rapid, and accurate and can be used in the routine quality control of formulations containing ertapenem sodium.
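
The analytical-curve step described above (a least-squares line through absorbance readings, with the correlation coefficient checking linearity) can be sketched as follows; the concentrations and absorbance values below are invented for illustration and are not the study's data.

```python
# Illustrative least-squares fit for a turbidimetric analytical curve:
# mean absorbance at 530 nm versus dose level, with correlation
# coefficient r as a linearity check. All numbers are hypothetical.

def least_squares_line(x, y):
    """Return slope, intercept, and correlation coefficient r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Three hypothetical dose levels (S1, S2, S3) and their mean absorbances.
conc = [2.0, 4.0, 8.0]
absorb = [0.210, 0.405, 0.810]

slope, intercept, r = least_squares_line(conc, absorb)
# Interpolate the concentration of an unknown sample from its absorbance.
unknown_conc = (0.600 - intercept) / slope
```

An unknown sample's potency is then read off the fitted line by inverting the linear equation, as in the last step of the sketch.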

Keywords: ertapenem sodium, turbidimetric assay, quality control, validation

Procedia PDF Downloads 393
1270 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator

Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty

Abstract:

Verification and validation of simulated process models is the most important phase of the simulator life cycle. Evaluating simulated process models with verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady-state and transient conditions. The verification and validation process helps to qualify the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparing simulated component characteristics with the original requirements to ensure that each step of the model development process completely incorporates all design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or in integrated mode. A full-scope replica operator training simulator for the Prototype Fast Breeder Reactor (PFBR), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India, with the main participants being engineers/experts from the modeling team and from the process design and instrumentation and control design teams. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, and the reference documents and standards used. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on the experts' comments, final qualification of the simulator for its intended purpose, and the difficulties faced while coordinating the various activities.
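
The validation-testing step described above (comparing simulated process parameters against plant reference values) can be sketched as a simple tolerance check; the parameter names, values, and 2% acceptance band below are hypothetical and are not taken from the KALBR-SIM documentation.

```python
# Hypothetical steady-state validation check: each simulated process
# parameter is compared with its plant reference value and flagged if
# the relative deviation exceeds an acceptance band (here 2%).

reference = {"core_outlet_temp_C": 547.0,
             "primary_flow_kg_s": 6800.0,
             "steam_pressure_MPa": 16.7}
simulated = {"core_outlet_temp_C": 545.2,
             "primary_flow_kg_s": 6835.0,
             "steam_pressure_MPa": 16.9}

def validate(sim, ref, tolerance=0.02):
    """Return {parameter: (relative_deviation, within_tolerance)}."""
    report = {}
    for name, ref_value in ref.items():
        deviation = abs(sim[name] - ref_value) / abs(ref_value)
        report[name] = (deviation, deviation <= tolerance)
    return report

report = validate(simulated, reference)
all_ok = all(ok for _, ok in report.values())
```

In practice the same comparison would be repeated over transient time histories, not just a single steady-state snapshot.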

Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state

Procedia PDF Downloads 266
1269 Gluability of Bambusa balcooa and Bambusa vulgaris for Development of Laminated Panels

Authors: Daisy Biswas, Samar Kanti Bose, M. Mozaffar Hossain

Abstract:

The development of value-added composite products from bamboo with the application of gluing technology can play a vital role in economic development and in forest resource conservation of any country. In this study, the gluability of Bambusa balcooa and Bambusa vulgaris, two locally grown bamboo species of Bangladesh, was assessed. As the culm wall thickness of bamboo decreases from bottom to top, culm portions of up to 5.4 m and 3.6 m from the base of B. balcooa and B. vulgaris, respectively, were used to obtain rectangular strips of uniform thickness. The color of the B. vulgaris strips was yellowish brown and that of B. balcooa was reddish brown. The strips were treated with borax-boric acid, bleaching, and carbonization to extend the service life of the laminates. The preservative treatments changed the color of the strips: borax-boric acid treated strips were reddish brown; when bleached with hydrogen peroxide, the strips turned whitish yellow; and carbonization produced dark brownish strips with a coffee aroma. The chemical constituents of the untreated and treated strips were determined; B. vulgaris was more acidic than B. balcooa. The treated strips were then used to develop three-layered laminated bamboo panels, with urea formaldehyde (UF) and polyvinyl acetate (PVA) used as binders. The shear strength and abrasive resistance of the panels were evaluated. The shear strength of the UF panels was higher than that of the PVA panels for all treatments. Between the species, the gluability of B. vulgaris was better, and in some cases better than that of hardwood species. The abrasive resistance of B. balcooa was slightly higher than that of B. vulgaris; however, the latter was preferred as it showed better gluability. The panels could be used as structural panels, floor tiles, flat-pack furniture components, wall panels, etc. However, further research on the durability and creep behavior of the product under service conditions is warranted.

Keywords: Bambusa balcooa, Bambusa vulgaris, polyvinyl acetate, urea formaldehyde

Procedia PDF Downloads 262
1268 Critique of the City-Machine: Dismantling the Scientific Socialist Utopia of Soviet Territorialization

Authors: Rachel P. Vasconcellos

Abstract:

Russian constructivism is usually enshrined in history as another ''modernist ism'', that is, as an artistic phenomenon tied to the zeitgeist of the early twentieth century. What we aim at in this essay is to analyze the constructivist movement not within the field of art history, nor through the aesthetic debate, but through a critical geographical theory, taking the central idea of construction in the concrete sense of the production of space. Seen from the perspective of the critique of space, constructivist production presents itself as a plan of totality, designed as the spatiality of socialist society and contemplating and articulating all of its scalar levels: the objects of everyday life, the building, the city, and the territory. The constructivist avant-garde manifests a geographical ideology, laying the foundations of modern planning ideology. Taken in its political sense, the artistic avant-garde of the Russian Revolution intended to anticipate the forms of a social future already in progress: its plastic research pointed to new formal expressions for revolutionary contents. With the foundation of new institutions under a new state, the specialized labor of artists, architects, and planners was given the task of designing socialist society on the basis of the theses of scientific socialism. Their projects were developed under the politico-economic imperatives of Soviet modernization, that is, the structural needs of industrialization and the inclusion of all people in the universe of productive work. This context shaped the creative atmosphere of the constructivist avant-garde, which used the methods of engineering to transform everyday life. Architecture, urban planning, and state planning, integrated, were then to operate as a spatial arrangement morphologically able to produce socialist life.
But due to the intrinsic contradictions of the process, the rational and geometric aesthetic of the City-Machine appears, finally, as an image of a scientific socialist utopia.

Keywords: city-machine, critique of space, production of space, Soviet territorialization

Procedia PDF Downloads 277
1267 Assessing Future Offshore Wind Farms in the Gulf of Roses: Insights from Weather Research and Forecasting Model Version 4.2

Authors: Kurias George, Ildefonso Cuesta Romeo, Clara Salueña Pérez, Jordi Sole Olle

Abstract:

With the growing prevalence of wind energy, there is a need for modeling techniques to evaluate the impact of wind farms on meteorology and oceanography. This study presents an approach that uses WRF (Weather Research and Forecasting) version 4.2 with a Wind Farm Parametrization to simulate the dynamics around the Parc Tramuntana project, an offshore wind farm to be located near the Gulf of Roses, off the coast of Barcelona, Catalonia. The model incorporates parameterizations for the wind turbines, enabling a representation of the wind field and how it interacts with the infrastructure of the wind farm. Current results demonstrate that the model effectively captures variations in temperature, pressure, and wind speed and direction over time, along with their resulting effects on the power output of the wind farm. These findings are crucial for optimizing turbine placement and operation, thus improving the efficiency and sustainability of the wind farm. In addition to atmospheric interactions, this study examines the wake effects between the turbines in the farm. A range of meteorological parameters was also considered to offer a comprehensive understanding of the farm's microclimate. The model was tested under different horizontal resolutions and farm layouts to scrutinize the wind farm's effects more closely. These experimental configurations allow a nuanced understanding of how turbine wakes interact with each other and with the broader atmospheric and oceanic conditions. This approach serves as a potent tool for stakeholders in renewable energy, environmental protection, and marine spatial planning, providing information on the environmental and socio-economic impacts of offshore wind energy projects.
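
The turbine wake effects discussed above can be illustrated, in a much simpler analytical form than the wind farm parametrization used inside WRF, by the classical Jensen top-hat wake model; the turbine figures below (rotor diameter, thrust coefficient, wake expansion) are purely illustrative assumptions.

```python
# Jensen (top-hat) wake model, given only as a back-of-the-envelope
# stand-in for the far more complete wind farm parametrization in WRF.
# All turbine parameters are hypothetical.

def wake_speed(u_inf, ct, rotor_d, x, k=0.05):
    """Wind speed a distance x downstream of a single turbine.

    u_inf   free-stream wind speed (m/s)
    ct      turbine thrust coefficient
    rotor_d rotor diameter (m)
    x       downstream distance (m)
    k       wake expansion coefficient (~0.05 offshore)
    """
    r0 = rotor_d / 2.0
    deficit = (1.0 - (1.0 - ct) ** 0.5) / (1.0 + k * x / r0) ** 2
    return u_inf * (1.0 - deficit)

# Wind speed recovery behind a hypothetical large offshore turbine
# (10 m/s free stream, Ct = 0.8, 164 m rotor) at three distances.
speeds = [wake_speed(10.0, 0.8, 164.0, x) for x in (500.0, 1000.0, 2000.0)]
```

The monotonic recovery of the wake with downstream distance is what makes turbine spacing and layout an optimization problem of the kind the abstract describes.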

Keywords: weather research and forecasting, wind turbine wake effects, environmental impact, wind farm parametrization, sustainability analysis

Procedia PDF Downloads 72
1266 Modeling of Cf-252 and PuBe Neutron Sources by Monte Carlo Method in Order to Develop Innovative BNCT Therapy

Authors: Marta Błażkiewicz, Adam Konefał

Abstract:

Currently, boron neutron capture therapy (BNCT) is carried out mainly with neutron beams generated in research nuclear reactors. This fact limits the possibility of performing BNCT in centers distant from such reactors. Moreover, the number of nuclear reactors in operation worldwide is decreasing due to their limited operational lifetime and the lack of new installations, so the possibilities of carrying out boron-neutron therapy based on a neutron beam from an experimental reactor are shrinking. The use of nuclear power reactors for BNCT is impossible because their infrastructure is not intended for radiotherapy. A serious challenge, therefore, is to find ways to perform boron-neutron therapy based on neutrons generated outside a research nuclear reactor. This work meets that challenge. Its goal is to develop a BNCT technique based on commonly available neutron sources such as Cf-252 and PuBe, which would enable the above-mentioned therapy in medical centers unrelated to nuclear research reactors. Advances in the fabrication of neutron sources make it possible to achieve strong neutron fluxes. The current stage of research focuses on the development of virtual models of the above-mentioned sources using the Monte Carlo simulation method. In this study, the GEANT4 toolkit was used, including the High Precision Neutron model for simulating neutron-matter interactions. The source models were verified experimentally with the activation detector method, using indium foils and the cadmium difference method, which separates the contributions of thermal and resonance neutrons to the indium activation. Due to the large number of factors affecting the result of the verification experiment, a 10% discrepancy between the simulation and experimental results was accepted.
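
The cadmium difference step used in the verification above can be sketched as follows: a bare indium foil is activated by both thermal and resonance neutrons, while a cadmium-covered foil responds (to a good approximation) to resonance neutrons only, so the thermal contribution is the difference. The activity values below are invented, in arbitrary units, purely for illustration.

```python
# Sketch of the cadmium difference method for separating the thermal and
# resonance contributions to indium foil activation. Hypothetical data.

def cadmium_difference(a_bare, a_cd, f_cd=1.0):
    """Split a bare-foil activity into thermal and resonance parts.

    a_bare  saturation activity of the bare indium foil
    a_cd    saturation activity of the cadmium-covered foil
    f_cd    cadmium correction factor (~1 for an idealized filter)
    """
    a_resonance = f_cd * a_cd
    a_thermal = a_bare - a_resonance
    cadmium_ratio = a_bare / a_cd
    return a_thermal, a_resonance, cadmium_ratio

a_th, a_res, r_cd = cadmium_difference(a_bare=1500.0, a_cd=400.0)
```

The cadmium ratio computed alongside is a standard consistency check on how well thermalized the field is at the foil position.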

Keywords: BNCT, virtual models, neutron sources, Monte Carlo, GEANT4, neutron activation detectors, gamma spectroscopy

Procedia PDF Downloads 186