Search results for: intelligent methods
13694 Geopolymerization Methods for Clay Soils Treatment
Authors: Baba Hassane Ahmed Hisseini, Abdelkrim Bennabi, Rabah Hamzaoui, Lamis Makki, Gaetan Blanck
Abstract:
Most clay soils are known as problematic soils because their water content varies greatly over time. They are subject to shrinkage and swelling, causing stability problems for civil engineering structures. Such soils are often excavated and placed in a storage area, giving rise to the opening of new quarries. This practice has become obsolete: to protect the environment, we are led to think differently, opening the way to new research on improving the performance of this type of clay soil so that it can be reused in construction. The solidification and stabilization technique is used to improve the properties of poor-quality soils and transform them into materials with suitable performance for a new use in civil engineering, rather than excavating them and storing them in a discharge area. In our case, the geopolymerization method is used for poor clay soils classified as high-plasticity soils of class A4 according to the French standard NF P11-300, for which classical treatment methods with cement or lime are not efficient. Our work concerns a clay soil treatment study using raw materials as additives for solidification and stabilization. The geopolymers are synthesized from aluminosilicate materials such as fly ash, metakaolin, or blast furnace slag and activated by an alkaline solution based on sodium hydroxide (NaOH), sodium silicate (Na2SiO3), or a mixture of both. In this study, we present the evolution of the mechanical properties of the clay soil (type A4) under geopolymerization treatment. Various mix designs of aluminosilicate materials and alkaline solutions were tested at different percentages and at curing times of 1, 7, and 28 days. The compressive strength of the untreated clayey soil could be as much as tripled. The improvement in compressive strength is associated with a geopolymerization mechanism.
The highest compressive strength was found with metakaolin at 28 days.
Keywords: treatment and valorization of clay-soil, solidification and stabilization, alkali-activation of co-product, geopolymerization
Procedia PDF Downloads 161
13693 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section
Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert
Abstract:
Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets, since characterization measurements can be expensive and time-consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell's equations. These methods are very accurate but computationally intensive and time-consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate, but require less computational resources and time. Asymptotic techniques can therefore be very valuable for predicting the bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques for bistatic RCS prediction through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex, realistic aircraft models, rather than only a complex 'slicy' structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors, and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz, using fixed bistatic angles of β = 30.8°, 45°, and 90°. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results, together with the measured data, were used as the reference for validation.
The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences in amplitude, lobing structure, and null positions were observed between the asymptotic, full-wave, and measured data. PO and GO were more accurate at angles close to the specular scattering directions, and the accuracy decreased as the bistatic angle increased. At large bistatic angles, PO did not perform well because the shadow regions are not treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction, but these inaccuracies tended to decrease as the electrical size of the objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even when targets did not meet the electrically large criterion. It was evident that the bistatic RCS prediction performance of PO and GO depends on incident angle, frequency, target shape, and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement in the accuracy of these asymptotic techniques.
Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics
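The specular behaviour described above can be illustrated with the classical PO result for a flat, perfectly conducting rectangular plate, whose peak (specular) RCS is 4πa²b²/λ². A minimal sketch, assuming a hypothetical 0.5 m × 0.5 m plate at 4 GHz (mid-band of the 2-6 GHz measurements); this is the textbook PO formula only, not the FEKO models used in the study:

```python
import numpy as np

def po_plate_rcs(a, b, wavelength, theta):
    """Monostatic PO RCS (m^2) of a perfectly conducting rectangular plate
    (a x b), incidence angle theta (rad) in the plane containing side a."""
    k = 2 * np.pi / wavelength
    x = k * a * np.sin(theta)
    # sinc term; handle the theta = 0 (specular) limit explicitly
    sinc = np.where(np.abs(x) < 1e-12, 1.0, np.sin(x) / x)
    return 4 * np.pi * (a * b / wavelength) ** 2 * np.cos(theta) ** 2 * sinc ** 2

# Hypothetical example: 0.5 m x 0.5 m plate at 4 GHz
lam = 3e8 / 4e9
peak = po_plate_rcs(0.5, 0.5, lam, 0.0)            # specular direction
off = po_plate_rcs(0.5, 0.5, lam, np.deg2rad(30))  # away from specular
```

Consistent with the observations above, the PO prediction is largest in the specular direction and falls off away from it.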
Procedia PDF Downloads 258
13692 Technology Roadmapping in Defense Industry
Authors: Sevgi Özlem Bulu, Arif Furkan Mendi, Tolga Erol, İzzet Gökhan Özbilgin
Abstract:
The rapid progress of technology in today's competitive conditions has accelerated companies' technology development activities. As a result, companies are paying more attention to R&D studies and are beginning to allocate a larger share to R&D projects. A more systematic, comprehensive, target-oriented implementation of R&D studies is crucial for a company to achieve successful results. As a consequence, the Technology Roadmap (TRM) is gaining importance as a management tool. It has critical prospects for achieving medium- and long-term success, as it contains decisions about past business, future plans, and technological infrastructure. A review of TRM studies shows that projects to be placed on the roadmap are selected by many different methods, generally based on multi-criteria decision-making. After the selection phase, management of the selected projects becomes important; at this stage, TRMs are used. A TRM can be created in many different ways, so that each institution can prepare its own Technology Roadmap according to its strategic plan. Depending on the intended use, there can be TRMs with different layers and sizes. In the evaluation phase of R&D projects and in the creation of the TRM, HAVELSAN, Turkey's largest defense company in the software field, carries out this process with great care and diligence. Initially, proposed R&D projects are evaluated by the Technology Management Board (TMB) of HAVELSAN in accordance with the company's resources, objectives, and targets. These projects are presented to the TMB periodically for evaluation within the framework of certain criteria by board members. After the necessary steps have been passed, the approved projects are added to the time-based TRM, which is composed of four layers: market, product, project, and technology. The use of a four-layered roadmap provides a clearer understanding and visualization of company strategy and objectives.
This study demonstrates the benefits of using TRM and four-layered Technology Roadmapping, and the possibilities for institutions in the defense industry.
Keywords: technology roadmap, research and development project, project selection, research development in defense industry
Procedia PDF Downloads 179
13691 Evaluation of Settlement of Coastal Embankments Using Finite Elements Method
Authors: Sina Fadaie, Seyed Abolhassan Naeini
Abstract:
Coastal embankments play an important role in coastal structures by reducing the effect of wave forces and controlling the movement of sediments. Many coastal areas are underlain by weak and compressible soils. Estimating the settlement of coastal embankments during construction is highly important for the design and safety control of embankments and appurtenant structures. Accordingly, selecting and establishing an appropriate model with a reasonable level of complexity is one of the challenges for engineers. Although there are advanced models in the literature regarding the design of embankments, there is not enough information on the prediction of their associated settlement, particularly in coastal areas with considerable soft soils. Marine engineering studies are important in Iran due to the existence of two important coastal areas located in the northern and southern parts of the country. In the present study, the validity of Terzaghi's consolidation theory has been investigated. In addition, the settlement of these coastal embankments during construction is predicted in the PLAXIS software with appropriate boundary conditions and soil layers. The results indicate that, for the existing soil conditions at the site, some parameters are important to consider in the analysis. Consequently, a model is introduced to estimate the settlement of embankments in such geotechnical conditions.
Keywords: consolidation, settlement, coastal embankments, numerical methods, finite elements method
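As a reference point for such analyses, Terzaghi's one-dimensional theory gives the average degree of consolidation as a series in the dimensionless time factor Tv. A minimal sketch of the standard textbook series, not the PLAXIS model used in the study (the soil parameters in the helper are placeholders the caller must supply):

```python
import math

def degree_of_consolidation(Tv, n_terms=50):
    """Average degree of consolidation U for one-dimensional Terzaghi
    consolidation at time factor Tv: U = 1 - sum 2/M^2 exp(-M^2 Tv)."""
    U = 1.0
    for m in range(n_terms):
        M = math.pi * (2 * m + 1) / 2
        U -= (2 / M**2) * math.exp(-(M**2) * Tv)
    return U

def settlement_at_time(s_final, cv, t, H_dr):
    """Settlement at time t from the final consolidation settlement s_final,
    coefficient of consolidation cv, and drainage path length H_dr."""
    Tv = cv * t / H_dr**2
    return degree_of_consolidation(Tv) * s_final

# Classic check: about half the final settlement occurs by Tv ~ 0.197
U_half = degree_of_consolidation(0.197)
```

A PLAXIS-type analysis replaces this closed-form series with a coupled numerical consolidation model, but the series remains a useful sanity check on the numerical results.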
Procedia PDF Downloads 157
13690 DNpro: A Deep Learning Network Approach to Predicting Protein Stability Changes Induced by Single-Site Mutations
Authors: Xiao Zhou, Jianlin Cheng
Abstract:
A single amino acid mutation can have a significant impact on the stability of protein structure. Thus, the prediction of protein stability change induced by single-site mutations is critical and useful for studying protein function and structure. Here, we present a deep learning network with the dropout technique for predicting protein stability changes upon single amino acid substitution. While using only the protein sequence as input, the overall prediction accuracy of the method on a standard benchmark is >85%, which is higher than existing sequence-based methods and comparable to methods that use not only protein sequence but also tertiary structure, pH, and temperature. The results demonstrate that deep learning is a promising technique for protein stability prediction. The good performance of this sequence-based method makes it a valuable tool for predicting the impact of mutations on most proteins whose experimental structures are not available. Both a downloadable software package and a user-friendly web server (DNpro) that implement the method for predicting protein stability changes induced by amino acid mutations are freely available for the community to use.
Keywords: bioinformatics, deep learning, protein stability prediction, biological data mining
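The dropout technique mentioned above can be sketched in a few lines. This is a generic inverted-dropout forward pass in NumPy with made-up layer sizes, not the DNpro architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, W, b):
    """A ReLU-activated fully connected layer."""
    return np.maximum(x @ W + b, 0.0)

def dropout(x, rate, training):
    """Inverted dropout: randomly zero units during training and scale the
    kept units by 1/(1-rate), so no rescaling is needed at inference time."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

# Toy forward pass over 20-dimensional sequence-derived features
# (hypothetical sizes, illustration only)
x = rng.standard_normal((4, 20))
W1, b1 = rng.standard_normal((20, 32)) * 0.1, np.zeros(32)
h_train = dropout(dense(x, W1, b1), rate=0.3, training=True)
h_infer = dropout(dense(x, W1, b1), rate=0.3, training=False)
```

At inference the layer output passes through unchanged, which is exactly why the inverted formulation is the common choice in practice.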
Procedia PDF Downloads 468
13689 Green Extraction Processes for the Recovery of Polyphenols from Solid Wastes of Olive Oil Industry
Authors: Theodora-Venetia Missirli, Konstantina Kyriakopoulou, Magdalini Krokida
Abstract:
Olive mill solid waste is an olive oil industry by-product with high phenolic, lipid, and organic acid concentrations that can be used as a low-cost source of natural antioxidants. In this study, extracts of Olea europaea (olive tree) solid olive mill waste (SOMW) were evaluated in terms of their antiradical activity and their concentrations of total phenolic compounds, such as oleuropein and hydroxytyrosol. SOMW samples were subjected to drying prior to extraction as a pretreatment step. Two drying processes, accelerated solar drying (ASD) and air-drying (AD) (at 35, 50, and 70 °C with a constant air velocity of 1 m/s), were applied. Subsequently, three different extraction methods were employed to recover extracts from untreated and dried SOMW samples: the green Microwave Assisted (MAE) and Ultrasound Assisted Extraction (UAE) methods and conventional Soxhlet extraction (SE), using water and methanol as solvents. The efficiency and selectivity of the processes were evaluated in terms of extraction yield. The antioxidant activity (AAR) and the total phenolic content (TPC) of the extracts were evaluated using the DPPH assay and the Folin-Ciocalteu method, respectively. The results showed that the bioactive content was significantly affected by the extraction technique and the solvent. Specifically, untreated SOMW samples showed higher extraction yields for all solvents, and higher antioxidant potential and phenolic content in the case of water. The UAE method gave greater extraction yields than the MAE method for both untreated and dried samples, regardless of the solvent used. The use of ultrasound and microwave assisted extraction in combination with industrially applied drying methods, such as air and solar drying, was feasible and effective for the recovery of bioactive compounds.
Keywords: antioxidant potential, drying treatment, olive mill pomace, microwave assisted extraction, ultrasound assisted extraction
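Both assays reduce to simple arithmetic on spectrophotometer readings. A minimal sketch with hypothetical absorbance values and a hypothetical Folin-Ciocalteu calibration line (the study's actual calibration data are not reproduced here):

```python
def dpph_inhibition(a_control, a_sample):
    """Radical scavenging activity (%) from DPPH absorbances: the relative
    drop in absorbance of the DPPH radical caused by the extract."""
    return (a_control - a_sample) / a_control * 100.0

def gallic_acid_equivalents(absorbance, slope, intercept):
    """Total phenolic content from a Folin-Ciocalteu calibration line
    (absorbance = slope * concentration + intercept); units follow the curve."""
    return (absorbance - intercept) / slope

# Hypothetical readings: control 0.80, extract-treated 0.32
inhibition = dpph_inhibition(0.80, 0.32)  # 60% scavenging
# Hypothetical calibration: slope 0.01 per (mg GAE/L), intercept 0.05
tpc = gallic_acid_equivalents(0.50, 0.01, 0.05)
```

In practice both values would be computed per extract and per solvent, which is how the comparisons reported above are made.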
Procedia PDF Downloads 304
13688 Forecasting Future Society to Explore Promising Security Technologies
Authors: Jeonghwan Jeon, Mintak Han, Youngjun Kim
Abstract:
Due to the rapid development of information and communication technology (ICT), a substantial transformation is currently happening in society. As the range of intelligent technologies and services continuously expands, 'things' are becoming capable of communicating with one another and even with people. However, such an 'Internet of Things' has a technical weakness: a great amount of the information transferred in real time may be widely exposed to security threats. Users' personal data are a typical example facing a serious security threat. Security threats will diversify and arise more frequently as new, unfamiliar technologies develop. Moreover, as society becomes increasingly complex, security vulnerability will increase as well. In the existing literature, a considerable number of private and public reports forecasting the future society have been published as a precedent step for the selection of future technology and the establishment of strategies for competitiveness. Although there are previous studies that forecast security technology, they have focused only on technical issues and overlooked the interrelationships between security technology and social factors. Therefore, investigations of future security threats, and of the security technology able to protect people from various threats, are required. In response, this study aims to derive potential security threats associated with the development of technology and to explore the security technology that can protect against them. To do this, first of all, private and public reports that forecast the future, and online documents from technology-related communities, are collected. By analyzing the data, future issues are extracted and categorized in terms of STEEP (Society, Technology, Economy, Environment, and Politics), as well as security.
Second, the components of potential security threats are developed based on the classified future issues. Then, points where the security threats may occur – for example, a mobile payment system based on finger-scan technology – are identified. Lastly, alternatives that prevent the potential security threats are proposed by matching security threats with these points and investigating related security technologies from patent data. The proposed approach can identify ICT-related latent security menaces and provide guidelines in a 'problem-alternative' form by linking threat points with security technologies.
Keywords: future society, information and communication technology, security technology, technology forecasting
Procedia PDF Downloads 468
13687 Time Organization for Decongesting Urban Mobility: New Methodology Identifying People's Behavior
Authors: Yassamina Berkane, Leila Kloul, Yoann Demoli
Abstract:
Quality of life, environmental impact, and congestion of mobility means and infrastructures remain significant challenges for urban mobility. Solutions like car sharing, spatial redesign, eCommerce, and autonomous vehicles are likely to increase vehicle-kilometres travelled and the density of cars in urban traffic rather than reduce congestion; however, the impact of such solutions is not clear to researchers. Congestion arises from growing populations that must travel greater distances to arrive at similar locations (e.g., workplaces, schools) during the same time frame (e.g., rush hours). This paper first reviews recent research and application cases of urban decongestion methods. Rethinking the question of time, it then investigates people's willingness and flexibility to adapt their arrival and departure times at workplaces. We use neural networks and supervised learning methods in a new methodology for predicting people's intentions from their responses to a questionnaire. We created and distributed a questionnaire to more than 50 companies in the Paris suburbs. The results show that our methodology can predict people's intentions to reschedule their activities (work, study, commerce, etc.).
Keywords: urban mobility, decongestion, machine learning, neural network
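The prediction step can be sketched end-to-end on synthetic questionnaire data. The features, the labelling rule, and the simple logistic model below are all illustrative stand-ins for the study's real questionnaire and neural network:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical questionnaire features: commute time (h), self-reported
# schedule flexibility (0-1), number of scheduling constraints (0-3)
n = 400
X = np.column_stack([
    rng.uniform(0.2, 2.0, n),
    rng.uniform(0.0, 1.0, n),
    rng.integers(0, 4, n).astype(float),
])
# Toy ground truth: flexible respondents with few constraints are willing
# to shift their arrival time (label 1), with a little response noise
y = (X[:, 1] - 0.25 * X[:, 2] + 0.05 * rng.standard_normal(n) > 0.3).astype(float)

# Logistic regression trained by gradient descent -- a deliberately simple
# stand-in for the neural network used in the study
Xb = np.column_stack([np.ones(n), X])   # add bias column
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / n       # gradient of the log-loss

accuracy = ((1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5) == y).mean()
```

On real responses the same pipeline would be fitted with a neural network and evaluated on held-out respondents rather than on the training set.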
Procedia PDF Downloads 194
13686 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: Sahand Golmohammadi, Sana Hosseini Shirazi
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems of underground structures in rock, including tunnels. This method requires six main rock mass parameters: the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw), and stress reduction factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters effective for the stability of the mentioned structures is one of the most important goals and most necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, with the whole system. This research attempts to determine the most effective (key) parameters among the six rock mass parameters of the Q-system using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES method determines the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of using conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique based on statistical analysis of the data and the correlation coefficients between them.
In this way, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the formed interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, the SRF and Jr parameters have the maximum and minimum influence on the system (cause), respectively, while the RQD and Jw parameters are the most and least influenced by the system (effect), respectively. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the required parameters in the Q-system.
Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel
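The cause-effect bookkeeping of an RES interaction matrix is straightforward once the matrix is coded. A minimal sketch with an illustrative (made-up) 6×6 coding, not the coefficients obtained from the Azad tunnel data:

```python
import numpy as np

# Hypothetical interaction matrix for the Q-system parameters.
# Entry M[i, j] codes the influence of parameter i on parameter j;
# the diagonal is zero. Values are illustrative only.
params = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]
M = np.array([
    [0, 2, 1, 1, 2, 3],
    [2, 0, 1, 1, 1, 2],
    [1, 1, 0, 2, 1, 1],
    [1, 1, 2, 0, 2, 2],
    [1, 1, 1, 2, 0, 2],
    [3, 2, 1, 2, 2, 0],
])

cause = M.sum(axis=1)         # C: how strongly each parameter drives the system
effect = M.sum(axis=0)        # E: how strongly the system drives each parameter
interactive = cause + effect  # C + E: interactive intensity
dominance = cause - effect    # C - E: dominant (+) vs subordinate (-)

most_interactive = params[int(np.argmax(interactive))]
```

Ranking the parameters by C, E, or C + E is how key parameters are identified and weighted in an RES-based improvement of the Q value.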
Procedia PDF Downloads 73
13685 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes
Authors: Sertac Arslan, Sezer Kefeli
Abstract:
In this paper, the supercavitation phenomenon and supercavity shape design parameters are first explained; then, drag force calculation methods for high-speed supercavitating torpedoes are investigated with numerical techniques and verified against empirical studies. In order to reach speeds as high as 200-300 knots for underwater vehicles, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of speed, must be reduced. Conventional heavyweight torpedoes can reach up to ~50 knots with classic underwater hydrodynamic techniques. However, to exceed 50 knots and approach 200 knots, hydrodynamic viscous forces must be reduced or eliminated completely. This requirement motivates the supercavitation phenomenon, which can be applied to conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move through the water at very high speed enclosed in a fully developed cavitation bubble. When the torpedo moves in a cavitation envelope, generated by a cavitator in the nose section and a solid-fuel rocket engine in the rear section, it is termed a supercavitating torpedo. There are two types of cavitation: natural cavitation and ventilated cavitation. In this study, a disk cavitator is modeled with natural cavitation, and the supercavitation parameters are studied. Moreover, the drag force is calculated for a disk-shaped cavitator with numerical techniques and compared with empirical studies. Drag forces are calculated with computational fluid dynamics methods and different empirical methods, and the numerical calculation method is developed by comparison with the empirical results. In the verification study, the cavitation number (σ), drag coefficient (CD), drag force (D), and cavity wall velocity (U) are compared.
Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows
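For a disk cavitator, a common empirical relation is CD(σ) = CD0(1 + σ) with CD0 ≈ 0.82, where σ is the cavitation number. A minimal sketch under hypothetical operating conditions (sea water, 10 m depth); this reproduces only the textbook relation, not the paper's CFD results:

```python
import math

RHO = 1025.0        # sea water density, kg/m^3
P_ATM = 101325.0    # surface ambient pressure, Pa
P_CAVITY = 2340.0   # cavity (water vapour) pressure near 20 C, Pa
CD0 = 0.82          # disk cavitator drag coefficient at sigma = 0

def cavitation_number(v, depth=10.0):
    """Cavitation number sigma = (p_inf - p_c) / (0.5 rho v^2)."""
    p_inf = P_ATM + RHO * 9.81 * depth
    return (p_inf - P_CAVITY) / (0.5 * RHO * v**2)

def cavitator_drag(v, disk_diameter, depth=10.0):
    """Drag (N) on a disk cavitator using CD = CD0 * (1 + sigma)."""
    sigma = cavitation_number(v, depth)
    area = math.pi * disk_diameter**2 / 4
    return 0.5 * RHO * v**2 * area * CD0 * (1 + sigma)

v_200kn = 200 * 0.5144                          # 200 knots in m/s
drag = cavitator_drag(v_200kn, disk_diameter=0.1)  # hypothetical 10 cm disk
```

Note how σ becomes very small at supercavitating speeds, so the cavitator drag is dominated by the σ = 0 term, consistent with the discussion above.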
Procedia PDF Downloads 188
13684 Dematerialized Beings in Katherine Dunn's Geek Love: A Corporeal and Ethical Study under Posthumanities
Authors: Anum Javed
Abstract:
This study identifies the dynamic image of the human body as it continues its metamorphosis in the virtual field of reality. It calls attention to the ways in which humans start co-evolving with other life forms, technology in particular, and strive to establish a realm outside the physical framework of matter. The problem extends beyond technological ethics into the space of literary texts and criticism. A textual analysis of Geek Love (1989) by Katherine Dunn is joined with the posthumanist perspectives of Pramod K. Nayar to examine psycho-somatic changes in man's nature of being. It uncovers the meaning people give to their experiences in this budding social and cultural phenomenon of material representation, tied up with personal practices and technological innovations. It also observes an ethical, physical, and psychological reassessment of man within the context of technological evolution. The study indicates the elements that have rendered morphological freedom and new materialism in man's consciousness. Moreover, this work asks what it means to be human in this time of accelerating change, when surgeries, implants, extensions, cloning, and robotics have shaped a new sense of being. It attempts to go beyond the individual's body image and explores how objectifying media and culture have influenced people's judgement of others on new material grounds. It further argues for a decentring of the glorified image of man as an independent entity because of his energetic partnership with intelligent machines and external agents. The history of the future progress of technology is also mentioned. The methodology adopted is posthumanist techno-ethical textual analysis. This work necessitates a negotiating relationship between man and technology in order to achieve a harmonious, balanced, interconnected existence. The study concludes by recommending that an ethical code be cultivated for techno-human habituation.
Posthumanism ushers in a strong need to adopt new ethics within the terminology of neo-materialist humanism.
Keywords: corporeality, dematerialism, human ethos, posthumanism
Procedia PDF Downloads 147
13683 Personalizing Human Physical Life Routines Recognition over Cloud-based Sensor Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Pervasive computing is a growing research field that aims to recognize human physical life routines (HPLR) based on body-worn sensors such as MEMS-based technologies. The use of these technologies for human activity recognition is progressively increasing. On the other hand, personalizing human life routines using numerous machine-learning techniques has always been an intriguing topic. Various methods have demonstrated the ability to recognize basic movement patterns; however, they still need to be improved to anticipate the dynamics of human living patterns. This study introduces state-of-the-art techniques for recognizing static and dynamic patterns and forecasting challenging activities from multi-fused sensors. Furthermore, numerous MEMS signals are extracted from one self-annotated IM-WSHA dataset and two benchmarked datasets. First, the acquired raw data are filtered with z-normalization and denoising methods. Then, statistical, local binary pattern, auto-regressive model, and intrinsic time-scale decomposition features are extracted from different domains. Next, the acquired features are optimized using maximum relevance and minimum redundancy (mRMR). Finally, an artificial neural network is applied to analyze the whole system's performance. As a result, we attained a 90.27% recognition rate for the self-annotated dataset, while HARTH and KU-HAR achieved 83% on nine living activities and 90.94% on 18 static and dynamic routines, respectively. Thus, the proposed HPLR system outperformed other state-of-the-art systems when evaluated against other methods in the literature.
Keywords: artificial intelligence, machine learning, gait analysis, local binary pattern (LBP), statistical features, micro-electro-mechanical systems (MEMS), maximum relevance and minimum redundancy (mRMR)
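The first two steps of the pipeline, z-normalization and statistical feature extraction per window, can be sketched directly. The window length and the exact feature list here are illustrative, not the paper's full feature set:

```python
import numpy as np

def z_normalize(signal):
    """Zero-mean, unit-variance scaling of a raw MEMS sensor channel."""
    return (signal - signal.mean()) / signal.std()

def statistical_features(window):
    """A few time-domain statistical features commonly extracted per sliding
    window: mean, std, min, max, and zero-crossing count."""
    zero_crossings = int(np.sum(np.diff(np.sign(window)) != 0))
    return np.array([window.mean(), window.std(), window.min(),
                     window.max(), zero_crossings], dtype=float)

rng = np.random.default_rng(1)
raw = 3.0 + 0.5 * rng.standard_normal(256)  # hypothetical accelerometer channel
norm = z_normalize(raw)
feats = statistical_features(norm)
```

In the full system, feature vectors like `feats` from many windows and channels would then be ranked by mRMR before being fed to the neural network.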
Procedia PDF Downloads 20
13682 Geospatial Network Analysis Using Particle Swarm Optimization
Authors: Varun Singh, Mainak Bandyopadhyay, Maharana Pratap Singh
Abstract:
The shortest path (SP) problem concerns finding the shortest path from a specific origin to a specified destination in a given network while minimizing the total cost associated with the path. This problem has widespread applications, including vehicle routing in transportation systems, particularly in in-vehicle Route Guidance Systems (RGS), and the traffic assignment problem in transportation planning. Well-known evolutionary methods like Genetic Algorithms (GA), Ant Colony Optimization, and Particle Swarm Optimization (PSO) have been applied to complex optimization problems to overcome the shortcomings of existing shortest path analysis methods. Various researchers have reported that PSO performs better than other evolutionary optimization algorithms in terms of success rate and solution quality. Further, Geographic Information Systems (GIS) have emerged as key information systems for geospatial data analysis and visualization. This research paper focuses on the application of PSO to solving the shortest path problem between multiple points of interest (POI), based on spatial data of Allahabad City and traffic speed data collected using GPS. Geovisualization of the analysis results is carried out in GIS.
Keywords: particle swarm optimization, GIS, traffic data, outliers
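The canonical PSO velocity and position updates can be sketched generically. The cost function below is a toy stand-in for the network path cost, since a discrete route encoding is beyond a short example:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Canonical PSO: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))  # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, f(g)

# Toy cost function standing in for a path cost (minimum at the origin)
best, best_cost = pso_minimize(lambda z: float(np.sum(z**2)), dim=2)
```

For the shortest path application itself, the position vector would encode a candidate route (e.g., node priorities), with `f` evaluating its travel cost from the GPS-derived speed data.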
Procedia PDF Downloads 483
13681 Tracking Filtering Algorithm Based on ConvLSTM
Authors: Ailing Yang, Penghan Song, Aihua Cai
Abstract:
The nonlinear maneuvering target tracking problem is mainly a state estimation problem when the target motion model is uncertain. Traditional solutions include Kalman filtering, based on the Bayesian filtering framework, and extended Kalman filtering. However, these methods need prior knowledge such as the kinematics model and the state system distribution, and their performance is poor in state estimation for complex dynamic systems without such priors. Therefore, in view of the problems of traditional algorithms, a convolutional LSTM target state estimation algorithm based on self-attention memory (SAM), SAConvLSTM-SE, is proposed to learn the historical motion state of the target and the error distribution of the measurements at the current time. The measured track point data of airborne radar are processed into data sets. After supervised training, the data-driven deep neural network based on SAConvLSTM can directly output the target state at the next moment. Through experiments on two different maneuvering targets, we find that the network has stronger robustness and better tracking accuracy than existing tracking methods.
Keywords: maneuvering target, state estimation, Kalman filter, LSTM, self-attention
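The traditional baseline mentioned above, a linear Kalman filter with a constant-velocity motion model, fits in a few lines. The noise levels and the 0.5 m/s target below are made-up illustration values:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition
H = np.array([[1.0, 0.0]])             # position-only measurement
Q = 0.01 * np.eye(2)                   # process noise covariance
R = np.array([[4.0]])                  # measurement noise covariance

def kalman_track(zs):
    """Linear Kalman filter over a sequence of position measurements."""
    x = np.array([[zs[0]], [0.0]])
    P = 10.0 * np.eye(2)
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

rng = np.random.default_rng(7)
truth = 0.5 * np.arange(100)                  # target moving at 0.5 m/s
meas = truth + rng.normal(0.0, 2.0, truth.size)  # noisy radar plots
est = kalman_track(meas)
```

This is exactly the prior-knowledge dependence the abstract criticizes: `F`, `Q`, and `R` must be specified in advance, whereas the learned SAConvLSTM estimator infers the dynamics from the training tracks.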
Procedia PDF Downloads 177
13680 Crack Width Analysis of Reinforced Concrete Members under Shrinkage Effect by Pseudo-Discrete Crack Model
Authors: F. J. Ma, A. K. H. Kwan
Abstract:
Cracking caused by the shrinkage movement of concrete is a serious problem, especially when restraint is provided; it may cause severe serviceability and durability problems. The existing methods for predicting the crack width of concrete due to shrinkage movement are mainly numerical methods under simplified circumstances, and they do not agree with each other. To obtain a more unified prediction method applicable to more sophisticated circumstances, finite element crack width analysis for the shrinkage effect should be developed. However, no existing finite element analysis can predict the crack width of concrete due to shrinkage movement, owing to unsolved limitations of conventional finite element analysis. In this paper, a crack width analysis implemented by finite element analysis is presented with a pseudo-discrete crack model, which combines the traditional smeared crack model and a newly proposed crack queuing algorithm. The proposed pseudo-discrete crack model is capable of simulating separate, single cracks without adopting discrete crack elements. The improved finite element analysis can successfully simulate the stress redistribution when concrete cracks, which is crucial for predicting crack width, crack spacing, and crack number.
Keywords: crack queuing algorithm, crack width analysis, finite element analysis, shrinkage effect
Procedia PDF Downloads 419
13679 Pressure-Robust Approximation for the Rotational Fluid Flow Problems
Authors: Medine Demir, Volker John
Abstract:
Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially in the large-scale flows of the ocean and atmosphere, as well as many physical and industrial applications. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications, it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier--Stokes equations in a rotating frame have been investigated in a number of papers using classical inf-sup stable mixed methods, like Taylor--Hood pairs, to contribute to the analysis and to accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier--Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott--Vogelius pairs. This approach might, however, come with a modification of the meshes, like the use of barycentric-refined grids in the case of Scott--Vogelius pairs. Such a strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also be in conflict with the solver for the linear system.
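Schematically, the distinction discussed above between classical and pressure-robust velocity error bounds takes the following standard form known from the pressure-robustness literature (a generic sketch, not the paper's precise estimate):

```latex
% Classical inf-sup stable pairs: the pressure pollutes the velocity error
\| \nabla (u - u_h) \|_{L^2}
  \le C_1 \inf_{v_h \in V_h} \| \nabla (u - v_h) \|_{L^2}
      + \frac{C_2}{\nu} \inf_{q_h \in Q_h} \| p - q_h \|_{L^2} .

% Pressure-robust (divergence-free) pairs, e.g. Scott--Vogelius:
% the pressure term drops out entirely
\| \nabla (u - u_h) \|_{L^2}
  \le C \inf_{v_h \in V_h} \| \nabla (u - v_h) \|_{L^2} .
```

The $\nu^{-1}$ factor in the first bound is exactly why small viscosities and complicated pressures degrade classical methods, while divergence-free pairs are unaffected.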
An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods can be carried over to other types of flow problems, which is left for future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott--Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly-singular vertices. Keywords: navier-stokes equations in a rotating frame of reference, coriolis force, pressure-robust error estimate, scott-vogelius pairs of finite element spaces
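The pressure-dependent contribution described above can be made explicit. In generic form (this is the standard shape of the bound from the pressure-robustness literature, not the specific estimate derived in this contribution), a classical inf-sup stable pair satisfies a velocity bound of the type

```latex
\|\nabla(u - u_h)\|_{L^2}
  \le C \left( \inf_{v_h \in V_h} \|\nabla(u - v_h)\|_{L^2}
  + \frac{1}{\nu} \inf_{q_h \in Q_h} \|p - q_h\|_{L^2} \right),
```

whereas for a divergence-free pair such as Scott--Vogelius the $\nu^{-1}$ pressure term drops out, so the velocity error is insensitive to complicated pressures even for small viscosity coefficients.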
13678 Philosophical Foundations of Education at the Kazakh Languages by Aiding Communicative Methods
Authors: Duisenova Marzhan
Abstract:
This paper considers, from a philosophical point of view, the interactive technologies and tiered, developmental teaching of the Kazakh language to primary school pupils through the method of linguistic communication, together with the content and teaching methods formed in the education system. The values are determined by the formation of new practical approaches that could raise this work to a novel qualitative level and solve the problem. In forming the communicative competence of elementary school students, attention should also be paid to other competencies. This helps to understand the motives and needs of students' socialization, to develop their cognitive abilities, and to engage them in the language relations arising in different situations. Communicative competence is the pupils' own potential for creative language activity. In this article, the communicative method of Kazakh language teaching in primary school is presented. The purposes of learning by the communicative method are personal development, effective psychological development of the child, self-education, expansion and growth of language skills and vocabulary, socialization of children, and adoption of the laws of life in the social environment; the development of the vocabulary richness of the language that forms erudition is analyzed to ensure the continued improvement of the child's education. Keywords: communicative, culture, training, process, method, primary, competence
13677 Development and Automation of Medium-Scale NFT Hydroponic Systems: Design Methodology and State of the Art Review
Authors: Oscar Armando González-Marin, Jhon F. Rodríguez-León, Oscar Mota-Pérez, Jorge Pineda-Piñón, Roberto S. Velázquez-González., Julio C. Sosa-Savedra
Abstract:
Over the past six years, the World Meteorological Organization (WMO) has recorded the warmest years since 1880, primarily attributed to climate change. In addition, the overexploitation of agricultural land, combined with food and water scarcity, has highlighted the urgent need for sustainable cultivation methods. Hydroponics has emerged as a sustainable farming technique that enables plant cultivation using nutrient solutions, without traditional soil. Among hydroponic methods, the Nutrient Film Technique (NFT) supports plant growth by continuously circulating a nutrient solution. This approach allows monitoring and precise control of nutritional parameters, with potential for automation and technological integration. This study presents the state of the art of automated NFT hydroponic systems, discussing their design methodologies and considerations for implementation. Moreover, a medium-scale NFT system developed at CICATA-QRO is introduced, detailing its current manual management and progress toward automation. Keywords: automation, hydroponics, nutrient film technique, sustainability
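The automation step the abstract describes typically reduces to a periodic control loop over the nutrient solution's measurable parameters. A minimal sketch follows; all setpoints and actuator names are illustrative assumptions, not the CICATA-QRO system's actual parameters.

```python
# Minimal threshold-based nutrient controller for one NFT control cycle.
# Setpoint bands and actuator names are hypothetical placeholders.

def control_step(ph, ec_ms_cm, ph_range=(5.5, 6.5), ec_range=(1.5, 2.5)):
    """Return the list of actuator commands for one control cycle."""
    actions = []
    if ph < ph_range[0]:
        actions.append("dose_ph_up")          # solution too acidic
    elif ph > ph_range[1]:
        actions.append("dose_ph_down")        # solution too alkaline
    if ec_ms_cm < ec_range[0]:
        actions.append("dose_nutrient_concentrate")  # too dilute
    elif ec_ms_cm > ec_range[1]:
        actions.append("dose_fresh_water")    # too concentrated
    return actions or ["hold"]

print(control_step(6.0, 2.0))   # within both bands
print(control_step(7.1, 1.2))   # too alkaline and too dilute
```

In a deployed system this function would be driven by sensor polling and a dosing-pump interface; the sketch only shows the decision logic.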
13676 Introduction of an Approach of Complex Virtual Devices to Achieve Device Interoperability in Smart Building Systems
Authors: Thomas Meier
Abstract:
One of the major challenges for sustainable smart building systems is to support device interoperability, i.e., connecting sensor or actuator devices from different vendors and presenting their functionality to external applications. Furthermore, smart building systems are supposed to connect with devices that are not available yet, i.e., devices that become available on the market sometime later. It is of vital importance that a sustainable smart building platform provides an appropriate external interface that can be leveraged by external applications and smart services. An external platform interface must be stable and independent of specific devices and should support flexible and scalable usage scenarios. A typical approach applied in smart home systems is based on a generic device interface used within the smart building platform. Device functions, even of rather complex devices, are mapped to that generic base type interface by means of specific device drivers. Our new approach, presented in this work, extends that approach by using the smart building system's rule engine to create complex virtual devices that can represent the most diverse properties of real devices. We examined and evaluated both approaches by means of a practical case study using a smart building system that we have developed. We show that the solution we present allows the highest degree of flexibility without affecting external application interface stability and scalability. In contrast to other systems, our approach supports complex virtual device configuration at the application layer (e.g., by administrative users) instead of device configuration at the platform layer (e.g., by platform operators). Based on our work, we can show that our approach supports almost arbitrarily flexible use case scenarios without affecting the external application interface stability.
However, the cost of this approach is additional configuration overhead and additional resource consumption at the IoT platform level, which must be considered by platform operators. We conclude that the concept of complex virtual devices presented in this work can be applied to improve the usability and device interoperability of sustainable intelligent building systems significantly. Keywords: Internet of Things, smart building, device interoperability, device integration, smart home
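The idea of a complex virtual device built on top of generic base devices can be sketched in a few lines. The class names, the rule callback, and the averaging rule below are hypothetical illustrations; the paper's actual platform and rule-engine API are not public.

```python
# Sketch: a "complex virtual device" composed from generic base devices by a
# rule defined at the application layer. All names are illustrative.

class Device:
    def __init__(self, name):
        self.name = name
        self.state = {}
    def set(self, prop, value):
        self.state[prop] = value
    def get(self, prop):
        return self.state.get(prop)

class VirtualDevice(Device):
    """Presents several real devices behind one stable interface."""
    def __init__(self, name, rule, members):
        super().__init__(name)
        self.rule = rule          # function: member devices -> virtual state
        self.members = members
    def get(self, prop):
        return self.rule(self.members).get(prop)

# Two vendor-specific sensors exposed through the generic base type:
t1, t2 = Device("temp_north"), Device("temp_south")
t1.set("temperature", 20.0)
t2.set("temperature", 24.0)

# Virtual "room climate" device averaging its members:
room = VirtualDevice(
    "room_climate",
    rule=lambda ms: {"temperature": sum(m.get("temperature") for m in ms) / len(ms)},
    members=[t1, t2],
)
print(room.get("temperature"))  # 22.0
```

External applications only ever see `room`, so swapping or adding member devices does not change the external interface, which is the stability property the abstract emphasizes.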
13675 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method
Authors: M. M. Qasaymeh, M. A. Khodeir
Abstract:
Subspace channel estimation methods have been studied widely. They rely on a subspace decomposition of the covariance matrix to separate the signal subspace from the noise subspace. The decomposition is normally done by either an Eigenvalue Decomposition (EVD) or a Singular Value Decomposition (SVD) of the Auto-Correlation Matrix (ACM). However, the subspace decomposition process is computationally expensive. In this paper, the multipath channel estimation problem for a Slow Frequency Hopping (SFH) system using a noise-subspace-based method is considered. An efficient method to estimate the multipath time delays is proposed, applying the MUltiple SIgnal Classification (MUSIC) algorithm to the null space extracted by the Rank Revealing LU factorization (RRLU). The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed novel method decreases the computational complexity to approximately half that of RRQR-based methods while keeping the same performance. Computer simulations are also included to demonstrate the effectiveness of the proposed scheme. Keywords: frequency hopping, channel model, time delay estimation, RRLU, RRQR, MUSIC, LS-ESPRIT
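The MUSIC step described above can be illustrated on synthetic data. In an SFH system, a path with delay tau contributes a phase ramp across the hop frequencies, so delays act like sinusoidal frequencies in the hop index. The sketch below extracts the noise subspace with a plain eigendecomposition rather than the paper's RRLU factorization, and all signal parameters are invented.

```python
import numpy as np

# Toy MUSIC-style multipath delay estimation for a frequency-hopped signal.
# A path with delay tau contributes exp(-2j*pi*k*df*tau) at hop k, so delays
# appear as complex-exponential frequencies. The noise subspace is taken from
# an eigendecomposition of the sample covariance (the ACM); the paper instead
# obtains the null space via RRLU, which roughly halves the cost.

rng = np.random.default_rng(0)
K, N, df = 16, 400, 0.1               # hop channels, snapshots, hop spacing
true_delays = [1.0, 3.0]              # invented delays (arbitrary units)
k = np.arange(K)

def steering(tau):
    return np.exp(-2j * np.pi * k * df * tau)

A = np.column_stack([steering(t) for t in true_delays])           # K x 2
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.05 * (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N)))
X = A @ S + noise

R = X @ X.conj().T / N                    # sample covariance
w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, : K - len(true_delays)]         # noise subspace

taus = np.arange(0.0, 5.0, 0.01)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                 for t in taus])

# The two largest local maxima of the pseudospectrum locate the delays.
idx = [i for i in range(1, len(taus) - 1)
       if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
idx.sort(key=lambda i: spec[i], reverse=True)
peaks = sorted(round(float(taus[i]), 2) for i in idx[:2])
print(peaks)  # close to [1.0, 3.0]
```

Replacing `np.linalg.eigh` with a rank-revealing factorization of the data matrix is where the paper's computational saving comes from; the pseudospectrum evaluation is unchanged.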
13674 Performance Prediction of a SANDIA 17-m Vertical Axis Wind Turbine Using Improved Double Multiple Streamtube
Authors: Abolfazl Hosseinkhani, Sepehr Sanaye
Abstract:
Different approaches have been used to predict the performance of vertical axis wind turbines (VAWT), such as experimental, computational fluid dynamics (CFD), and analytical methods. Analytical methods, such as momentum models that use streamtubes, have low computational cost and sufficient accuracy. The double multiple streamtube (DMST) model is one of the most commonly used momentum models; it divides the rotor plane of a VAWT into upwind and downwind halves. In fact, results from the DMST method have shown some discrepancy compared with experimental results, because the Darrieus turbine is a complex and aerodynamically unsteady configuration. In this study, analytical-experimental-based corrections, including dynamic stall, streamtube expansion, and finite blade length corrections, are used to improve the DMST method. Results indicate that using these corrections for a SANDIA 17-m VAWT leads to improved DMST results. Keywords: vertical axis wind turbine, analytical, double multiple streamtube, streamtube expansion model, dynamic stall model, finite blade length correction
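The core of each streamtube in a DMST calculation is a momentum balance: the thrust coefficient demanded by the blade elements is equated to the actuator-disc relation C_T = 4a(1 - a) and solved for the induction factor a, once for the upwind half and once for the downwind half with the reduced inflow. A minimal sketch of that inversion follows; the dynamic-stall, expansion, and finite-blade corrections introduced in the paper are omitted, and the C_T value is illustrative.

```python
import math

# Actuator-disc momentum inversion used per streamtube in DMST:
# solve C_T = 4a(1 - a) for the physically meaningful (smaller) root.

def induction_factor(ct):
    """Return a solving C_T = 4a(1-a); valid for 0 <= C_T <= 1."""
    if not 0.0 <= ct <= 1.0:
        raise ValueError("momentum inversion valid only for 0 <= C_T <= 1")
    return 0.5 * (1.0 - math.sqrt(1.0 - ct))

a_up = induction_factor(0.6)       # upwind half, illustrative C_T
u_wake = 1.0 - 2.0 * a_up          # normalized velocity entering downwind half
print(round(a_up, 4), round(u_wake, 4))
```

In the full model, C_T itself depends on a through the blade-element forces, so this inversion sits inside a fixed-point iteration per streamtube; above C_T = 1, momentum theory breaks down and empirical corrections (e.g., Glauert's) are substituted.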
13673 Analysis of Public Space Usage Characteristics Based on Computer Vision Technology - Taking Shaping Park as an Example
Authors: Guantao Bai
Abstract:
Public space is an indispensable component of the urban built environment, and evaluating its usage characteristics more accurately can help improve its spatial quality. Compared to traditional survey methods, computer vision technology based on deep learning has advantages such as dynamic observation and low cost. This study takes the public space of Shaping Park as an example and, based on deep learning computer vision technology, processes and analyzes image data of the public space to obtain its usage characteristics and spatiotemporal characteristics. The research found that the timing of spontaneous activities in public spaces is relatively random, with a relatively short average activity duration, while social activities have relatively stable timing and a longer average duration. Computer vision technology based on deep learning can effectively describe the spatial usage characteristics of the research area, making up for the shortcomings of traditional research methods and providing support for creating good public spaces. Keywords: computer vision, deep learning, public spaces, using features
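The dwell-time statistic behind the finding above is straightforward once the vision pipeline has produced per-track detection intervals. A sketch follows; the track data and activity labels are illustrative placeholders, not the Shaping Park dataset.

```python
from collections import defaultdict

# Given per-track intervals (track id, start s, end s, activity label) from a
# detection/tracking pipeline, compute mean duration per activity type.
# The records below are invented for illustration.

tracks = [
    ("t1", 0, 40, "spontaneous"),
    ("t2", 10, 250, "social"),
    ("t3", 60, 95, "spontaneous"),
    ("t4", 100, 380, "social"),
]

durations = defaultdict(list)
for _, start, end, label in tracks:
    durations[label].append(end - start)

means = {label: sum(d) / len(d) for label, d in durations.items()}
print(means)  # social activities dwell much longer on average
```

The same aggregation, binned by hour of day, yields the spatiotemporal usage profile the study reports.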
13672 Geochemistry of Nutrients in the South Lagoon of Tunis, Northeast of Tunisia, Using Multivariable Methods
Authors: Abidi Myriam, Ben Amor Rim, Gueddari Moncef
Abstract:
Understanding the ecosystem response to a restoration project is essential to assess its rehabilitation; indeed, the time elapsed after restoration is a critical indicator of the restoration's real success. In this regard, the south lagoon of Tunis, a shallow Mediterranean coastal area, has suffered several forms of pollution. To resolve this environmental problem, a large restoration project of the lagoon was undertaken. The main changes brought by this restoration work are the decrease of the residence time of the lagoon water and of the nutrient concentrations. In this paper, we attempt to evaluate the trophic state of the lagoon water, to assess the risk of eutrophication almost 16 years after its restoration. To attain this objective, water quality monitoring was undertaken. Geochemical methods and multivariate statistical tools were used to identify and analyze the natural and anthropogenic factors governing the nutrient concentrations of the lagoon water. Results show that the nutrients have dual sources, owing to the discharge of municipal wastewater from Megrine City on the south side of the lagoon. The Carlson index shows that the south lagoon of Tunis is eutrophic and may show limited summer anoxia. Keywords: geochemistry, nutrients, statistical analysis, the south lagoon of Tunis, trophic state
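The Carlson index invoked above is a standard set of logarithmic sub-indices; the formulas below are Carlson's published ones, while the chlorophyll-a value is an illustrative number, not a measurement from the lagoon.

```python
import math

# Carlson trophic state index (TSI) sub-indices and the usual class bands.

def tsi_chlorophyll(chl_ug_l):        # chlorophyll-a, micrograms per liter
    return 9.81 * math.log(chl_ug_l) + 30.6

def tsi_secchi(depth_m):              # Secchi disk depth, meters
    return 60.0 - 14.41 * math.log(depth_m)

def tsi_total_p(tp_ug_l):             # total phosphorus, micrograms per liter
    return 14.42 * math.log(tp_ug_l) + 4.15

def trophic_class(tsi):
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"

tsi = tsi_chlorophyll(12.0)           # illustrative chlorophyll-a value
print(round(tsi, 1), trophic_class(tsi))
```

A TSI above 50 from any of the three sub-indices is the usual threshold for calling a water body eutrophic, which is the criterion the abstract's conclusion rests on.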
13671 Translanguaging and Cross-languages Analyses in Writing and Oral Production with Multilinguals: a Systematic Review
Authors: Maryvone Cunha de Morais, Lilian Cristine Hübner
Abstract:
Based on a translanguaging theoretical approach, which considers languages not as separate entities but as an entire repertoire available to bilingual individuals, this systematic review aimed at analyzing the methods (aims, samples investigated, types of stimuli, and analyses) adopted by studies on translanguaging practices associated with written and oral tasks (separately or integrated) in bilingual education. The PRISMA criteria for systematic reviews were adopted, with the descriptors "translanguaging", "bilingual education" and/or "written and oral tasks", to search the Pubmed/Medline, Lilacs, Eric, Scopus, PsycINFO, and Web of Science databases for articles published between 2017 and 2021. 280 records were found, and after applying the inclusion/exclusion criteria, 24 articles were considered for this analysis. The results showed that translanguaging practices were investigated in four studies focused on written production analyses, ten focused on oral production analyses, and ten focused on both written and oral production analyses. The majority of the studies followed a qualitative approach, while five studies attempted to study translanguaging with quantitative statistical measures. Several types of methods were used to investigate translanguaging practices in written and oral production, with different approaches and tools, indicating that the methods are still in development. Moreover, the findings showed that students' interactions have received significant attention, and studies have been developed not just in language classes in bilingual education, but also in diverse educational and theoretical contexts such as Content and Language Integrated Learning, task repetition, science classes, collaborative writing, storytelling, peer feedback, Speech Act theory and collective thinking, language ideologies, conversational analysis, and discourse analysis.
The studies, whether focused on writing tasks, oral tasks, or both, have significant research and pedagogical implications, grounded in the view of integrated languages in bi- and multilinguals. Keywords: bilingual education, oral production, translanguaging, written production
13670 Numerical Study on Parallel Rear-Spoiler on Super Cars
Authors: Anshul Ashu
Abstract:
Computers are applied to vehicle aerodynamics in two ways: Computational Fluid Dynamics (CFD) and Computer Aided Flow Visualization (CAFV). Of the two, CFD is chosen here because it shows the results with computer graphics. The simulation of the flow field around the vehicle is one of the important CFD applications. The flow field can be solved numerically using panel methods, the k-ε method, and direct simulation methods. The spoiler is a tool in vehicle aerodynamics used to minimize unfavorable aerodynamic effects around the vehicle, and a parallel spoiler is a set of two spoilers designed in such a manner that it can effectively reduce drag. In this study, the standard k-ε model is used to simulate the external flow field over simplified versions of the Bugatti Veyron, Audi R8, and Porsche 911. The flow is simulated for variable Reynolds numbers, at three different levels: first over the model without a rear spoiler, second over the model with a single rear spoiler, and third over the model with a parallel rear-spoiler. The second and third levels vary the following parameters: the shape of the spoiler, the angle of attack, and the attachment position. A thorough analysis of the simulation results has been carried out, and a new parallel spoiler is designed. It shows a small improvement in vehicle aerodynamics, with a decrease in aerodynamic drag and lift, leading to better fuel economy and traction force for the model. Keywords: drag, lift, flow simulation, spoiler
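Spoiler configurations are usually compared through dimensionless force coefficients rather than raw solver forces, C = 2F / (rho * V^2 * A). The sketch below shows that reduction; the force values, speed, and reference area are illustrative, not the simulated Veyron/R8/911 results.

```python
# Reduce a solver force to a dimensionless coefficient: C = 2F / (rho V^2 A).
# Air density, speed, frontal area, and forces below are illustrative.

def force_coefficient(force_n, rho=1.225, v_ms=50.0, area_m2=2.0):
    return 2.0 * force_n / (rho * v_ms ** 2 * area_m2)

drag_no_spoiler = force_coefficient(1070.0)   # hypothetical baseline drag
drag_parallel = force_coefficient(1010.0)     # hypothetical parallel-spoiler drag
print(round(drag_no_spoiler, 3), round(drag_parallel, 3))
```

Because the coefficient normalizes out speed and size, the same comparison holds across the variable Reynolds numbers used in the study.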
13669 Securing Online Voting With Blockchain and Smart Contracts
Authors: Anant Mehrotra, Krish Phagwani
Abstract:
Democratic voting is vital for any country, but current methods like ballot papers or EVMs have drawbacks, including transparency issues, low voter turnout, and security concerns. Blockchain technology offers a potential solution by providing a secure, decentralized, and transparent platform for e-voting. With features like immutability, security, and anonymity, blockchain combined with smart contracts can enhance trust and prevent vote tampering. This paper explores an Ethereum-based e-voting application written in Solidity, showcasing a web app that prevents duplicate voting through a token-based system, while also discussing the advantages and limitations of blockchain in digital voting. The paper reviews blockchain-based voting systems, highlighting strategies and guidelines to create a comprehensive electronic voting system that leverages cryptographic techniques, such as zero-knowledge proofs, to enhance privacy. It addresses limitations of existing e-voting solutions, including cost, identity management, and scalability, and provides key insights for organizations looking to design their own blockchain-based voting systems. Keywords: electronic voting, smart contracts, blockchain based voting, security
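The token-based duplicate-vote guard mentioned above can be modeled in a few lines. The real implementation is a Solidity contract; this Python sketch only illustrates the bookkeeping, with a set standing in for the on-chain mapping of spent tokens.

```python
# Language-agnostic model of a token-based duplicate-vote guard. In the
# actual Solidity contract, the spent-token set would be an on-chain mapping
# and a reused token would cause the transaction to revert.

class Ballot:
    def __init__(self, candidates):
        self.tally = {c: 0 for c in candidates}
        self.spent_tokens = set()

    def vote(self, token, candidate):
        if token in self.spent_tokens:   # duplicate vote: reject
            return False
        if candidate not in self.tally:  # unknown candidate: reject
            return False
        self.spent_tokens.add(token)
        self.tally[candidate] += 1
        return True

b = Ballot(["alice", "bob"])
print(b.vote("tok-1", "alice"))  # True: first use of the token
print(b.vote("tok-1", "bob"))    # False: token already spent
print(b.tally)
```

On chain, immutability gives this bookkeeping its tamper-resistance: once a token is marked spent in a mined block, no later transaction can unspend it.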
13668 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain
Authors: Zachary Blanks, Solomon Sonya
Abstract:
Poaching presents a serious threat to endangered animal species, environmental conservation efforts, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user, to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machines, etc.). Finally, we assess the performance of the models and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research groups at the University of Southern California (USC), working in conjunction with the Department of Homeland Security to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching.
This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predict the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire seasons, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules by entire seasons. Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection
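Predictive mean matching, the first imputation method named above, can be sketched compactly: fit a model on the complete cases, then replace each missing value with the observed value of the donor whose model prediction is closest. The one-feature linear model and the data below are illustrative, not the poaching dataset.

```python
# Sketch of predictive mean matching (PMM): impute a missing target with the
# OBSERVED value of the complete case whose predicted mean is nearest.
# Single-feature least-squares fit; data are invented for illustration.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return lambda x: my + b * (x - mx)

# (feature, target); None marks a missing target value.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, None), (6, 11.9)]
complete = [(x, y) for x, y in data if y is not None]
predict = fit_line(*zip(*complete))

imputed = []
for x, y in data:
    if y is None:
        # donor = complete case with the nearest predicted mean
        donor = min(complete, key=lambda c: abs(predict(c[0]) - predict(x)))
        y = donor[1]
    imputed.append((x, y))
print(imputed)
```

Because the imputed value is always a real observed value rather than a model prediction, PMM preserves the marginal distribution of the target, which matters when the downstream classifier is sensitive to implausible interpolated values.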
13667 A Comparative Study of Euglena gracilis Cultivations for Improving Laminaribiose Phosphorylase Production
Authors: Akram Abi, Clarissa Müller, Hans-Joachim Jördening
Abstract:
Laminaribiose is a beta-1,3-glycoside which is used in the medical field for the treatment of dermatitis and can also be used as a building block for new pharmaceutics. The conventional process of laminaribiose production is the uneconomical hydrolysis of laminarin extracted from natural polysaccharides of plant origin. A more economical approach, however, is the enzymatic synthesis of laminaribiose via a reverse phosphorylase reaction catalyzed by laminaribiose phosphorylase (LP) from Euglena gracilis. Different cultivation methods of Euglena gracilis and their effect on LP production have been investigated. Buffered/unbuffered heterotrophic and mixotrophic cultivations of Euglena gracilis have been carried out. Changes in biomass and LP production, glucose level and pH, and cell count and shape have been monitored over time. The results, obtained from experiments each performed in three repetitions, show that in the heterotrophic cultivation of Euglena gracilis not only is more biomass produced compared to mixotrophic cultivation, but a higher specific protein concentration is also achieved. Furthermore, the LP activity test showed that the protein extracted from heterotrophically cultured cells has a higher LP activity. It was also observed that the cells develop distinctly different shapes between these two cultures and have different length-to-width ratios. Taking the heterotrophic culture as the more efficient cultivation method for LP production, another comparative experiment between buffered and unbuffered heterotrophic cultures was carried out, which showed that the unbuffered culture has advantages over the buffered one with respect to both LP production and the resulting activity. A heterotrophic cultivation of Euglena gracilis in a 5 L bioreactor with controlled operating conditions showed a distinct improvement in all aspects of the culture compared to the shake flask cultivations.
Biomass production was improved from 5 to more than 8 g/l (dry weight), which resulted in a specific protein concentration of 45 g/l in the heterotrophic cultivation in the bioreactor. In further attempts to improve LP production, different purification methods were tested, and each method was checked through an activity assay. A laminaribiose yield of 35% was achieved, which was by far the highest amount among the different methods tested. Keywords: euglena gracilis, heterotrophic culture, laminaribiose production, mixotrophic culture
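Biomass comparisons like the one above are normally reduced to a specific growth rate, mu = ln(X1/X0) / dt. As a worked example of that bookkeeping (the inoculum density and batch duration below are assumed placeholders; the abstract states only the final biomass figures):

```python
import math

# Specific growth rate from two biomass timepoints: mu = ln(X1/X0) / dt.
# The 0.5 g/L inoculum and 96 h duration are assumed, not from the study;
# only the 8 g/L bioreactor endpoint comes from the abstract.

def specific_growth_rate(x0_g_l, x1_g_l, hours):
    """Standard exponential-phase estimate, in 1/h."""
    return math.log(x1_g_l / x0_g_l) / hours

mu = specific_growth_rate(0.5, 8.0, hours=96.0)
print(round(mu, 4))  # 1/h
```

With real sampling data, mu would be fitted over the exponential phase only, not across the whole batch as this placeholder does.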
13666 A Review on Climate Change and Sustainable Agriculture in Southeast Nigeria
Authors: Jane O. Munonye
Abstract:
Climate change has both negative and positive effects on agricultural production. For agriculture to be sustainable under adverse climate change conditions, some natural measures are needed. The issue is to produce more food with the available natural resources while reducing the contribution of agriculture to climate change. The study reviewed climate change and sustainable agriculture in southeast Nigeria. Data for the study were from secondary sources: ten scientific papers were consulted, and data for the review were collected from three of them. The objectives of the paper were as follows: to review the effect of climate change on one major arable crop in southeast Nigeria (yam; Dioscorea rotundata), the evidence of climate change impacts, and methods for sustainable agricultural production under adverse weather conditions. Some climatic parameters, such as sunshine, relative humidity, and rainfall, have a negative relationship with yam production, significant at the 10% probability level. Crop production was predicted to decline by 25% per hectare by 2060, while for livestock production, an increased incidence of diseases and pathogens is the major effect. Methods for sustainable agriculture and the damage to natural resources caused by climate change were highlighted. Agriculture needs to be transformed as the climate changes to enable the sector to be sustainable. There should be a policy in place to facilitate the integration of sustainability into Nigerian agriculture. Keywords: agriculture, climate change, sustainability, yam
13665 Radar Cross Section Modelling of Lossy Dielectrics
Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit
Abstract:
The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement infield measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique- and normal-incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics, exhibiting different material properties, were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep.
This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as against the measured data. It was also observed that the accuracy of the RCS data for the dielectrics can be frequency- and angle-dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented, and the validation thereof will be discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results will be shown, and the importance of accurate dielectric material properties for validation purposes will be discussed. Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation
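For canonical flat plates like those measured here, a quick closed-form sanity check on the simulations is the physical-optics broadside RCS, sigma = 4*pi*A^2 / lambda^2. Note this formula is for a perfectly conducting plate at normal incidence, so the lossy dielectric plates in the study would fall below it; the plate size and frequency below are illustrative.

```python
import math

# Physical-optics broadside RCS of a flat plate: sigma = 4*pi*A^2 / lambda^2.
# Valid for a perfectly conducting plate at normal incidence, plate large
# relative to the wavelength; used here only as a sanity-check bound.

C = 299792458.0  # speed of light, m/s

def plate_rcs_broadside(side_m, freq_hz):
    wavelength = C / freq_hz
    area = side_m ** 2
    sigma = 4.0 * math.pi * area ** 2 / wavelength ** 2
    return sigma, 10.0 * math.log10(sigma)   # m^2 and dBsm

sigma, dbsm = plate_rcs_broadside(0.1, 10e9)  # 10 cm square plate at 10 GHz
print(round(sigma, 3), round(dbsm, 2))
```

Because sigma scales with 1/lambda^2, the same plate swept over the 2 to 18 GHz measurement band spans roughly 19 dB of broadside RCS, which is why frequency-dependent accuracy is worth reporting.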