Search results for: inquiry-based instruction
23173 Verification of the Supercavitation Phenomena: Investigation of the Cavity Parameters and Drag Coefficients for Different Types of Cavitator
Authors: Sezer Kefeli, Sertaç Arslan
Abstract:
Supercavitation is a pressure-dependent process that offers the opportunity to eliminate wetted-surface effects on an underwater vehicle, exploiting the differences in viscosity and velocity between the liquid (freestream) and gas phases. Cavitation occurs when the pressure drops rapidly or the temperature rises in the liquid phase. In this paper, pressure-based cavitation is investigated because it is the type generally encountered underwater. These vapor-filled, pressure-based cavities are unstable and harmful for any underwater vehicle, because the cavities (bubbles or voids) generate intense shock waves when they collapse. Supercavitation, by contrast, is a desired and more stable phenomenon than general pressure-based cavitation. It offers a way to minimize form drag, which has revived interest in supercavitating vehicles. When the proper conditions are established, either by increasing the operating speed of the underwater vehicle or by decreasing the pressure difference between the free stream and an artificial pressure, continuous supercavitation can be maintained. There are two ways to obtain stable and continuous supercavitation, called natural and artificial supercavitation. To generate natural supercavitation, various mechanical structures known as cavitators have been devised. Since the 1900s, many cavitator types have been studied in the literature, either experimentally or numerically on CFD platforms, with the intent of observing natural supercavitation. In this paper, experimental results are first compiled and trend lines are generated for the supercavitation parameters in terms of the cavitation number, the form drag coefficient (C_D), the dimensionless cavity diameter (d_m/d_c), and the dimensionless cavity length (L_c/d_c). Natural cavitation verification studies are then carried out for disk- and cone-shaped cavitators. In addition, the supercavitation parameters are analyzed numerically at different operating conditions, and the CFD results are fitted to the trend lines of the experimental results. The aims of this paper are to derive a generally accepted drag coefficient equation for disk and cone cavitators at different cavitator half angles and to investigate the supercavitation parameters with respect to the cavitation number. In total, 165 CFD analyses are performed at different cavitation numbers in FLUENT version 21R2. Five cavitator types are modeled in SCDM according to the cavitator half angle. A CFD database is then built from the numerical results, new trend lines are generated for the supercavitation parameters, and these trend lines are compared with the experimental results. Finally, the generally accepted drag coefficient equation and the equations of the supercavitation parameters are derived.
Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavitating flows, supercavitation parameters, drag reduction, viscous force elimination, natural cavitation verification
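A minimal sketch of the trend-line fitting step described above: fitting a drag coefficient law to (cavitation number, C_D) pairs with least squares. The sample points are placeholders, and the linear-in-sigma form C_D = C_D0(1 + sigma) is a commonly used empirical relation for disk cavitators, assumed here rather than taken from the paper's derived equation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (sigma, C_D) pairs -- placeholders, not the paper's measurements.
sigma = np.array([0.05, 0.08, 0.10, 0.15, 0.20])
cd    = np.array([0.86, 0.89, 0.90, 0.94, 0.98])

def drag_model(sigma, cd0):
    """Common empirical form for a disk cavitator: C_D = C_D0 * (1 + sigma)."""
    return cd0 * (1.0 + sigma)

(cd0_fit,), _ = curve_fit(drag_model, sigma, cd)
print(f"fitted C_D0 = {cd0_fit:.3f}")   # zero-cavitation-number drag coefficient

# The dimensionless cavity size trends (L_c/d_c, d_m/d_c) can be fitted the same
# way with an assumed functional form, e.g. a power law a * sigma**b.
def cavity_length(sigma, a, b):
    return a * sigma**b
```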
Procedia PDF Downloads 131
23172 Finding the Optimal Meeting Point Based on Travel Plans in Road Networks
Authors: Mohammad H. Ahmadi, Vahid Haghighatdoost
Abstract:
Given a set of source locations for a group of friends, a set of trip plans for each group member as a sequence of Categories-of-Interest (COIs) (e.g., restaurant), and a specific COI as a common destination where all group members will gather, the goal of Meeting Point Based on Trip Plans (MPTP) queries is to find a Point-of-Interest (POI) from the different COIs such that the aggregate travel distance for the group is minimized. In this work, we considered two aggregate functions, Sum and Max. To answer this query, we propose an efficient pruning technique for shrinking the search space. Our approach contains three steps. In the first step, it prunes the search space around the source locations. In the second step, it prunes the search space around the centroid of the source locations. Finally, we compute the intersection of all pruned areas as the final refined search space. We prove that POIs beyond the refined area cannot be part of the optimal answer set. The paper also covers an extensive performance study of the proposed technique.
Keywords: meeting point, trip plans, road networks, spatial databases
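A hedged sketch of the final evaluation step only: ranking the candidate POIs that survive pruning by their Sum or Max aggregate travel distance. Euclidean distance stands in for the road-network distance used in the paper, and the pruning itself is not shown; all coordinates are assumed.

```python
import math

def dist(p, q):
    # Euclidean stand-in; the paper works on road-network distances.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def best_meeting_poi(sources, candidate_pois, aggregate="sum"):
    """Pick the candidate POI minimizing the group's aggregate travel distance."""
    agg = sum if aggregate == "sum" else max
    return min(candidate_pois, key=lambda poi: agg(dist(s, poi) for s in sources))

sources = [(0, 0), (4, 0), (2, 3)]       # group members' locations (assumed)
candidates = [(1, 1), (2, 2), (3, 0)]    # POIs inside the refined search space (assumed)
print(best_meeting_poi(sources, candidates, "sum"))
print(best_meeting_poi(sources, candidates, "max"))
```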
Procedia PDF Downloads 185
23171 Microgrid Design Under Optimal Control With Batch Reinforcement Learning
Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion
Abstract:
Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks by integrating intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating-cost computation methodology for different microgrid designs based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method based on Markov decision processes that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels, and an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for near-optimal operating-cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires significant computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids and, especially, to reduce the computation time of operating-cost estimation across several microgrid configurations. BCQ is an offline RL algorithm known to be data-efficient and able to learn better policies than online RL algorithms trained on the same buffer. The general idea is to use the learned policies of agents trained in similar environments to build a buffer. The latter is used to train BCQ, so that agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
Keywords: batch-constrained reinforcement learning, control, design, optimal
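A toy illustration of the offline idea behind this approach: learning a Q-table purely from a fixed buffer of transitions, with no further interaction with the environment. This is only the batch-learning skeleton; the actual BCQ algorithm additionally constrains the selected actions to those supported by the buffer and uses deep networks, neither of which is shown, and the transitions here are synthetic rather than microgrid data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
gamma, alpha = 0.95, 0.1

# Fixed buffer of (state, action, reward, next_state) collected beforehand,
# e.g. by agents trained in similar microgrid environments (synthetic here).
buffer = [(rng.integers(n_states), rng.integers(n_actions),
           rng.normal(), rng.integers(n_states)) for _ in range(5000)]

Q = np.zeros((n_states, n_actions))
for _ in range(50):                      # several passes over the same buffer
    for s, a, r, s_next in buffer:
        target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])

policy = Q.argmax(axis=1)                # greedy policy used to evaluate operating cost
print(policy)
```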
Procedia PDF Downloads 123
23170 Parameter Estimation of Additive Genetic and Unique Environment (AE) Model on Diabetes Mellitus Type 2 Using Bayesian Method
Authors: Andi Darmawan, Dewi Retno Sari Saputro, Purnami Widyaningsih
Abstract:
Diabetes mellitus (DM) is a chronic disease in humans that occurs when the pancreas cannot produce enough insulin or the body uses insulin ineffectively, causing an increased level of glucose in the blood, a condition called hyperglycemia. In Indonesia, DM is a serious health problem because it can cause blindness, kidney disease, diabetic foot (gangrene), and stroke. DM can also be classified by its main causes into type 1, type 2, and gestational diabetes. Type 1 diabetes, previously known as insulin-dependent diabetes, is due to a lack of insulin production. Type 2 diabetes, previously known as non-insulin-dependent diabetes, is due to ineffective use of insulin, while gestational diabetes is hyperglycemia first found during pregnancy. The type most commonly found in patients is DM type 2. The main factors of this disease are genetic (A) and lifestyle (E). A disease with these two factors can be modeled with the additive genetic and unique environment (AE) model. This article discusses parameter estimation of the AE model using a Bayesian method and a simulation of the inheritance of the trait from parent to offspring. In the AE model, there are a response variable, predictor variables, and parameters that represent the population under study. The population can be measured through a random sample; the response and predictor variables can be determined from the sample, while the parameters are unknown, so they must be estimated from the sample. The AE model parameters are estimated from a joint posterior distribution. A simulation was conducted to obtain the genetic variance and the lifestyle variance. The simulation results are 0.3600 for the genetic variance and 0.0899 for the lifestyle variance. Therefore, the variance of the genetic factor in DM type 2 is greater than that of the lifestyle factor.
Keywords: AE model, Bayesian method, diabetes mellitus type 2, genetic, life style
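A minimal sketch of Bayesian-style estimation of the two AE variance components from simulated parent-offspring pairs. It assumes the additive-genetic covariance between a parent and an offspring equals half the genetic variance, and it uses a simple grid posterior with flat priors rather than the article's actual procedure; the true variance values and sample size are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
true_sa, true_se, n_pairs = 0.36, 0.09, 400   # illustrative variance components

def pair_cov(sa, se):
    # Parent-offspring covariance under an AE model: shared additive genetics = sa / 2.
    return np.array([[sa + se, 0.5 * sa], [0.5 * sa, sa + se]])

data = rng.multivariate_normal([0, 0], pair_cov(true_sa, true_se), size=n_pairs)

# Grid posterior with flat priors over (sigma_A^2, sigma_E^2).
sa_grid = np.linspace(0.05, 0.8, 40)
se_grid = np.linspace(0.01, 0.5, 40)
logpost = np.array([[multivariate_normal.logpdf(data, mean=[0, 0],
                                                cov=pair_cov(sa, se)).sum()
                     for se in se_grid] for sa in sa_grid])
post = np.exp(logpost - logpost.max())
post /= post.sum()

sa_mean = (post.sum(axis=1) * sa_grid).sum()
se_mean = (post.sum(axis=0) * se_grid).sum()
print(f"posterior mean sigma_A^2 ~ {sa_mean:.3f}, sigma_E^2 ~ {se_mean:.3f}")
```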
Procedia PDF Downloads 284
23169 Evaluating the Total Costs of a Ransomware-Resilient Architecture for Healthcare Systems
Authors: Sreejith Gopinath, Aspen Olmsted
Abstract:
This paper is based on our previous work that proposed a risk-transference-based architecture for healthcare systems to store sensitive data outside the system boundary, rendering the system unattractive to would-be bad actors. This architecture also allows a compromised system to be abandoned and a new system instance spun up in place to ensure business continuity without paying a ransom or engaging with a bad actor. This paper delves into the details of various attacks we simulated against the prototype system. In the paper, we discuss at length the time and computational costs associated with storing and retrieving data in the prototype system, abandoning a compromised system, and setting up a new instance with existing data. Lastly, we simulate some analytical workloads over the data stored in our specialized data storage system and discuss the time and computational costs associated with running analytics over data in a specialized storage system outside the system boundary. In summary, this paper discusses the total costs of data storage, access, and analytics incurred with the proposed architecture.
Keywords: cybersecurity, healthcare, ransomware, resilience, risk transference
Procedia PDF Downloads 133
23168 Georgian Social Security System Compatibility with EU Requirements
Authors: Nino Grigolaia
Abstract:
Introduction: The article discusses the experience of the EU in the social field, analyzes the peculiarities of the functioning of the social system in Georgia, and reveals the priority and importance of social policy. Methodology: Different research methods are applied in the paper, including induction, deduction, analysis, synthesis, analogy, correlation, and statistical observation. Main Findings: Based on an analysis of social security reforms in Georgia, the main systemic problems are identified, and recommendations are developed concerning the components of the social security system, the integration of the social security field into a unified insurance system, the formation of a national social system, the improvement of the legislative and regulatory framework of social protection, and the adoption of foreign experience. Conclusion: The article concludes that the social protection system in Georgia is at an early stage of development, significantly affected by factors such as a high level of unemployment, low pensions, and a large number of families living under the poverty line. Accordingly, the study of the social security problem in Georgia remains relevant. Based on the analysis, appropriate suggestions in the field of social security are made, and relevant recommendations are proposed.
Keywords: social security, social system, social policy, social security models
Procedia PDF Downloads 147
23167 Biomechanical Modeling, Simulation, and Comparison of Human Arm Motion to Mitigate Astronaut Task during Extra Vehicular Activity
Authors: B. Vadiraj, S. N. Omkar, B. Kapil Bharadwaj, Yash Vardhan Gupta
Abstract:
During manned exploration of space, missions will require astronaut crewmembers to perform Extra Vehicular Activities (EVAs) for a variety of tasks. These EVAs take place after long periods of operations in space, and in and around unique vehicles, space structures, and systems. Considering the remoteness and time spans in which these vehicles will operate, EVA system operations should utilize common worksites, tools, and procedures as much as possible to increase the efficiency of training and proficiency in operations. All of the preparations need to be carried out based on studies of astronaut motions. Until now, development and training activities associated with the planned EVAs in the Russian and U.S. space programs have relied almost exclusively on physical simulators. These experimental tests are expensive and time-consuming. During the past few years, a strong increase has been observed in the use of computer simulations due to the fast developments in computer hardware and simulation software. Based on this idea, an effort to develop a computational simulation system to model human dynamic motion for EVA was initiated. This study focuses on the simulation of an astronaut moving orbital replaceable units into worksites or removing them from worksites. Our physics-based methodology helps fill the gap in quantitative analysis of astronaut EVA by providing a multisegment human arm model. The simulation work described in the study improves on the realism of previous efforts, incorporating joint stops to account for the physiological limits of range of motion. To demonstrate the utility of this approach, the human arm model is simulated virtually using ADAMS/LifeMOD® software. The kinematics of the astronaut's task are studied through joint angles and torques. The simulation results are validated against a numerical simulation based on the Newton-Euler method. The torques determined using the mathematical model are compared among subjects to assess the smoothness and consistency of the task performed. We conclude that, due to the uncertain nature of exploration-class EVA, a virtual model developed using a multibody dynamics approach offers significant advantages over traditional human modeling approaches.
Keywords: extra vehicular activity, biomechanics, inverse kinematics, human body modeling
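An illustrative inverse-dynamics sketch for a planar two-link arm (shoulder and elbow), computing joint torques from a prescribed joint-angle trajectory with the standard rigid-body equations of motion. The segment masses, lengths, and inertias are assumed values, not the study's anthropometric data, and gravity is set to zero for the orbital case.

```python
import numpy as np

# Assumed segment parameters (upper arm = link 1, forearm + hand = link 2).
m1, m2 = 2.0, 1.5            # kg
l1, l2 = 0.30, 0.35          # m, segment lengths
lc1, lc2 = 0.15, 0.17        # m, centre-of-mass distances
I1, I2 = 0.02, 0.02          # kg m^2, inertias about the centre of mass
g = 0.0                      # microgravity; use 9.81 for a ground-based check

def inverse_dynamics(q, dq, ddq):
    """Joint torques [tau1, tau2] for joint angles q, velocities dq, accelerations ddq."""
    q1, q2 = q
    c2, s2 = np.cos(q2), np.sin(q2)
    M = np.array([
        [m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2,
         m2*(lc2**2 + l1*lc2*c2) + I2],
        [m2*(lc2**2 + l1*lc2*c2) + I2,
         m2*lc2**2 + I2]])
    h = -m2*l1*lc2*s2
    C = np.array([[h*dq[1], h*(dq[0] + dq[1])],
                  [-h*dq[0], 0.0]])
    G = np.array([(m1*lc1 + m2*l1)*g*np.cos(q1) + m2*lc2*g*np.cos(q1 + q2),
                  m2*lc2*g*np.cos(q1 + q2)])
    return M @ ddq + C @ dq + G

# Prescribed task: a smooth 1 s reach; derivatives obtained by finite differences.
t = np.linspace(0, 1, 101)
q_traj = np.stack([0.3*np.sin(np.pi*t), 0.6*(1 - np.cos(np.pi*t))], axis=1)
dq_traj = np.gradient(q_traj, t, axis=0)
ddq_traj = np.gradient(dq_traj, t, axis=0)
torques = np.array([inverse_dynamics(q, dq, ddq)
                    for q, dq, ddq in zip(q_traj, dq_traj, ddq_traj)])
print(torques.max(axis=0))   # peak shoulder and elbow torques
```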
Procedia PDF Downloads 342
23166 Parameter Selection for Computationally Efficient Use of the BFVrns Fully Homomorphic Encryption Scheme
Authors: Cavidan Yakupoglu, Kurt Rohloff
Abstract:
In this study, we aim to provide a novel parameter selection model for the BFVrns scheme, which is one of the prominent FHE schemes. Parameter selection in lattice-based FHE schemes is a practical challenge for experts and non-experts alike. Toward a solution to this problem, we introduce a hybrid, principles-based approach that combines theoretical and experimental analyses. To begin, we use regression analysis to examine the effect of the parameters on performance and security. The fact that the FHE parameters induce different behaviors in performance, security, and the Ciphertext Expansion Factor (CEF) makes the process of parameter selection more challenging. To address this issue, we use a multi-objective optimization algorithm to select the optimum parameter set for performance, CEF, and security at the same time. As a result of this optimization, we obtain an improved parameter set with better performance at a given security level, ensuring correctness and security against lattice attacks with at least 128-bit security. Our result enables, on average, an approximately 5x smaller CEF and mostly better performance in comparison to the parameter sets given in [1]. This approach can be considered a semi-automated parameter selection. These studies are conducted using the PALISADE homomorphic encryption library, which is a well-known HE library.
Keywords: lattice cryptography, fully homomorphic encryption, parameter selection, LWE, RLWE
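A hedged sketch of the selection step only: filter candidate parameter sets for at least 128-bit security, then keep the Pareto-optimal sets with respect to runtime and CEF. The candidate names and scores are placeholders, not outputs of the paper's regression models or of any lattice security estimator.

```python
# Each candidate: (name, runtime_ms, cef, security_bits) -- placeholder numbers.
candidates = [
    ("set-A", 12.0, 28.0, 128),
    ("set-B", 15.0, 22.0, 130),
    ("set-C", 10.0, 35.0, 110),   # rejected: below 128-bit security
    ("set-D", 20.0, 20.0, 192),
]

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in at least one."""
    return (a[1] <= b[1] and a[2] <= b[2]) and (a[1] < b[1] or a[2] < b[2])

secure = [c for c in candidates if c[3] >= 128]
pareto = [c for c in secure if not any(dominates(o, c) for o in secure if o is not c)]
print(pareto)   # non-dominated (runtime, CEF) trade-offs at >= 128-bit security
```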
Procedia PDF Downloads 157
23165 Unseen Classes: The Paradigm Shift in Machine Learning
Authors: Vani Singhal, Jitendra Parmar, Satyendra Singh Chouhan
Abstract:
Unseen class discovery has become an important capability for machine learning systems confronted with new classes. Unseen classes are classes on which the machine learning model has not been trained. With the advancement of technology and AI replacing humans, the amount of data has increased to the next level, so when deploying a model on real-world examples, we come across new, unseen classes. Our aim is to find the number of unseen classes by using a hierarchical active learning algorithm. The algorithm is based on hierarchical clustering as well as active sampling. The number of clusters obtained at the end gives the number of classes, and some of these clusters correspond to unseen classes. Instead of first discovering the unseen classes and then counting them, we calculate their number directly by applying the algorithm. The dataset used is for intent classification, where the target is the intent of the corresponding query. We conclude that when the machine learning model encounters real-world data, it can automatically find the number of unseen classes. In future work, we intend to label these unseen classes correctly.
Keywords: active sampling, hierarchical clustering, open world learning, unseen class discovery
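A compressed view of the counting step, assuming query embeddings are available: agglomerative clustering with a distance threshold, so the number of clusters is not fixed in advance and can exceed the number of known classes. Random vectors stand in for real intent embeddings, the threshold is an assumed value, and the active-sampling loop that refines the result is not shown.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Stand-in embeddings: 3 known intents + 2 unseen intents, 40 queries each.
centers = rng.normal(size=(5, 32)) * 4.0
X = np.vstack([c + rng.normal(size=(40, 32)) for c in centers])

clusterer = AgglomerativeClustering(n_clusters=None, distance_threshold=25.0,
                                    linkage="ward")
labels = clusterer.fit_predict(X)
print(f"estimated number of classes (seen + unseen): {clusterer.n_clusters_}")
# Clusters that do not match any training label would be counted as unseen classes.
```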
Procedia PDF Downloads 172
23164 Prediction and Analysis of Human Transmembrane Transporter Proteins Based on SCM
Authors: Hui-Ling Huang, Tamara Vasylenko, Phasit Charoenkwan, Shih-Hsiang Chiu, Shinn-Ying Ho
Abstract:
The knowledge of human transporters is still limited due to the technically demanding crystallization procedure required for the structural characterization of transporters by spectroscopic methods. It is therefore desirable to develop bioinformatics tools for the effective analysis of available sequences in order to identify human transmembrane transporter proteins (HMTPs). This study proposes a scoring card method (SCM) based approach for predicting HMTPs. We estimated a set of propensity scores of dipeptides to be HMTPs using SCM from the training dataset (HTS732), consisting of 366 HMTPs and 366 non-HMTPs. SCM, using the estimated propensity scores of the 20 amino acids and 400 dipeptides, has a training accuracy of 87.63% and a test accuracy of 66.46%. The five top-ranked dipeptides are LD, NV, LI, KY, and MN, with scores of 996, 992, 989, 987, and 985, respectively. The five amino acids with the highest propensity scores are Ile, Phe, Met, Gly, and Leu, indicating that hydrophobic residues are mostly highly scored. Furthermore, the obtained propensity scores were used to analyze the physicochemical properties of human transporters.
Keywords: dipeptide composition, physicochemical property, human transmembrane transporter proteins, human transmembrane transporters binding propensity, scoring card method
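The scoring step of SCM reduces to a simple computation: a sequence score is the frequency-weighted mean of dipeptide propensity scores, compared against a decision threshold. In this sketch only the five published top dipeptide scores are taken from the abstract; the default score and the threshold are assumed placeholders, not the trained scoring card.

```python
from collections import Counter

# Dipeptide propensity scores; the five top-ranked values come from the abstract,
# while any dipeptide missing from the card defaults to an assumed neutral score.
propensity = {"LD": 996, "NV": 992, "LI": 989, "KY": 987, "MN": 985}
DEFAULT = 500
THRESHOLD = 520          # assumed decision threshold; tuned on training data in practice

def scm_score(sequence):
    """Frequency-weighted mean dipeptide propensity of a protein sequence."""
    dipeps = Counter(sequence[i:i + 2] for i in range(len(sequence) - 1))
    total = sum(dipeps.values())
    return sum(propensity.get(dp, DEFAULT) * n for dp, n in dipeps.items()) / total

seq = "MLDKYLINVMNLLD"    # toy sequence
score = scm_score(seq)
print(score, "-> HMTP" if score >= THRESHOLD else "-> non-HMTP")
```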
Procedia PDF Downloads 369
23163 Engineered Bio-Coal from Pressed Seed Cake for Removal of 2, 4, 6-Trichlorophenol with Parametric Optimization Using Box–Behnken Method
Authors: Harsha Nagar, Vineet Aniya, Alka Kumari, Satyavathi B.
Abstract:
In the present study, engineered bio-coal was produced from pressed seed cake, which is otherwise non-edible in origin. The production process involves slow pyrolysis wherein, based on the optimization of process parameters, a substantial reduction of 77% in the H/C and O/C ratios was achieved with respect to the original values of 1.67 and 0.8, respectively. The bio-coal product was found to have a higher heating value of 29899 kJ/kg, with a surface area of 17 m²/g and a pore volume of 0.002 cc/g. The functional characterization of the bio-coal and its subsequent modification were carried out to enhance its active sites, and it was then used as an adsorbent material for the removal of the 2,4,6-Trichlorophenol (2,4,6-TCP) herbicide from an aqueous stream. The point of zero charge of the bio-coal was found at pH < 3, where its surface is positively charged and attracts anions, resulting in maximum 2,4,6-TCP adsorption at pH 2.0. The parametric optimization of the adsorption process was studied using a Box-Behnken design with the desirability approach. The results showed optimum values of adsorption efficiency of 74.04% and uptake capacity of 118.336 mg/g for an initial concentration of 250 mg/l and a particle size of 0.12 mm, at pH 2.0 and a bio-coal loading of 1 g/L. Negative Gibbs free energy change values indicated the feasibility of 2,4,6-TCP adsorption on the bio-coal, and decreasing ΔG values with the rise in temperature indicated high favourability at low temperatures. The equilibrium modeling results showed that both isotherms (Langmuir and Freundlich) accurately predicted the equilibrium data, which may be attributed to the different affinities of the functional groups of the bio-coal for 2,4,6-TCP removal. Based on the kinetic data modeling, the possible mechanisms for 2,4,6-TCP adsorption are physisorption (pore diffusion, π-π electron donor-acceptor interaction, H-bonding, and van der Waals dispersion forces) and chemisorption (chemical bonding with phenolic and amine groups).
Keywords: engineered biocoal, 2,4,6-trichlorophenol, Box-Behnken design, biosorption
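A minimal sketch of the equilibrium-modeling step mentioned above: fitting the Langmuir and Freundlich isotherms to equilibrium data by non-linear least squares. The (Ce, qe) points below are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic equilibrium data: Ce (mg/L) and qe (mg/g) -- placeholders.
Ce = np.array([5.0, 20.0, 50.0, 100.0, 180.0])
qe = np.array([25.0, 62.0, 90.0, 108.0, 117.0])

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

lang_p, _ = curve_fit(langmuir, Ce, qe, p0=[120, 0.05])
freu_p, _ = curve_fit(freundlich, Ce, qe, p0=[10, 2])
print("Langmuir   qmax, KL:", lang_p)
print("Freundlich KF, n   :", freu_p)
```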
Procedia PDF Downloads 117
23162 Recreation and Environmental Quality of Tropical Wetlands: A Social Media Based Spatial Analysis
Authors: Michael Sinclair, Andrea Ghermandi, Sheela A. Moses, Joseph Sabu
Abstract:
Passively crowdsourced data, such as geotagged photographs from social media, represent an opportunistic source of location-based and time-specific behavioral data for ecosystem services analysis. Such data have innovative applications for environmental management and protection, which are replicable at wide spatial scales and in the context of both developed and developing countries. Here we test one such innovation, based on the analysis of the metadata of online geotagged photographs, to investigate the provision of recreational services by the entire network of wetland ecosystems in the state of Kerala, India. We estimate visitation to individual wetlands state-wide and extend, for the first time to a developing region, the emerging application of cultural ecosystem services modelling using data from social media. The impacts of restoration of wetland areal extension and water quality improvement are explored as a means to inform more sustainable management strategies. Findings show that improving water quality to a level suitable for the preservation of wildlife and fisheries could increase annual visits by 350,000, an increase of 13% in wetland visits state-wide, while restoring previously encroached wetland area could result in a 7% increase in annual visits, corresponding to 49,000 visitors, in the Ashtamudi and Vembanad lakes alone, two large coastal Ramsar wetlands in Kerala. We discuss how passive crowdsourcing of social media data has the potential to improve current ecosystem service analyses and environmental management practices also in the context of developing countries.
Keywords: coastal wetlands, cultural ecosystem services, India, passive crowdsourcing, social media, wetland restoration
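The visitation-estimation step reduced to its core idea: counting photo-user-days (unique user and day combinations) per wetland from geotagged-photo metadata as a proxy for visits. The dataframe below is a made-up stand-in for crawled social media metadata, and the calibration of photo-user-days to actual visitor numbers is not shown.

```python
import pandas as pd

# Stand-in for crawled photo metadata: one row per geotagged photograph.
photos = pd.DataFrame({
    "user_id":  ["u1", "u1", "u2", "u3", "u3", "u2"],
    "taken_on": ["2019-01-03", "2019-01-03", "2019-01-05",
                 "2019-02-11", "2019-02-12", "2019-03-20"],
    "wetland":  ["Vembanad", "Vembanad", "Ashtamudi",
                 "Vembanad", "Vembanad", "Ashtamudi"],
})
photos["taken_on"] = pd.to_datetime(photos["taken_on"]).dt.date

# A photo-user-day = one user photographing at one site on one day (visit proxy).
pud = (photos.drop_duplicates(["user_id", "taken_on", "wetland"])
             .groupby("wetland").size())
print(pud)   # scaled to total visits with an empirical calibration factor in practice
```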
Procedia PDF Downloads 156
23161 Examining Relationship between Resource-Curse and Under-Five Mortality in Resource-Rich Countries
Authors: Aytakin Huseynli
Abstract:
The paper reports the findings of a study which examined the under-five mortality rate among resource-rich countries. Typically, when countries obtain wealth, citizens gain increased wellbeing, and societies with new wealth create equal opportunities for everyone, including vulnerable groups. But scholars claim that this is not the case for developing resource-rich countries, where natural resources become a curse rather than a blessing. Spillovers from the natural resource curse negatively affect the social wellbeing of vulnerable people: they become excluded from mainstream society, and their situation worsens. In order to test this hypothesis, the study compared the under-five mortality rate among resource-rich countries using an independent-sample one-way ANOVA. The data on the under-five mortality rate came from the World Bank. The natural resources considered in this study are oil, gas, and minerals. The list of 67 resource-rich countries was taken from the Natural Resource Governance Institute. The sample was categorized into four groups (low, lower-middle, upper-middle, and high-income countries) based on the income classification of the World Bank. Results revealed a significant difference in the under-five mortality rate between low, lower-middle, upper-middle, and high-income countries (F(3, 29.01) = 33.70, p = .000). To locate the differences among income groups, the Games-Howell test was performed, and it was found that under-five mortality is an issue for low, middle, and upper-middle-income countries but not for high-income countries. The results of this study are in agreement with previous research on the resource curse and the negative effects of resource-based development. The policy implications of the study for social workers, policy makers, academicians, and social development specialists are to raise and discuss issues of marginalization and exclusion of vulnerable groups in developing resource-rich countries and to suggest interventions for avoiding them.
Keywords: children, natural resource, extractive industries, resource-based development, vulnerable groups
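The comparison step in miniature: a one-way ANOVA of under-five mortality across the four income groups with scipy. The numbers are synthetic, and the Games-Howell post-hoc test used in the study would need a dedicated package (e.g., pingouin) and is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic under-five mortality rates (per 1,000 live births) by income group.
low          = rng.normal(75, 15, 18)
lower_middle = rng.normal(45, 12, 20)
upper_middle = rng.normal(20, 6, 17)
high         = rng.normal(6, 2, 12)

f_stat, p_val = stats.f_oneway(low, lower_middle, upper_middle, high)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```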
Procedia PDF Downloads 254
23160 Investigation of Green Dye-Sensitized Solar Cells Based on Natural Dyes
Authors: M. Hosseinnezhad, K. Gharanjig
Abstract:
Natural dyes, extracted from black carrot and bramble, were utilized as photosensitizers to prepare dye-sensitized solar cells (DSSCs). Spectrophotometric studies of the natural dyes in solution and on a titanium dioxide substrate were carried out in order to assess changes in the status of the dyes. The results show a bathochromic shift on the photo-electrode substrate. The chemical binding of the natural dyes at the photo-electrode surface was increased by the chelating effect of the Ti(IV) ions. The cyclic voltammetry results showed that all extracts are suitable for use in DSSCs. Finally, the photochemical performance and stability of DSSCs based on the natural dyes were studied. The DSSCs sensitized by black carrot extract achieve up to Jsc = 1.17 mA cm-2, Voc = 0.55 V, FF = 0.52, and η = 0.34%, whereas bramble extract can reach up to Jsc = 2.24 mA cm-2, Voc = 0.54 V, FF = 0.57, and η = 0.71%. A power conversion efficiency was also obtained for DSSCs using the mixed dyes; the efficiency of cells sensitized with the mixed black carrot and bramble dye is the average of their efficiencies in single-dye DSSCs.
Keywords: anthocyanin, dye-sensitized solar cells, green energy, optical materials
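The reported efficiencies follow from the standard relation η = Jsc · Voc · FF / Pin. The sketch below assumes the standard AM1.5G input power of 100 mW cm-2 (not stated in the abstract), which reproduces the quoted values to within rounding.

```python
def efficiency(jsc_mA_cm2, voc_V, ff, p_in_mW_cm2=100.0):
    """Power conversion efficiency (%) from Jsc, Voc and fill factor."""
    return jsc_mA_cm2 * voc_V * ff / p_in_mW_cm2 * 100.0

print(f"black carrot: {efficiency(1.17, 0.55, 0.52):.2f} %")   # ~0.33 %, reported 0.34 %
print(f"bramble:      {efficiency(2.24, 0.54, 0.57):.2f} %")   # ~0.69 %, reported 0.71 %
```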
Procedia PDF Downloads 245
23159 Energy-Efficient Internet of Things Communications: A Comparative Study of Long-Term Evolution for Machines and Narrowband Internet of Things Technologies
Authors: Nassim Labdaoui, Fabienne Nouvel, Stéphane Dutertre
Abstract:
The Internet of Things (IoT) is emerging as a crucial communication technology for the future. Many solutions have been proposed, and among them, licensed operators have put forward LTE-M and NB-IoT. However, implementing these technologies requires a good understanding of the device energy requirements, which can vary depending on the coverage conditions. In this paper, we investigate the power consumption of LTE-M and NB-IoT devices using Ublox SARA-R422S modules, based on relevant standards from two French operators. The measurements were conducted under different coverage conditions, and we also present an empirical consumption model based on the different states of the radio modem as per the RRC protocol specifications. Our findings indicate that these technologies can achieve a five-year operational battery life under certain conditions. Moreover, we conclude that the size of the transmitted data does not have a significant impact on the total power consumption of the device under favorable coverage conditions; however, it can quickly influence the battery life of the device under harsh coverage conditions. Overall, this paper offers insights into the power consumption of LTE-M and NB-IoT devices and provides useful information for those considering the use of these technologies.
Keywords: internet of things, LTE-M, NB-IoT, MQTT, cellular IoT, power consumption
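A simplified version of a state-based consumption model: the energy per reporting cycle is the sum over RRC-related modem states of current × voltage × time, from which a battery-life estimate follows. The state currents, durations, and battery capacity below are illustrative assumptions, not the SARA-R422S measurements from the paper.

```python
# (current_mA, duration_s) per modem state for one hourly reporting cycle -- assumed values.
states = {
    "tx_connected": (120.0, 2.0),     # uplink transmission
    "rx_connected": (45.0, 1.0),      # downlink / acknowledgements
    "idle_edrx":    (1.2, 57.0),      # idle with eDRX before release
    "psm":          (0.005, 3540.0),  # power saving mode until the next report
}
VOLTAGE = 3.6                         # V
BATTERY_WH = 3.6 * 2.4                # e.g. an assumed 2400 mAh cell

energy_mJ = sum(i_mA * VOLTAGE * t_s for i_mA, t_s in states.values())  # mJ per cycle
cycles_per_day = 24                                                      # hourly reports
daily_Wh = energy_mJ * cycles_per_day / 3.6e6
print(f"{daily_Wh*1000:.2f} mWh/day -> {BATTERY_WH / daily_Wh / 365:.1f} years (radio only)")
```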
Procedia PDF Downloads 142
23158 A Deep Learning-Based Pedestrian Trajectory Prediction Algorithm
Authors: Haozhe Xiang
Abstract:
With the rise of the Internet of Things era, intelligent products are gradually integrating into people's lives. Pedestrian trajectory prediction has become a key issue, crucial for the motion path planning of intelligent agents such as autonomous vehicles, robots, and drones. In the current technological context, deep learning is becoming increasingly sophisticated and is gradually replacing traditional models. Pedestrian trajectory prediction algorithms combining neural networks and attention mechanisms have significantly improved prediction accuracy. Based on in-depth research on deep learning and pedestrian trajectory prediction algorithms, this article focuses on modeling the physical environment and learning the time dependence of historical trajectories. At the same time, social interaction between pedestrians and scene interaction between pedestrians and the environment are handled. An improved pedestrian trajectory prediction algorithm is proposed by analyzing the existing model architectures. With the help of these improvements, acceptable predicted trajectories were successfully obtained. Experiments on public datasets have demonstrated the algorithm's effectiveness and achieved acceptable results.
Keywords: deep learning, graph convolutional network, attention mechanism, LSTM
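A bare-bones PyTorch baseline of the sequence-learning part only: an LSTM encodes the observed positions and a linear head regresses the future ones. The attention mechanism, social interaction, and scene modeling discussed in the article are deliberately omitted, and the tensors here are random stand-ins for real trajectory data.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Predict pred_len future (x, y) points from obs_len observed points."""
    def __init__(self, obs_len=8, pred_len=12, hidden=64):
        super().__init__()
        self.pred_len = pred_len
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, pred_len * 2)

    def forward(self, obs):                    # obs: (batch, obs_len, 2)
        _, (h, _) = self.lstm(obs)             # h: (1, batch, hidden)
        out = self.head(h.squeeze(0))          # (batch, pred_len * 2)
        return out.view(-1, self.pred_len, 2)  # (batch, pred_len, 2)

model = TrajectoryLSTM()
obs = torch.randn(16, 8, 2)                    # a random mini-batch of observed tracks
future = torch.randn(16, 12, 2)                # ground-truth future positions
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(obs), future)
loss.backward()
optim.step()
print(loss.item())
```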
Procedia PDF Downloads 71
23157 Time-Frequency Feature Extraction Method Based on Micro-Doppler Signature of Ground Moving Targets
Authors: Ke Ren, Huiruo Shi, Linsen Li, Baoshuai Wang, Yu Zhou
Abstract:
Since discriminative features are required for ground moving target classification, we propose a new feature extraction method based on the micro-Doppler signature. Firstly, time-frequency analysis of the measured data indicates that the time-frequency spectrograms of the three kinds of ground moving targets, i.e., a single walking person, two people walking, and a moving wheeled vehicle, are discriminative. Then, a three-dimensional time-frequency feature vector is extracted from the time-frequency spectrograms to describe these differences. Finally, a Support Vector Machine (SVM) classifier is trained with the proposed three-dimensional feature vector. The classification accuracy in categorizing the measured data into the three kinds of ground moving targets is found to be over 96%, which demonstrates the good discriminative ability of the proposed micro-Doppler features.
Keywords: micro-doppler, time-frequency analysis, feature extraction, radar target classification
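The pipeline in outline: compute a time-frequency spectrogram of the radar return, reduce it to a small feature vector, and train an SVM. The three features below (Doppler spread, spectral entropy, and variability of the dominant Doppler frequency) are illustrative stand-ins, since the paper's exact three-dimensional feature vector is not specified here, and the radar returns are synthetic.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

fs = 1000.0

def features(x):
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
    p = Sxx / Sxx.sum()
    f_mean = (p.sum(axis=1) * f).sum()
    bandwidth = np.sqrt((p.sum(axis=1) * (f - f_mean) ** 2).sum())   # Doppler spread
    spec = Sxx.mean(axis=1); spec /= spec.sum()
    entropy = -(spec * np.log(spec + 1e-12)).sum()                   # spectral entropy
    periodicity = f[Sxx.argmax(axis=0)].std()                        # micro-Doppler modulation
    return [bandwidth, entropy, periodicity]

rng = np.random.default_rng(0)
def synthetic_return(mod_freq, mod_depth, n=4096):
    """Toy micro-Doppler signal: sinusoidally modulated Doppler tone plus noise."""
    t = np.arange(n) / fs
    phase = 2*np.pi*(100*t + mod_depth*np.sin(2*np.pi*mod_freq*t))
    return np.cos(phase) + 0.3*rng.normal(size=n)

X = [features(synthetic_return(mf, md)) for mf, md in [(1.8, 8), (2.0, 9), (3.5, 15),
                                                       (3.8, 16), (0.3, 2), (0.4, 2.5)]]
y = [0, 0, 1, 1, 2, 2]   # 0: one walker, 1: two walkers, 2: wheeled vehicle (toy labels)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([features(synthetic_return(1.9, 8.5))]))
```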
Procedia PDF Downloads 405
23156 The Phonemic Inventory of Tenyidie Affricates: An Acoustic Study
Authors: NeisaKuonuo Tungoe
Abstract:
Tenyidie, also known as Angami, is spoken by the Angami tribe of Nagaland, North-East India, bordering Myanmar (Burma). It belongs to the Tibeto-Burman language group, falling under the Kuki-Chin-Naga sub-family. Tenyidie studies have seen scattered attempts at explaining the phonemic inventory of the language. Different scholars have variously emphasized the grammar or the history of Tenyidie. Many of these claims have been stimulating, but they were often based on a small amount of merely suggestive data or on auditory perception only. The principal objective of this paper is to analyse the affricate segments of Tenyidie acoustically. The inventory of Tenyidie comprises the following categories: plosives, nasals, affricates, laterals, rhotics, fricatives, semivowels, and vowels. In all, there are sixty phonemes in the inventory. As mentioned above, the only prominent descriptions of Tenyidie affricates so far rest on auditory perception. This study aims to lay out the affricate segments based only on acoustic evidence. Seven affricates are found in Tenyidie: 1) Voiceless Labiodental Affricate - / pf /, 2) Voiceless Aspirated Labiodental Affricate - / pfh /, 3) Voiceless Alveolar Affricate - / ts /, 4) Voiceless Aspirated Alveolar Affricate - / tsh /, 5) Voiced Alveolar Affricate - / dz /, 6) Voiceless Post-Alveolar Affricate - / tʃ /, and 7) Voiced Post-Alveolar Affricate - / dʒ /. Since the study is based on the acoustic features of affricates, five informants were asked to record Tenyidie phonemes and English phonemes. Throughout the analysis of the recorded data, PRAAT, a scientific software program that has become indispensable for the analysis of speech in phonetics, was used as the main software. The data were then used in a comparative study between Tenyidie and English affricates. Comparisons have also been drawn between this study and the work of another author who has stated that there are only six affricates in Tenyidie. The study is detailed regarding the specifics of the data, with accounts of duration and other acoustic cues, and the data will be presented in the form of spectrograms. Since there are no other acoustic studies of Tenyidie, this study will be the first in a long line of acoustic research on the language.
Keywords: tenyidie, affricates, praat, phonemic inventory
Procedia PDF Downloads 417
23155 Imports of Intermediate Inputs: A Study of the Main Research Streams
Authors: Marta Fernández Olmos, Jorge Fleta, Talia Gómez
Abstract:
This article shares the results of a temporal analysis of the literature on imports of intermediate inputs based on review techniques. The aim of this paper is to identify the main lines of research, their trends and topics, and the research agenda. The internationalization field has attracted considerable attention from scholars and practitioners in recent years and has grown rapidly, resulting in a large body of knowledge scattered across different areas of specialization. However, there are no studies that are entirely restricted to imports, intermediate inputs, and innovation performance. The performance analysis provides an updated overview of the evolution of the importing literature from 1970 to 2022 and quantitatively identifies the most productive and influential journals, articles, authors, and countries. The results show that the current topics are mainly based on modes of importing, the innovation performance of importing intermediate inputs, and collaborations. Future lines of research are identified from topics with lower co-occurrence, such as artificial intelligence, entrepreneurship, and alternative business models such as multinational enterprises (MNEs) versus non-MNEs.
Keywords: imports, intermediate inputs, innovation performance, review
Procedia PDF Downloads 74
23154 The Interrelations between Niemeyer's Works and the Concept of Typology: A Computer Based Analysis of Form and Structure
Authors: Aline M. C. Santoro, João C. Pantoja, Eduardo P. Rossetti
Abstract:
While the aim of the modernist movement was to deny known typology, the creation of a new formal language also gave it new meaning, which was now related to Form. This is specifically true in the modern capital of Brazil, where Niemeyer sought to demonstrate the manner in which the new materials available, such as reinforced concrete, were able to produce innovative forms. With this study, we aim to demonstrate the relationship between Niemeyer's forms and the topological typology known as tessellation, through the presentation of two case studies, the Monument to Caxias and the Saint George Orthodox Church. First, we present the definition of Form, especially in relation to the works of Niemeyer, seeking to identify in them the concepts presented by Moussavi. Afterwards, we use a computer-based approach to study and model the forms of two of his buildings with the McNeel Rhinoceros program, where, with the aid of diagrams and renderings, we are able to represent their organic forms clearly and legibly and further understand their structural systems. When we recognise the concept of typology as a starting point for structural form, it can be concluded that the case studies presented here are encompassed by the typology presented by Moussavi, since they derive from his basic structural systems.
Keywords: form, Niemeyer, structure, typology, topology
Procedia PDF Downloads 198
23153 An Efficient Resource Management Algorithm for Mobility Management in Wireless Mesh Networks
Authors: Mallikarjuna Rao Yamarthy, Subramanyam Makam Venkata, Satya Prasad Kodati
Abstract:
The main objective of the proposed work is to reduce the overall network traffic incurred by mobility management and the packet delivery cost, and to increase resource utilization. The proposed algorithm, An Efficient Resource Management Algorithm (ERMA) for mobility management in wireless mesh networks, relies on a pointer-based mobility management scheme. Whenever a mesh client moves from one mesh router to another, a pointer is set up dynamically between the previous mesh router and the current mesh router based on distance constraints. The algorithm is evaluated for the signaling cost, data delivery cost, and total communication cost performance metrics, and is demonstrated for both internet sessions and intranet sessions. The proposed algorithm yields significantly better performance in terms of signaling cost, data delivery cost, and total communication cost.
Keywords: data delivery cost, mobility management, pointer forwarding, resource management, wireless mesh networks
Procedia PDF Downloads 367
23152 Untargeted Small Metabolite Identification from Thermally Treated Tualang Honey
Authors: Lee Suan Chua
Abstract:
This study investigated the effects of thermal treatment on a Tualang honey sample in terms of honey colour and heat-induced small metabolites. The heating process was carried out in a temperature-controlled water bath at 90 °C for 4 hours. The honey samples were put in cylindrical tubes with dimensions of 1 cm diameter and 10 cm length for homogeneous heat transfer. The results showed that the thermal treatment produced not only hydroxymethylfurfural but also other harmful substances such as phthalic anhydride and radiolytic byproducts. The degradation of honey protein was indicated by the detection of free amino acids such as cysteine and phenylalanine in the heat-treated honey samples. Sugar dehydration also occurred, since fragmented di-galactose was identified based on the presence of characteristic ions in the mass fragmentation pattern. The honey colour was found to darken as the heating duration increased up to 4 hours, with an increment of approximately 60 mm Pfund and a colour change rate of 14.8 mm Pfund per hour. Based on principal component analysis, the chemical profile of Tualang honey was significantly altered after 2 hours of heating at 90 °C.
Keywords: honey colour, hydroxymethylfurfural, thermal treatment, tualang honey
Procedia PDF Downloads 376
23151 Robust Control of Traction Motors based Electric Vehicles by Means of High-Gain
Authors: H. Mekki, A. Djerioui, S. Zeghlache, L. Chrifi-Alaoui
Abstract:
Induction motors (IMs) are nowadays widely used in industrial applications, especially in electric vehicles (EVs) and traction locomotives, due to their high efficiency, high speed, and long lifetime. However, since EV motors are easily influenced by parameter uncertainties and variations as well as external load disturbances, robust control techniques have received considerable attention during the past few decades. This paper presents a robust controller design based on sliding mode control (SMC) and a high-gain flux observer (HGO) for induction motor based electric vehicle (EV) drives. The control technique is obtained by combining the field-oriented and sliding mode control strategies and presents remarkable dynamic performance as well as good robustness with respect to the EV drive load torque. A high-gain flux observer is also presented and associated with the controller in order to achieve sensorless control by estimating the rotor flux using only measurements of the stator voltages and currents. Simulation results are provided to evaluate the consistency and to show the effectiveness of the proposed SMC strategy, as well as the performance of the HGO, for the electric vehicle system.
Keywords: electric vehicles, sliding mode control, induction motor drive, high gain observer
Procedia PDF Downloads 74
23150 Numerical Investigation Including Mobility Model for the Performances of Piezoresistive Sensors
Authors: Abdelaziz Beddiaf
Abstract:
In this work, we present an analysis based on the study of mobility, which is a very important electrical parameter of a piezoresistor and is directly linked to the piezoresistive effect in piezoresistive pressure sensors. We determine how temperature affects mobility when an electric potential is applied. For this, a theoretical approach based on the mobility in a p-type silicon piezoresistor is developed and combined with a finite difference model for self-heating. The evolution of mobility over time has thus been established for different doping levels and for the temperature rise caused by self-heating, using a numerical model combined with the mobility model. Furthermore, mobility has been calculated for several geometrical parameters of the sensor, such as the membrane side length and thickness, and it is also computed as a function of the bias voltage. It was observed that mobility is strongly affected by the temperature rise induced by the applied potential when the sensor is actuated for a prolonged time, causing drift in the output response of the sensor. Finally, this work makes it possible to predict the temperature behavior due to self-heating and to mitigate this effect by optimizing the geometric properties of the device and by reducing the voltage applied to the bridge.
Keywords: sensors, piezoresistivity, mobility, bias voltage
Procedia PDF Downloads 92
23149 A Geographical Framework for Studying the Territorial Sustainability Based on Land Use Change
Authors: Miguel Ramirez, Ivan Lizarazo
Abstract:
The emergence of various interpretations of sustainability, including the weak and strong paradigms, can be traced back to the definition of sustainable development provided in the 1987 Brundtland report and the subsequent evolution of the sustainability concept. However, limited scholarly attention has been given to clarifying the concept of sustainability within the theoretical and conceptual framework of geography. The discipline has predominantly focused on understanding the diverse conceptions of sustainability within its epistemological boundaries, resulting in tensions between sustainability paradigms and their associated dimensions, including the incorporation of political perspectives, with particular emphasis on the epistemology of environmental geography. In response to this gap, a conceptual framework for sustainability is proposed that effectively integrates spatial and territorial concepts. This framework aims to enhance geography's role in contributing to sustainability by utilizing land system theory, which is based on the dynamics of land use change. Such an integrated conceptual framework enables the incorporation of methodological tools such as remote sensing, encompassing various earth observations and fusion methods, and supervised classification techniques. Additionally, it seeks better integration of socioecological information, thereby capturing essential population-related features.
Keywords: geography, sustainability, land change science, territorial sustainability
Procedia PDF Downloads 80
23148 Determining the Extent and Direction of Relief Transformations Caused by Ski Run Construction Using LIDAR Data
Authors: Joanna Fidelus-Orzechowska, Dominika Wronska-Walach, Jaroslaw Cebulski
Abstract:
Mountain areas are very often exposed to numerous transformations connected with the development of tourist infrastructure. In the mountain areas of Poland, ski tourism is very popular, so agricultural areas are often transformed into tourist areas. The construction of new ski runs can change the direction and rate of slope development. The main aim of this research was to determine the geomorphological and hydrological changes within slopes caused by ski run construction. The study was conducted in the Remiaszów catchment in the Inner Polish Carpathians (southern Poland). The mean elevation of the catchment is 859 m a.s.l., and the maximum is 946 m a.s.l. The surface area of the catchment is 1.16 km2, of which 16.8% is occupied by the two studied ski runs, constructed in 2014 and 2015. In order to determine the relief transformations connected with the new ski run construction, high-resolution LIDAR data were analyzed. The general relief changes in the studied catchment were determined on the basis of ALS (Airborne Laser Scanning) data obtained before (2013) and after (2016) ski run construction. Based on the two sets of ALS data, a digital elevation model of differences (DoD) was created, which made it possible to quantify the relief changes in the entire studied catchment. Additionally, cross and longitudinal profiles were calculated within slopes where the new ski runs were built. Detailed data on relief changes within selected test surfaces were obtained from TLS (Terrestrial Laser Scanning). Hydrological changes within the analyzed catchment were determined based on the convergence and divergence index. The study shows that the construction of the new ski runs caused significant geomorphological and hydrological changes in the entire studied catchment; however, the most important changes were identified within the ski slopes. After the construction of the ski runs, the entire catchment surface lowered by about 0.02 m. Hydrological changes in the studied catchment mainly led to the interruption of surface runoff pathways and changes in runoff direction and geometry.
Keywords: hydrological changes, mountain areas, relief transformations, ski run construction
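The DoD step in its simplest form: subtract the pre-construction DEM from the post-construction DEM cell by cell and summarize the elevation and volume change. The file names and common grid are assumptions, and the uncertainty thresholding normally applied to DoDs is not shown.

```python
import numpy as np
import rasterio

# Assumed file names for the ALS-derived DEMs (same grid and extent).
with rasterio.open("dem_2013.tif") as pre, rasterio.open("dem_2016.tif") as post:
    z_pre = pre.read(1).astype(float)
    z_post = post.read(1).astype(float)
    cell_area = abs(pre.transform.a * pre.transform.e)   # cell size in map units^2

dod = z_post - z_pre                       # DEM of Difference: + deposition, - erosion
valid = np.isfinite(dod)
print("mean elevation change [m]:", dod[valid].mean())
print("net volume change [m^3]:", dod[valid].sum() * cell_area)
```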
Procedia PDF Downloads 143
23147 Investigation of Steel-Concrete Composite Bridges under Blasting Loads Based on Slope Reflection
Authors: Yuan Li, Yitao Han, Zhao Zhu
Abstract:
In this paper, the effect of blasting loads on steel-concrete composite bridges has been investigated considering the slope reflection effect. Reasonable values of girder size, plate thickness, stiffening ribs, and other design parameters were selected according to design specifications. The modified RHT (Riedel-Hiermaier-Thoma) model was used as the constitutive relation in the analyses. In order to simulate the slope reflection effect, the slope of the bridge was precisely built into the model. Different blasting conditions, including top, middle, and bottom explosions, were simulated. The multi-Euler domain method, based on fully coupled Lagrange and Euler models, was adopted for the structural analysis of the explosion process using the commercial software AUTODYN. The results showed that the explosion overpressure was increased by 3006, 879, and 449 kPa for explosions occurring at the top, middle, and bottom of the slope, respectively. At the same time, due to the energy accumulation and transmission dissipation caused by slope reflection, the corresponding yield lengths of the steel beams were increased by 8, 0, and 5 m, respectively.
Keywords: steel-concrete composite bridge, explosion damage, slope reflection, blasting loads, RHT
Procedia PDF Downloads 96
23146 Creation of Computerized Benchmarks to Facilitate Preparedness for Biological Events
Abstract:
Introduction: Communicable diseases and pandemics pose a growing threat to the well-being of the global population. A vital component of protecting public health is the creation and sustenance of continuous preparedness for such hazards. A joint Israeli-German task force was deployed in order to develop an advanced tool for the self-evaluation of emergency preparedness for various types of biological threats. Methods: Based on a comprehensive literature review and interviews with leading content experts, an evaluation tool was developed based on quantitative and qualitative parameters and indicators. A modified Delphi process was used to achieve consensus among over 225 experts from both Germany and Israel concerning the items to be included in the evaluation tool. The validity and applicability of the tool for medical institutions were examined in a series of simulation and field exercises. Results: Over 115 German and Israeli experts reviewed and examined the proposed parameters as part of the modified Delphi cycles. A consensus of over 75% of experts was attained for 183 out of 188 items. The relative importance of each parameter was rated as part of the Delphi process in order to define its impact on overall emergency preparedness. The parameters were integrated into computerized, web-based software that enables the calculation of emergency preparedness scores for biological events. Conclusions: The parameters developed in the joint German-Israeli project serve as benchmarks that delineate actions to be implemented in order to create and maintain ongoing preparedness for biological events. The computerized evaluation tool enables continuous monitoring of the level of readiness, so that strengths and gaps can be identified and corrected appropriately. Adoption of such a tool is recommended as an integral component of quality assurance of public health and safety.
Keywords: biological events, emergency preparedness, bioterrorism, natural biological events
Procedia PDF Downloads 423
23145 Probabilistic Building Life-Cycle Planning as a Strategy for Sustainability
Authors: Rui Calejo Rodrigues
Abstract:
Building refurbishing and maintenance is a major area of knowledge that is ultimately left to user/occupant criteria. The optimization of the service life of a building needs a special background to be assessed, as it is one of those concepts that requires proficiency to be implemented. ISO 15686-2, Buildings and constructed assets - Service life planning - Part 2: Service life prediction procedures, states a factorial method based on deterministic data for the life span of building components. A deterministic approach has major consequences: users/occupants cannot perceive the end of a component's life span and simply act on deterministic periods, so costly and resource-consuming solutions fail to meet global sustainability targets. Considering the estimated 2 thousand million conventional buildings in the world, submitting them to a probabilistic method for service life planning rather than a deterministic one would provide an immense amount of resource savings. Since 1989, the research team, now the CEES (Center for Building in Service Studies), has developed a methodology based on the Monte Carlo method for a probabilistic approach to the life span of building components, costs, and service life care time spans. The research question of this paper deals with the importance of a probabilistic approach to building life planning compared with deterministic methods. The mathematical model developed for the probabilistic building lifespan approach is presented, and experimental data are obtained to be compared with deterministic data. Assuming that a building's life cycle depends largely on component replacement, this methodology allows conclusions on the global impact of fixed replacement methodologies such as those resulting from the use of deterministic models. Major conclusions based on conventional building estimates are presented and evaluated under a sustainability perspective.
Keywords: building components life cycle, building maintenance, building sustainability, Montecarlo Simulation
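A small Monte Carlo sketch of the core idea: sample component service lives from a distribution instead of using one fixed value, and compare the expected number of replacements over a planning horizon with the deterministic count. The components, distributions, and horizon are illustrative assumptions, not the CEES data or model.

```python
import numpy as np

rng = np.random.default_rng(0)
HORIZON = 60          # years of building operation
N_SIM = 10_000        # Monte Carlo runs

# Illustrative components: (deterministic life, lognormal mean of log-life, sigma).
components = {
    "roof membrane":  (20, np.log(22), 0.25),
    "window frames":  (30, np.log(33), 0.20),
    "exterior paint": (8,  np.log(9),  0.35),
}

for name, (det_life, mu, sigma) in components.items():
    det_replacements = HORIZON // det_life
    counts = []
    for _ in range(N_SIM):
        t, n = 0.0, 0
        while True:
            t += rng.lognormal(mu, sigma)   # sampled service life of each installed unit
            if t > HORIZON:
                break
            n += 1
        counts.append(n)
    print(f"{name:15s} deterministic: {det_replacements}  "
          f"probabilistic mean: {np.mean(counts):.2f}")
```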
Procedia PDF Downloads 205
23144 Application of Local Mean Decomposition for Rolling Bearing Fault Diagnosis Based On Vibration Signals
Authors: Toufik Bensana, Slimane Mekhilef, Kamel Tadjine
Abstract:
Vibration analysis has been frequently applied in the condition monitoring and fault diagnosis of rolling element bearings. Unfortunately, the vibration signals collected from a faulty bearing are generally non-stationary and nonlinear, with strong noise interference, so it is essential to extract the fault features correctly. In this paper, a numerical analysis method based on local mean decomposition (LMD) is proposed. LMD decomposes the signal into a series of product functions (PFs), each of which is the product of an envelope signal and a purely frequency-modulated (FM) signal. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the purely frequency-modulated signal is the instantaneous frequency (IF). After that, the fault characteristic frequency of the rolling bearing can be extracted by performing spectrum analysis on the instantaneous amplitude of the PF component containing the dominant fault information. The results show the effectiveness of the proposed technique in the fault detection and diagnosis of rolling element bearings.
Keywords: fault diagnosis, condition monitoring, local mean decomposition, rolling element bearing, vibration analysis
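The diagnostic step after the decomposition, shown here with a Hilbert-transform envelope instead of the LMD sifting itself: the instantaneous amplitude of a simulated fault-modulated component is computed and its spectrum is inspected for the fault characteristic frequency. All signal parameters are illustrative, and the full LMD procedure for obtaining the PFs is not reproduced.

```python
import numpy as np
from scipy.signal import hilbert

fs = 12_000.0
t = np.arange(0, 1.0, 1/fs)
f_fault, f_resonance = 105.0, 3_000.0      # assumed fault and resonance frequencies

# Simulated PF-like component: resonance carrier amplitude-modulated at the fault frequency.
x = (1 + 0.8*np.cos(2*np.pi*f_fault*t)) * np.cos(2*np.pi*f_resonance*t)
x += 0.2*np.random.default_rng(0).normal(size=t.size)

ia = np.abs(hilbert(x))                    # instantaneous amplitude (envelope) of the component
ia -= ia.mean()
spectrum = np.abs(np.fft.rfft(ia)) / t.size
freqs = np.fft.rfftfreq(t.size, 1/fs)
print("peak of the envelope spectrum at %.1f Hz" % freqs[spectrum.argmax()])
```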
Procedia PDF Downloads 398