Search results for: innovative method and tools

18432 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization

Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu

Abstract:

This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably produces cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing step is applied to image data frames that conform to the Manhattan world assumption; when similar data frames appear subsequently, they are used to eliminate drift in the sensor pose estimate, thereby reducing cumulative estimation errors and improving mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of those environments, providing effective technical support for applications such as indoor navigation and robot control.
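
As a minimal illustration only (not the authors' implementation; all names here are hypothetical), the core idea of using a Manhattan frame to suppress rotational drift can be sketched as snapping an estimated heading to the nearest axis of the reference Manhattan frame whenever a frame passes the Manhattan check:

```python
import numpy as np

def snap_yaw_to_manhattan(estimated_yaw_rad, manhattan_yaw_rad=0.0):
    """Snap an estimated camera yaw to the nearest Manhattan-aligned direction.

    In a Manhattan world the dominant structures are mutually orthogonal, so
    valid headings differ from the reference frame by multiples of 90 degrees.
    Replacing a drifting yaw estimate with the nearest such direction removes
    the accumulated rotational error for frames that pass the Manhattan check.
    """
    offset = estimated_yaw_rad - manhattan_yaw_rad
    snapped_offset = np.round(offset / (np.pi / 2)) * (np.pi / 2)
    return manhattan_yaw_rad + snapped_offset

# Example: a yaw estimate that has drifted 4 degrees away from a 90-degree heading
drifted = np.deg2rad(94.0)
print(np.rad2deg(snap_yaw_to_manhattan(drifted)))  # -> 90.0
```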

Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loopback detection

Procedia PDF Downloads 47
18431 OFDM Radar for Detecting a Rayleigh Fluctuating Target in Gaussian Noise

Authors: Mahboobeh Eghtesad, Reza Mohseni

Abstract:

We develop methods for detecting a target with orthogonal frequency division multiplexing (OFDM) based radars. As a preliminary step, we introduce the target and Gaussian noise models in discrete-time form. Then, using a matched filter (MF), we derive a detector for two different scenarios: a non-fluctuating target and a Rayleigh fluctuating target. It is shown that an MF is not suitable for Rayleigh fluctuating targets. For such a situation, we propose a reduced-complexity method based on the fast Fourier transform (FFT), which achieves better detection performance.
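
As a minimal sketch of the matched-filter step described above (not the authors' detector; the waveform, data, and threshold are placeholders, and in practice the threshold would be set by a CFAR criterion), FFT-based correlation of the received samples with the known transmit waveform can be compared against a threshold:

```python
import numpy as np

def matched_filter_detect(received, waveform, threshold):
    """FFT-based matched filtering: correlate the received samples with the
    known transmit waveform and declare a detection if the peak output
    exceeds the threshold."""
    n = len(received) + len(waveform) - 1
    # Correlation via multiplication in the frequency domain
    R = np.fft.fft(received, n)
    S = np.fft.fft(waveform, n)
    mf_output = np.abs(np.fft.ifft(R * np.conj(S)))
    return mf_output.max() > threshold, mf_output

# Toy example: a random OFDM-like waveform buried in Gaussian noise
rng = np.random.default_rng(0)
waveform = np.exp(2j * np.pi * rng.random(64))
received = np.concatenate([rng.normal(size=100), 3 * waveform, rng.normal(size=100)])
detected, _ = matched_filter_detect(received, waveform, threshold=100.0)
print(detected)  # True for this toy signal-to-noise level
```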

Keywords: constant false alarm rate (CFAR), matched filter (MF), fast Fourier transform (FFT), OFDM radars, Rayleigh fluctuating target

Procedia PDF Downloads 344
18430 Modulational Instability of Ion-Acoustic Wave in Electron-Positron-Ion Plasmas with Two-Electron Temperature Distributions

Authors: Jitendra Kumar Chawla, Mukesh Kumar Mishra

Abstract:

The nonlinear amplitude modulation of an ion-acoustic wave is studied in the presence of a two-electron temperature distribution in unmagnetized electron-positron-ion plasmas. The Krylov-Bogoliubov-Mitropolsky (KBM) perturbation method is used to derive the nonlinear Schrödinger equation. The dispersive and nonlinear coefficients are obtained, which depend on the temperature and concentration of the hot and cold electron species as well as the positron density and temperature. The modulationally unstable regions are studied numerically for a wide range of wave numbers. The effects of the temperature and concentration of the hot and cold electrons on the modulational stability are investigated in detail.
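
For context (a standard form, not reproduced from the paper), the KBM reduction leads to a nonlinear Schrödinger equation whose modulational instability is governed by the sign of the product of the dispersive coefficient P and the nonlinear coefficient Q:

```latex
i\,\frac{\partial a}{\partial \tau} + P\,\frac{\partial^{2} a}{\partial \xi^{2}} + Q\,|a|^{2}a = 0,
\qquad \text{modulationally unstable if } PQ > 0 .
```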

Keywords: modulational instability, ion acoustic wave, KBM method

Procedia PDF Downloads 649
18429 Microwave Dielectric Relaxation Study of Diethanolamine with Triethanolamine from 10 MHz-20 GHz

Authors: A. V. Patil

Abstract:

The microwave dielectric relaxation behaviour of diethanolamine-triethanolamine binary mixtures has been determined over the frequency range of 10 MHz to 20 GHz, at various temperatures, using the time domain reflectometry (TDR) method for 11 concentrations of the system. The present work reveals molecular interactions between the same multi-functional groups [−OH and –NH2] of the alkanolamines (diethanolamine and triethanolamine) using different models such as the Debye model, the Excess model, and the Kirkwood model. The dielectric parameters, viz. the static dielectric constant (ε0) and relaxation time (τ), have been obtained with the Debye equation, characterized by a single relaxation time without a relaxation time distribution, by the least-squares fit method.
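
For reference, the single-relaxation-time Debye model used to fit the static dielectric constant ε0 and relaxation time τ has the standard form (ε∞ being the high-frequency limit):

```latex
\varepsilon^{*}(\omega) \;=\; \varepsilon_{\infty} \;+\; \frac{\varepsilon_{0} - \varepsilon_{\infty}}{1 + j\omega\tau}
```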

Keywords: diethanolamine, excess properties, Kirkwood properties, time domain reflectometry, triethanolamine

Procedia PDF Downloads 290
18428 Socio-Cultural Adaptation Approach to Enhance Intercultural Collaboration and Learning

Authors: Fadoua Ouamani, Narjès Bellamine Ben Saoud, Henda Hajjami Ben Ghézala

Abstract:

Over the last decades, there has been a growing interest in the development of Computer Supported Collaborative Learning (CSCL) environments. However, existing systems ignore the variety of learners and their socio-cultural differences, especially in the case of distant and networked learning. In fact, within such collaborative learning environments, learners from different socio-cultural backgrounds may interact together. These learners evolve within various cultures and social contexts and acquire different socio-cultural values and behaviors. Thus, they should be assisted while communicating and collaborating, especially in an intercultural group. Besides, the communication and collaboration tools provided to each learner must depend on and be adapted to her/his socio-cultural profile. The main goal of this paper is to present the proposed socio-cultural adaptation approach, based on and guided by ontologies, to adapt CSCL environments to the socio-cultural profiles of their users (learners or others).

Keywords: CSCL, socio-cultural profile, adaptation, ontology

Procedia PDF Downloads 348
18427 Gender Diversity in Early Years Education: An Exploratory Study Applied to Preschool Curriculum System in Romania

Authors: Emilia-Gheorghina Negru

Abstract:

As an EU goal, gender diversity in early years education aims to promote equality of opportunity and respect for the gender particularities of pupils involved in formal educational activities. Early years education, as the first step of the curriculum, requires teachers to identify the role of the gender dimension at this stage, according to the age level of preschool children, through effective, complex, innovative and analytical awareness of gender diversity teaching and management strategies. Through gender-focused educational work, we as teachers examine the effectiveness of the PATHS (Promoting Alternative Thinking Strategies) curriculum on the gender development of school-aged children. A PATHS-based, school-level preventive intervention model needs to be designed to improve children's ability to discuss and understand equality and gender concepts. Teachers must create such an intervention model and deliver PATHS lessons during the school year. The intervention is expected to benefit both low- and high-risk children: improving maths skills for girls and vocabulary, fluency and emotional expression for boys when discussing gender experiences, strengthening their efficacy beliefs regarding the management of gender equality, and deepening their developmental understanding of some aspects of gender.

Keywords: gender, gender differences, gender equality, gender role, gender stereotypes

Procedia PDF Downloads 359
18426 A Classical Method of Optimizing Manufacturing Systems Using a Number of Industrial Engineering Techniques

Authors: John M. Ikome, Martha E. Ikome, Therese Van Wyk

Abstract:

Productivity optimization can significantly increase a company's output. It may take the form of corrective action on ineffective activities, process simplification, reduction of variation, improved responsiveness, and reduction of set-up time, all of which address waste within the manufacturing environment. Deriving a means to eliminate a number of these issues is of key importance for a manufacturing organization. This paper applies a number of industrial engineering techniques, including the cause and effect diagram, to identify and optimize the methods or systems being used. Our results show that there are a number of variations within the production processes that can significantly disrupt the expected output.

Keywords: optimization, fishbone diagram, productivity

Procedia PDF Downloads 297
18425 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for nutrient soil mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Networks (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern reconstruction structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project in Selangor, Malaysia were used. Soil maps were produced by the Kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). For each macronutrient element, three types of maps were generated with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the Kriging method. A set of parameters was defined to measure the similarity of the maps that were generated with the proposed method, termed the sample reduction method. The results show that the maps that were generated through the sample reduction method were more accurate than the corresponding base maps produced through a smaller number of real samples. For example, nitrogen maps that were produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN predictive samples to achieve the specified level of accuracy.
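
A minimal sketch of the kind of back-propagation feed-forward regression described above (not the authors' network architecture; the features, sample data, and hyperparameters are placeholders): a network trained on the measured samples predicts "virtual" macronutrient values at un-sampled points, which would then be combined with the real samples as kriging input.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder training data: easting/northing of sampled points and the
# corresponding laboratory N, P, K values (mg/kg).
rng = np.random.default_rng(1)
xy_sampled = rng.uniform(0, 1000, size=(118, 2))
npk_sampled = rng.uniform([20, 5, 50], [60, 30, 200], size=(118, 3))

# Back-propagation multilayer feed-forward network (two hidden layers here).
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0),
)
model.fit(xy_sampled, npk_sampled)

# Predict virtual N, P, K values at un-sampled locations for the kriging step.
xy_unsampled = rng.uniform(0, 1000, size=(118, 2))
npk_virtual = model.predict(xy_unsampled)
print(npk_virtual.shape)  # (118, 3)
```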

Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 57
18424 Aerodynamic Prediction and Performance Analysis for Mars Science Laboratory Entry Vehicle

Authors: Tang Wei, Yang Xiaofeng, Gui Yewei, Du Yanxia

Abstract:

Complex lifting entry was selected for precise landing performance during the Mars Science Laboratory entry. This study aims to develop a three-dimensional numerical method for precise computation and a surface panel method for rapid engineering prediction. Detailed flow field analysis for the Mars exploration mission was performed by carrying out a series of fully three-dimensional Navier-Stokes computations. The static aerodynamic performance was then discussed, including the surface pressure, lift and drag coefficients, and lift-to-drag ratio, with both the numerical and the engineering method. The computational results show that the shock layer is thin because of the lower effective specific heat ratio, and that the calculated results from both methods agree well with each other and are consistent with the reference data. The aerodynamic performance analysis shows that the CG location determines the trim characteristics and pitch stability, and that a certain radial or axial shift of the CG location can alter the capsule's lifting entry performance, which is of vital significance for the aerodynamic configuration design and internal instrument layout of the Mars entry capsule.
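
For reference (standard definitions, not taken from the paper), the static aerodynamic quantities discussed above are the lift-to-drag ratio and the trim and pitch-stability conditions on the pitching-moment coefficient about the centre of gravity:

```latex
\frac{L}{D} = \frac{C_{L}}{C_{D}},
\qquad
C_{m}(\alpha_{\mathrm{trim}}) = 0,
\qquad
\left.\frac{\partial C_{m}}{\partial \alpha}\right|_{\alpha_{\mathrm{trim}}} < 0 \;\;\text{(pitch stability)} .
```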

Keywords: Mars entry capsule, static aerodynamics, computational fluid dynamics, hypersonic

Procedia PDF Downloads 288
18423 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method

Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen

Abstract:

This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine to bore an over-length workpiece. In order to verify the usability of the modified tool, a design of experiment based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array is selected for four factors at three levels in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of the said parameters and to optimize the machine setting for the best surface finish. The controlled factors with the most effect are depth of cut, spindle speed, length of workpiece, and feed rate, in that order. A confirmation test is performed to verify the optimal setting obtained from the Taguchi method, and the result is satisfactory.
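
A small sketch of the smaller-the-better signal-to-noise ratio used to rank factors in a Taguchi L9 analysis (the roughness values and the L9 column shown are made up for illustration, not the paper's measurements):

```python
import numpy as np

def sn_smaller_the_better(measurements):
    """Smaller-the-better S/N ratio: S/N = -10 * log10(mean(y^2)).
    A higher S/N means lower (better) surface roughness."""
    y = np.asarray(measurements, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra values (um) from the 9 runs of an L9 array (two repeats each).
runs = [
    [3.2, 3.4], [2.8, 2.9], [2.5, 2.6],
    [2.2, 2.4], [3.0, 3.1], [2.7, 2.6],
    [2.1, 2.0], [2.9, 3.0], [2.4, 2.5],
]
sn = np.array([sn_smaller_the_better(r) for r in runs])

# Rank one three-level factor by its mean S/N at each level
# (the level assignment below is an example column of an L9 array).
factor_levels = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2])
for level in range(3):
    print(level, round(sn[factor_levels == level].mean(), 2))
```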

Keywords: design of experiment, Taguchi design, optimization, analysis of variance, machining parameters, horizontal boring tool

Procedia PDF Downloads 424
18422 Technology Computer Aided Design Simulation of Space Charge Limited Conduction in Polycrystalline Thin Films

Authors: Kunj Parikh, S. Bhattacharya, V. Natarajan

Abstract:

TCAD numerical simulation is one of the most tried and tested powerful tools for designing devices in semiconductor foundries worldwide. It has also been used to explain conduction in organic thin films, where the processing temperature is often enough to make homogeneous samples (often imperfect, but homogeneously imperfect). In this report, we present the results of TCAD simulation of multi-grain thin films. The work addresses the inhomogeneity in one dimension but can easily be extended to two and three dimensions. The effect of grain boundaries has mainly been approximated by barriers located at the junctions between adjacent grains. The effects of the grain boundary barrier height, the bulk traps, and the measurement temperature have been investigated.
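
For background (the standard trap-free result, not specific to this work), space charge limited conduction in a uniform film of thickness L follows the Mott-Gurney law, which grain-boundary barriers and bulk traps then modify:

```latex
J_{\mathrm{SCLC}} \;=\; \frac{9}{8}\,\varepsilon \mu \,\frac{V^{2}}{L^{3}}
```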

Keywords: polycrystalline thin films, space charge limited conduction, Technology Computer-Aided Design (TCAD) simulation, traps

Procedia PDF Downloads 200
18421 Evaluation of Soil Modulus Variation by IS 2911 and Broms Method

Authors: Mandeep Kamboj, Anand R. Katti

Abstract:

A pile of 2.4 m diameter is subjected to lateral loads and moments. These lateral loads are caused by wind/wave forces when such piles are used in the foundations of structures such as bridge piers and high-rise towers, and they produce deflections that vary with depth. Researchers and developers have studied and developed various procedures to evaluate the coefficient of soil modulus variation (nh) using various methods. These are verified for slender piles in sand with various diameters up to 2.4 m. This work presents a simplified approach to the theoretical values using the IS 2911 procedure and the Broms method and compares them with the actual field soil pressure/displacement distributions measured in a mono-pile along its length and across its diameter.
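
For context (stated here in its common textbook form for piles in sand, not quoted from IS 2911 or Broms), the coefficient of soil modulus variation nh describes a modulus of subgrade reaction that grows linearly with depth z, which in turn sets the relative stiffness factor of a pile of diameter D and flexural rigidity EI:

```latex
k_{h}(z) \;=\; \frac{n_{h}\,z}{D},
\qquad
T \;=\; \left(\frac{EI}{n_{h}}\right)^{1/5}
```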

Keywords: bridge pier, lateral loads, mono-pile, slender piles

Procedia PDF Downloads 176
18420 Canada Deuterium Uranium Updated Fire Probabilistic Risk Assessment Model for Canadian Nuclear Plants

Authors: Hossam Shalabi, George Hadjisophocleous

Abstract:

Canadian Nuclear Power Plants (NPPs) use some portions of NUREG/CR-6850 in carrying out Fire Probabilistic Risk Assessment (PRA). An assessment of the applicability of NUREG/CR-6850 to CANDU reactors was performed, and a CANDU Fire PRA was introduced. There are 19 operating CANDU reactors in Canada at five sites (Bruce A, Bruce B, Darlington, Pickering and Point Lepreau). A fire load density survey was done for all Fire Safe Shutdown Analysis (FSSA) fire zones at all CANDU sites in Canada. National Fire Protection Association (NFPA) Standard 557 proposes that a fire load survey be conducted by either the weighing method, the inventory method, or a combination of both; the combination method results in the most accurate fire load values. An updated CANDU Fire PRA model that includes the fuel survey of all Canadian CANDU stations is demonstrated in this paper. A qualitative screening step for the CANDU Fire PRA is also illustrated, which includes any fire events that can damage any part of the emergency power supply in addition to FSSA cables.
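
For reference, the fire load density obtained from a weighing or inventory survey is typically computed as the total calorific content of the combustibles in a zone divided by its floor area (a generic definition, not quoted from NFPA 557):

```latex
q'' \;=\; \frac{\sum_{i} M_{i}\,\Delta H_{c,i}}{A_{f}}
```

where M_i is the mass of combustible item i, ΔH_c,i its heat of combustion, and A_f the floor area of the fire zone.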

Keywords: fire safety, CANDU, nuclear, fuel densities, FDS, qualitative analysis, fire probabilistic risk assessment

Procedia PDF Downloads 126
18419 Seismic Vulnerability Analysis of Arch Dam Based on Response Surface Method

Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong

Abstract:

Earthquakes are one of the main loads threatening dam safety. Once a dam is damaged, it will bring huge losses of life and property to the country and its people. Therefore, it is very important to study the seismic safety of dams. Due to the complex foundation conditions, high fortification intensity, and high scientific and technological content, it is necessary to adopt reasonable methods to evaluate the seismic safety performance of concrete arch dams built and under construction in strong earthquake areas. Structural seismic vulnerability analysis can predict the probability of structural failure at all levels under earthquakes of different intensities, which can provide a scientific basis for reasonable seismic safety evaluation and decision-making. In this paper, the response surface method (RSM) is applied to the seismic vulnerability analysis of arch dams, which improves the efficiency of the vulnerability analysis. Based on the central composite test design method, the material-seismic intensity samples are established. The response surface model with arch crown displacement as the performance index is obtained by finite element (FE) calculation of the samples, and the accuracy of the response surface model is then verified. To obtain the seismic vulnerability curves, the seismic intensity measure Sa(T1) is chosen to range from 0.1 to 1.2 g, with an interval of 0.1 g and a total of 12 intensity levels. For each seismic intensity level, the arch crown displacement corresponding to 100 sets of different material samples can be calculated by algebraic operation of the response surface model, which avoids 1200 nonlinear dynamic calculations of the arch dam; thus, the efficiency of the vulnerability analysis is greatly improved.
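
As background (generic forms, not the paper's fitted coefficients), the central composite design is typically used to fit a second-order response surface for the performance index, and the fragility at each intensity level follows from the probability that the predicted arch crown displacement exceeds the limit-state threshold:

```latex
\hat{y}(\mathbf{x}) \;=\; \beta_{0} + \sum_{i}\beta_{i}x_{i} + \sum_{i}\beta_{ii}x_{i}^{2} + \sum_{i<j}\beta_{ij}x_{i}x_{j},
\qquad
P_{f}(im) \;=\; P\!\left[\hat{y}(\mathbf{X}) \geq y_{\mathrm{lim}} \mid IM = im\right]
```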

Keywords: high concrete arch dam, performance index, response surface method, seismic vulnerability analysis, vector-valued intensity measure

Procedia PDF Downloads 231
18418 Effect of Testing Device Calibration on Liquid Limit Assessment

Authors: M. O. Bayram, H. B. Gencdal, N. O. Fercan, B. Basbug

Abstract:

The liquid limit, which is used as a measure of soil strength, can be determined by the Casagrande and fall-cone testing methods. The two methods diverge from each other mainly in terms of operator dependency. The Casagrande method, applied according to the ASTM D4318-17 standard, may give misleading results, especially if the calibration process is not performed well. To reveal the effect of calibration of the drop height and of the amount of soil paste placed in the Casagrande cup, a series of tests was carried out by the multipoint method as specified in the ASTM standards. The tests include combinations of 6 mm, 8 mm, 10 mm, and 12 mm drop heights and under-filled, half-filled, and fully filled Casagrande cups with kaolinite samples. It was observed that during successive tests the drop height of the cup deteriorated; hence the device was recalibrated before and after each test to ensure the accuracy of the results. Besides, the tests with under-filled and fully filled cups at higher drop heights yielded lower liquid limit values than those at lower drop heights. For the half-filled samples, it was clearly seen that the liquid limit values did not change at all as the drop height increased, and this demonstrates the function of the standard specifications.

Keywords: calibration, Casagrande cup method, drop height, kaolinite, liquid limit, placing form

Procedia PDF Downloads 143
18417 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models

Authors: Benbiao Song, Yan Gao, Zhuo Liu

Abstract:

Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different degrees of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based and MPFS algorithms, accompanied by different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling condition and parameter association. In total, 5760 simulations were done to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It is found that data density, geological trend and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when the channel sand width reaches 1.5 times the well spacing under whatever condition by the SIS and MPFS methods. When well density is low, the contribution of the geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may make a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex and the data are insufficient, it is better to construct a set of robust geological trends than to rely on a reliable variogram function. For the object-based method, the modeling accuracy does not increase as obviously with data density as for the SIS method, but the models keep a rational appearance when data density is low. MPFS methods show a similar trend to the SIS method, but the use of a proper geological trend accompanied by a rational variogram may give better modeling accuracy than the MPFS method alone. This implies that the geological modeling strategy for a real reservoir case needs to be optimized by evaluating the dataset, geological complexity, geological constraint information and the modeling objective.
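
For reference, the experimental semivariogram that constrains SIS and MPFS realizations is estimated from the well data with the standard estimator:

```latex
\gamma(\mathbf{h}) \;=\; \frac{1}{2N(\mathbf{h})} \sum_{k=1}^{N(\mathbf{h})} \bigl[z(\mathbf{x}_{k}) - z(\mathbf{x}_{k}+\mathbf{h})\bigr]^{2}
```

where N(h) is the number of data pairs separated by the lag vector h.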

Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram

Procedia PDF Downloads 249
18416 Case Study about Women Driving in Saudi Arabia Announced in 2018: Netnographic and Data Mining Study

Authors: Majdah Alnefaie

Abstract:

A netnographic study and data mining have been used to monitor public interaction on social media sites (SMSs) to understand which motivational factors influenced Saudi intentions regarding allowing women to drive in Saudi Arabia in 2018. The netnographic study monitored the public's textual and visual communications on Twitter, Snapchat, and YouTube. SMS users' communication is also known as electronic word of mouth (eWOM). The netnography methodology is still in its initial stages, as it depends on manual extraction, reading and classification of SMS users' text. On the other hand, data mining comes from a computer and physical sciences background; therefore, it is much harder to extract meaning from unstructured qualitative data. In addition, recent developments in data mining software do not support Arabic text, especially the local slang used in Saudi Arabia. Therefore, collaborations between social and computer scientists, such as the netnographic study and data mining, will enhance the efficiency of this study's methodology, leading to a comprehensive research outcome. The eWOM communications between individuals on SMSs can promote a sense that sharing their preferences and experiences regarding politics and social government regulations is a part of their daily life, highlighting the importance of using SMSs as an aid in promoting political and social participation. Therefore, public interactions on SMSs are important tools for comprehending people's intentions regarding new government regulations in the country. This study aims to answer the question, "What factors influenced Saudi Arabians' intentions regarding Saudi females' car driving in 2018?" The study utilized a qualitative method known as the netnographic study. The study used RStudio to collect and analyse 27,000 Saudi users' comments from 25th May until 25th June 2018. The study developed a data collection model that supports importing and analysing Arabic text in the local slang. The data collection in this study was clustered based on different types of social networks, gender and the study's main factors. Social network analysis was employed to collect comments from SMS accounts owned by government organizations, celebrities, vloggers, social activists and news outlets. The comments were collected from both male and female SMS users. The sentiment analysis shows that the total number of positive comments on Saudi female car driving was higher than that of negative comments. The data provided the most important factors that influenced Saudi Arabians' intentions regarding Saudi female car driving, including culture and environment, freedom of choice, equal opportunities, and security and safety. The most interesting finding indicated that women driving would play a role in increasing individual freedom of choice: Saudi women will be able to drive cars to fulfil their daily life and family needs without the stress caused by a lack of transportation. The study outcomes will help the Saudi government improve women's quality of life by increasing their ability to find jobs and study opportunities, increasing income by decreasing spending on transport such as taxis, and providing more freedom of choice in women's daily needs. The study underlines the importance of using marketing research to measure public opinion on new government regulations in the country. The study also explains its limitations and offers suggestions for future research.
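
A minimal sketch of the kind of lexicon-based sentiment tallying described above (written in Python rather than the R workflow actually used in the study; the lexicon terms and comments below are invented for illustration):

```python
from collections import Counter

# Hypothetical lexicon mapping local-slang terms to a sentiment polarity.
LEXICON = {"ممتاز": 1, "حرية": 1, "أرفض": -1, "خطر": -1}

def score_comment(comment: str) -> int:
    """Sum the polarities of known lexicon terms appearing in a comment."""
    return sum(LEXICON.get(token, 0) for token in comment.split())

def tally(comments):
    """Classify each comment as positive, negative, or neutral and count them."""
    labels = []
    for c in comments:
        s = score_comment(c)
        labels.append("positive" if s > 0 else "negative" if s < 0 else "neutral")
    return Counter(labels)

sample_comments = ["قرار ممتاز يعطي حرية أكبر", "أرفض القرار", "لا تعليق"]
print(tally(sample_comments))  # e.g. Counter({'positive': 1, 'negative': 1, 'neutral': 1})
```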

Keywords: netnographic study, data mining, social media, Saudi Arabia, female driving

Procedia PDF Downloads 138
18415 Counter-Current Extraction of Fish Oil and Toxic Elements from Fish Waste Using Supercritical Carbon Dioxide

Authors: Parvaneh Hajeb, Shahram Shakibazadeh, Md. Zaidul Islam Sarker

Abstract:

High-quality fish oil for human consumption requires low levels of toxic elements. The aim of this study was to develop a method to extract oil from fish wastes with the least toxic elements contamination. Supercritical fluid extraction (SFE) was applied to detoxify fish oils from toxic elements. The SFE unit used consisted of an intelligent HPLC pump equipped with a cooling jacket to deliver CO2. The freeze-dried fish waste sample was extracted by heating in a column oven. Under supercritical conditions, the oil dissolved in CO2 was separated from the supercritical phase using pressure reduction. The SFE parameters (pressure, temperature, CO2 flow rate, and extraction time) were optimized using response surface methodology (RSM) to extract the highest levels of toxic elements. The results showed that toxic elements in fish oil can be reduced using supercritical CO2 at optimum pressure 40 MPa, temperature 61 ºC, CO2 flow rate 3.8 MPa, and extraction time 4.25 hr. There were significant reductions in the mercury (98.2%), cadmium (98.9%), arsenic (96%), and lead contents (99.2%) of the fish oil. The fish oil extracted using this method contained elements at levels that were much lower than the accepted limits of 0.1 μg/g. The reduction of toxic elements using the SFE method was more efficient than that of the conventional methods due to the high selectivity of supercritical CO2 for non-polar compounds.

Keywords: food safety, toxic elements, fish oil, supercritical carbon dioxide

Procedia PDF Downloads 410
18414 Finite Difference Method of the Seismic Analysis of Earth Dam

Authors: Alaoua Bouaicha, Fahim Kahlouche, Abdelhamid Benouali

Abstract:

Many embankment dams have suffered failures during earthquakes due to the increase of pore water pressure under seismic loading. From analyses of the behavior of embankment dams under severe earthquakes, major advances have been attained in the understanding of the seismic action on dams. The present study concerns the numerical analysis of the seismic response of earth dams. The procedure uses a nonlinear stress-strain relation incorporated into the code FLAC2D, based on the finite difference method. This analysis provides the variation of the pore water pressure and the horizontal displacement.

Keywords: earthquake, numerical analysis, FLAC2D, displacement, embankment dam, pore water pressure

Procedia PDF Downloads 368
18413 Frobenius Manifolds Pairing and Invariant Theory

Authors: Zainab Al-Maamari, Yassir Dinar

Abstract:

The orbit space of an irreducible representation of a finite group is a variety with the ring of invariant polynomials as its coordinate ring. The invariant ring is a polynomial ring if and only if the representation is a reflection representation. Boris Dubrovin showed that the orbit spaces of irreducible real reflection representations acquire the structure of polynomial Frobenius manifolds. Dubrovin's method was also used to construct different examples of Frobenius manifolds on certain reflection representations. By successfully applying Dubrovin's method to non-polynomial invariant rings of linear representations of dicyclic groups, we obtain results that highlight the relation between invariant theory and Frobenius manifolds.

Keywords: invariant ring, Frobenius manifold, inversion, representation theory

Procedia PDF Downloads 83
18412 Effect of Cost Control and Cost Reduction Techniques in Organizational Performance

Authors: Babatunde Akeem Lawal

Abstract:

In any organization, the primary aim is to maximize profit, but a major challenge is the rising cost of operation; this raises the cost of production and makes cost control and cost reduction schemes inevitable, without which it is difficult for most organizations to operate at the cost-efficient frontier. The study aims to critically examine and evaluate the application of cost control and cost reduction to organizational performance and also to review the budget as an effective tool of cost control and cost reduction. A descriptive survey research design was adopted. A total of 40 retrieved responses were used for the study. The analysis of the data collected was undertaken by applying appropriate statistical tools. Regression analysis was used to test the hypothesis with the use of SPSS. Based on the findings, it was evident that cost control has a positive impact on organizational performance and that the style of management also has a positive impact on organizational performance.

Keywords: organization, cost reduction, cost control, performance, budget, profit

Procedia PDF Downloads 580
18411 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico

Authors: Ismene Ithai Bras-Ruiz

Abstract:

Learning Analytics (LA) is employed by universities not only as a tool but as a specialized field to support students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA is formed by five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not just to inform academic authorities about the situation of the program, but also to detect at-risk students, professors with needs, or general problems. The highest level applies Artificial Intelligence techniques to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. It is expected that each academic program decides which field it wants to utilize on the basis of its academic interests but also its capacities in terms of professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communication technologies (ICT). Today, UNAM has one of the largest networked higher education programs, with twenty-six academic programs in different faculties. This situation means that every faculty works with heterogeneous populations and academic problems. In this sense, every program has developed its own Learning Analytics techniques to address academic issues. In this context, an investigation was carried out to determine the state of the application of LA in the academic programs of the different faculties. The premise of the study was that not all faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all programs know about LA, but this does not mean they do not work with LA in a veiled or less explicit sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows an appreciation of the administration's work to improve the quality of teaching, and 2) it shows whether other LA techniques can be introduced. For this purpose, three instruments were designed to determine experience and knowledge of LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretary, systems manager or data analyst, and coordinator of the program). The final report shows that almost all programs work with basic statistical tools and techniques; this only helps the administration know what is happening inside the academic program, but they are not ready to move up to the next level, which means applying Artificial Intelligence or Recommender Systems to reach a personalized learning system. This situation is not related to knowledge of LA, but to the clarity of long-term goals.

Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise

Procedia PDF Downloads 115
18410 Gas Pressure Evaluation through Radial Velocity Measurement of Fluid Flow Modeled by Drift Flux Model

Authors: Aicha Rima Cheniti, Hatem Besbes, Joseph Haggege, Christophe Sintes

Abstract:

In this paper, we consider a drift flux mixture model of blood flow. The mixture consists of a gas phase, which is carbon dioxide, and a liquid phase, which is an aqueous carbon dioxide solution. This model was used to determine the distributions of the mixture velocity, the mixture pressure, and the carbon dioxide pressure. These theoretical data are used to derive a method for measuring the mean gas pressure through the determination of the radial velocity distribution. This method can be applicable in the experimental domain.
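
For reference, the drift flux closure underlying such a mixture model relates the gas phase velocity to the mixture volumetric flux through a distribution parameter C0 and a drift velocity (a standard form, not the paper's specific closure):

```latex
u_{g} \;=\; C_{0}\, j \;+\; u_{gj},
\qquad
j \;=\; \alpha u_{g} + (1-\alpha)u_{l}
```

where α is the gas volume fraction and u_l the liquid phase velocity.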

Keywords: mean carbon dioxide pressure, mean mixture pressure, mixture velocity, radial velocity

Procedia PDF Downloads 308
18409 A Reinforcement Learning Based Method for Heating, Ventilation, and Air Conditioning Demand Response Optimization Considering Few-Shot Personalized Thermal Comfort

Authors: Xiaohua Zou, Yongxin Su

Abstract:

The reasonable operation of heating, ventilation, and air conditioning (HVAC) is of great significance in improving the security, stability, and economy of power system operation. However, the uncertainty of the operating environment, thermal comfort that varies between users, and the need for rapid decision-making pose challenges for HVAC demand response (DR) optimization. In this regard, this paper proposes a reinforcement learning-based method for HVAC demand response optimization considering few-shot personalized thermal comfort (PTC). First, an HVAC DR optimization framework based on a few-shot PTC model and deep reinforcement learning (DRL) is designed, in which the output of the few-shot PTC model is regarded as the input of the DRL agent. Then, a few-shot PTC model that distinguishes between awake and asleep states is established, which has excellent engineering usability. Next, based on soft actor-critic, an HVAC DR optimization algorithm considering the user's PTC is designed to deal with uncertainty and make decisions rapidly. Experimental results show that the proposed method can efficiently obtain the user's PTC temperature, reduce energy cost while ensuring the user's PTC, and achieve rapid decision-making under uncertainty.
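
A highly simplified sketch of the decision loop implied above (this is not the authors' soft actor-critic agent or PTC model; the state variables, reward weights, plant model, and placeholder policy are all assumptions for illustration):

```python
import random

def reward(energy_cost, indoor_temp, ptc_temp, comfort_weight=2.0):
    """Trade off electricity cost against deviation from the user's
    personalized thermal comfort (PTC) temperature."""
    return -energy_cost - comfort_weight * abs(indoor_temp - ptc_temp)

def placeholder_policy(state):
    """Stand-in for a trained DRL policy: pick an HVAC setpoint near the PTC."""
    return state["ptc_temp"] + random.uniform(-1.0, 1.0)

state = {"indoor_temp": 28.0, "outdoor_temp": 33.0, "price": 0.12, "ptc_temp": 25.0}
for step in range(3):
    setpoint = placeholder_policy(state)
    # Crude plant model: the room moves toward the setpoint; cost grows with the gap.
    energy_cost = state["price"] * abs(state["indoor_temp"] - setpoint)
    state["indoor_temp"] += 0.5 * (setpoint - state["indoor_temp"])
    r = reward(energy_cost, state["indoor_temp"], state["ptc_temp"])
    print(f"step {step}: setpoint={setpoint:.1f}, reward={r:.2f}")
```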

Keywords: HVAC, few-shot personalized thermal comfort, deep reinforcement learning, demand response

Procedia PDF Downloads 63
18408 Brainbow Image Segmentation Using Bayesian Sequential Partitioning

Authors: Yayun Hsu, Henry Horng-Shing Lu

Abstract:

This paper proposes a data-driven, biology-inspired neural segmentation method of 3D drosophila Brainbow images. We use Bayesian Sequential Partitioning algorithm for probabilistic modeling, which can be used to detect somas and to eliminate cross talk effects. This work attempts to develop an automatic methodology for neuron image segmentation, which nowadays still lacks a complete solution due to the complexity of the image. The proposed method does not need any predetermined, risk-prone thresholds since biological information is inherently included in the image processing procedure. Therefore, it is less sensitive to variations in neuron morphology; meanwhile, its flexibility would be beneficial for tracing the intertwining structure of neurons.

Keywords: brainbow, 3D imaging, image segmentation, neuron morphology, biological data mining, non-parametric learning

Procedia PDF Downloads 474
18407 A Validated UPLC-MS/MS Assay Using Negative Ionization Mode for High-Throughput Determination of Pomalidomide in Rat Plasma

Authors: Muzaffar Iqbal, Essam Ezzeldin, Khalid A. Al-Rashood

Abstract:

Pomalidomide is a second-generation oral immunomodulatory agent used for the treatment of multiple myeloma in patients with disease refractory to lenalidomide and bortezomib. In this study, a sensitive UPLC-MS/MS assay was developed and validated for the high-throughput determination of pomalidomide in rat plasma using celecoxib as an internal standard (IS). Liquid-liquid extraction with dichloromethane as the extracting agent was employed to extract pomalidomide and the IS from 200 µL of plasma. Chromatographic separation was carried out on an Acquity BEH™ C18 column (50 × 2.1 mm, 1.7 µm) using an isocratic mobile phase of acetonitrile:10 mM ammonium acetate (80:20, v/v) at a flow rate of 0.250 mL/min. Pomalidomide and the IS eluted at 0.66 ± 0.03 and 0.80 ± 0.03 min, respectively, with a total run time of only 1.5 min. Detection was performed on a triple quadrupole tandem mass spectrometer using electrospray ionization in negative mode. The precursor-to-product ion transitions of m/z 272.01 → 160.89 for pomalidomide and m/z 380.08 → 316.01 for the IS were used to quantify them, respectively, using multiple reaction monitoring mode. The developed method was validated according to regulatory guidelines for bioanalytical method validation. Linearity in plasma samples was achieved over the concentration range of 0.47–400 ng/mL (r2 ≥ 0.997). The intra- and inter-day precision values were ≤ 11.1% (RSD, %), whereas accuracy values ranged from −6.8 to 8.5% (RE, %). In addition, the other validation results were within the acceptance criteria, and the method was successfully applied in a pharmacokinetic study of pomalidomide in rats.
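
For illustration, the precision (RSD, %) and accuracy (RE, %) figures quoted above can be computed from replicate quality-control measurements as follows (the replicate values below are invented, not the study's data):

```python
import statistics

def rsd_percent(replicates):
    """Relative standard deviation (precision), in percent."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def re_percent(replicates, nominal):
    """Relative error (accuracy) of the mean versus the nominal concentration, in percent."""
    return 100.0 * (statistics.mean(replicates) - nominal) / nominal

qc_replicates = [98.2, 101.5, 97.8, 102.3, 99.6, 100.9]  # ng/mL, hypothetical
print(f"RSD = {rsd_percent(qc_replicates):.1f}%")
print(f"RE  = {re_percent(qc_replicates, nominal=100.0):+.1f}%")
```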

Keywords: pomalidomide, pharmacokinetics, LC-MS/MS, celecoxib

Procedia PDF Downloads 370
18406 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (Catboost, LightGBM, Sklearn, etc.) as well as common Neural Network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
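
A generic sketch of the sliding-window adaptive retraining idea discussed above (this is not FreqAI's actual API; the feature columns, window lengths, and regressor choice are placeholders):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def adaptive_backtest(features, target, train_window=500, retrain_every=50):
    """Walk forward through time: retrain on the most recent window, then
    predict the next block, mimicking periodic retraining in live deployment."""
    preds = np.full(len(target), np.nan)
    for start in range(train_window, len(target), retrain_every):
        X_train = features[start - train_window:start]
        y_train = target[start - train_window:start]
        model = GradientBoostingRegressor().fit(X_train, y_train)
        stop = min(start + retrain_every, len(target))
        preds[start:stop] = model.predict(features[start:stop])
    return preds

# Placeholder data: engineered indicator features and a short-horizon return target.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 8))
y = X @ rng.normal(size=8) * 0.01 + rng.normal(scale=0.02, size=2000)
predictions = adaptive_backtest(X, y)
mask = ~np.isnan(predictions)
print((np.sign(predictions[mask]) == np.sign(y[mask])).mean())  # directional hit rate
```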

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 76
18405 Controlling of Water Temperature during the Electrocoagulation Process Using an Innovative Flow Columns-Electrocoagulation Reactor

Authors: Khalid S. Hashim, Andy Shaw, Rafid Alkhaddar, Montserrat Ortoneda Pedrola

Abstract:

A flow column has been innovatively used in the design of a new electrocoagulation reactor (ECR1) that reduces the temperature of the water being treated, where the flow columns work as a radiator for the water being treated. In order to investigate the performance of ECR1 and compare it to that of traditional reactors, 600 mL water samples with an initial temperature of 35 °C were pumped continuously through these reactors for 30 min at a current density of 1 mA/cm2. The temperature of the water being treated was measured at 5-minute intervals over a 30-minute period using a thermometer. Additional experiments were carried out to investigate the effects of initial temperature (15-35 °C), water conductivity (0.15-1.2 S) and current density (0.5-3 mA/cm2) on the performance of ECR1. The results obtained demonstrate that ECR1, at a current density of 1 mA/cm2 and in the continuous flow model, reduced the water temperature from 35 °C to the vicinity of 28 °C during the first 15 minutes and kept it at the same level until the end of the treatment time, while the temperature increased from 28.1 to 29.8 °C and from 29.8 to 31.9 °C in the batch and the traditional continuous flow models, respectively. In terms of initial temperature, ECR1 maintained the temperature of the water being treated within the range of 22 to 28 °C without the need for an external cooling system, even when the initial temperature varied over a wide range (15 to 35 °C). The influent water conductivity was found to be a significant variable that affects the temperature; the desirable value of water conductivity is 0.6 S. However, it was found that the water temperature increased rapidly at a higher current density.

Keywords: water temperature, flow column, electrocoagulation

Procedia PDF Downloads 363
18404 Concentration of Droplets in a Transient Gas Flow

Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal

Abstract:

The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena; for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating concentration and their implementation in the commercial CFD code ANSYS Fluent, are presented here. The study is motivated by the investigation of the mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis: the Fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements, as it does not require the counting of large numbers of tracked droplets as in the case of the conventional Lagrangian approach. In the Eulerian approach, the average droplet velocity is expressed as a function of the carrier phase velocity as an expansion over the droplet response time, and the transport equation can be solved in Eulerian form. The advantage of this method is that the droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions from the two approaches were compared in the analysis of the problem of a dilute gas-droplet flow around an infinitely long circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1, and 0.2, in steady-state and transient laminar flow conditions, were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated. It has been shown that the results predicted using both methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes numbers of 0.1 and 0.2; Reynolds numbers of 10 and 100), the Eulerian approach predicted a wider spread in concentration in the perturbations caused by the cylinder, which can be attributed to the averaged droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted a high droplet concentration in the zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method were up to two orders of magnitude greater than those predicted by the Eulerian method; a significant difference for an approach widely used in engineering applications. Based on the results of these comparisons, the Osiptsov method has resulted in a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of the results of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. The preliminary results show good qualitative agreement between the predictions of the model and the experimental data.
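
For reference (the standard statement of the fully Lagrangian approach, not the paper's full derivation), the droplet number density along a trajectory follows from the continuity equation and the Jacobian of the Eulerian-Lagrangian mapping:

```latex
n(\mathbf{x}(t), t) \;=\; \frac{n_{0}(\mathbf{x}_{0})}{\lvert \det \mathbf{J} \rvert},
\qquad
J_{ij} \;=\; \frac{\partial x_{i}(t;\mathbf{x}_{0})}{\partial x_{0j}}
```

so the concentration is obtained by integrating the equations for the components of J along each droplet path line rather than by counting tracked droplets.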

Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations

Procedia PDF Downloads 247
18403 Kinetic Façade Design Using 3D Scanning to Convert Physical Models into Digital Models

Authors: Do-Jin Jang, Sung-Ah Kim

Abstract:

In designing a kinetic façade, it is hard for the designer to make digital models due to their complex geometry and motion. This paper aims to present a methodology for converting a point cloud of a physical model into a single digital model with a certain topology and motion. The method uses a Microsoft Kinect sensor, and color markers were defined and applied to three paper-folding-inspired designs. Although the resulting digital model cannot represent the whole folding range of the physical model, the method supports the designer in conducting a performance-oriented design process with a rough physical model over the reduced folding range.

Keywords: design media, kinetic facades, tangible user interface, 3D scanning

Procedia PDF Downloads 404