Search results for: finite state method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25278

17688 On the Performance of Improvised Generalized M-Estimator in the Presence of High Leverage Collinearity Enhancing Observations

Authors: Habshah Midi, Mohammed A. Mohammed, Sohel Rana

Abstract:

Multicollinearity occurs when two or more independent variables in a multiple linear regression model are highly correlated. Ridge regression is the commonly used method to rectify this problem. However, ridge regression cannot handle multicollinearity that is caused by high leverage collinearity enhancing observations (HLCEO). Since high leverage points (HLPs) are responsible for inducing multicollinearity, the effect of HLPs needs to be reduced by using a Generalized M (GM) estimator. The existing GM6 estimator is based on the Minimum Volume Ellipsoid (MVE), which tends to swamp some low leverage points. Hence, an improvised GM (MGM) estimator is presented to improve the precision of the GM6 estimator. A numerical example and a simulation study are presented to show how HLPs can cause multicollinearity. The numerical results show that our MGM estimator is the most efficient method compared to some existing methods.
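
For illustration only, a minimal sketch of the two ingredients named above, flagging high leverage points via the hat matrix and fitting ridge regression as the conventional remedy; this is not the authors' MGM estimator, and the data and threshold are assumed:

```python
# Illustrative sketch (not the MGM estimator): flag high leverage points via the
# hat matrix and fit ridge regression with scikit-learn on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
X[:5] += 8.0                      # hypothetical high leverage observations
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=n)

# Leverage = diagonal of the hat matrix H = X (X'X)^-1 X'
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)
cutoff = 2 * p / n                # a common rule-of-thumb threshold
print("flagged HLPs:", np.where(leverage > cutoff)[0])

# Ridge regression as the conventional remedy for multicollinearity
ridge = Ridge(alpha=1.0).fit(X, y)
print("ridge coefficients:", ridge.coef_)
```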

Keywords: identification, high leverage points, multicollinearity, GM-estimator, DRGP, DFFITS

Procedia PDF Downloads 242
17687 X-Corner Detection for Camera Calibration Using Saddle Points

Authors: Abdulrahman S. Alturki, John S. Loomis

Abstract:

This paper discusses a corner detection algorithm for camera calibration. Calibration is a necessary step in many computer vision and image processing applications. Robust corner detection for an image of a checkerboard is required to determine intrinsic and extrinsic parameters. In this paper, an algorithm for fully automatic and robust X-corner detection is presented. Checkerboard corner points are automatically found in each image without user interaction or any prior information regarding the number of rows or columns. The approach represents each X-corner with a quadratic fitting function. Using the fact that the X-corners are saddle points, the coefficients in the fitting function are used to identify each corner location. The automation of this process greatly simplifies calibration. Our method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of our method using actual images acquired at different camera locations and orientations.
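
A minimal sketch of the saddle-point idea described above, with assumed details (patch handling, thresholds) rather than the authors' exact pipeline: fit a quadratic surface to a local patch and accept the stationary point as an X-corner only if it is a saddle.

```python
# Fit f(x, y) = a*x^2 + b*x*y + c*y^2 + d*x + e*y + g to an image patch and
# return the sub-pixel saddle-point location, if the stationary point is a saddle.
import numpy as np

def fit_saddle(patch):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y, z = xs.ravel(), ys.ravel(), patch.ravel().astype(float)
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    a, b, c, d, e, g = np.linalg.lstsq(A, z, rcond=None)[0]
    # Hessian [[2a, b], [b, 2c]]: a saddle point requires 4ac - b^2 < 0
    if 4*a*c - b*b >= 0:
        return None
    cx, cy = np.linalg.solve([[2*a, b], [b, 2*c]], [-d, -e])  # grad f = 0
    return cx, cy   # sub-pixel corner location within the patch

patch = np.outer(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))  # synthetic saddle
print(fit_saddle(patch))
```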

Keywords: camera calibration, corner detector, edge detector, saddle points

Procedia PDF Downloads 394
17686 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity of the information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes can protect against any configuration of errors with a predetermined probability. This is accomplished by the use of perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. Wavelet transforms are applied in various fields of science; some of their applications are cleaning a signal of noise, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first class is based on the multiplicative inverse in a finite field, and in the second construction the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes.
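
A minimal sketch of the second idea mentioned above, a code whose redundancy is the cube of the information part; the field GF(2^8) with the AES polynomial is our assumption for illustration, not the construction used in the paper:

```python
# Cubic-redundancy robust code over GF(2^8) (illustrative field choice).
def gf_mul(a, b, poly=0x11B):          # multiplication in GF(2^8); AES polynomial assumed
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_cube(x):
    return gf_mul(gf_mul(x, x), x)

def encode(info):                       # codeword = (information, redundancy = info^3)
    return info, gf_cube(info)

def check(info, redundancy):            # an error is masked only if the relation still holds
    return gf_cube(info) == redundancy

info, red = encode(0x57)
print(check(info, red))                 # True for an undistorted codeword
print(check(info ^ 0x01, red))          # a distorted codeword is detected (with high probability)
```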

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 479
17685 A Finite Element Model for the Study of Buried Pipelines Affected by a Strike-Slip Fault

Authors: Reza Akbari, Jalal Montazeri Fashtali, Peyman Momeni Taromsari

Abstract:

Pipeline systems play an important role as a vital element in reducing or increasing the risk of earthquake damage and vulnerability. Pipelines are suitable, cheap, fast, and safe routes for transporting oil, gas, water, sewage, etc. These pipelines must pass through a wide geographical area; hence they are structurally exposed to different environmental and underground factors, including earthquake forces. Therefore, structural engineering analysis and design for this type of line requires an understanding of the behavior of the relevant parameters; a lack of familiarity with them can cause irreparable damage and risks in design and execution, especially in the face of earthquakes. Today, buried pipelines play an important role in the human life cycle; thus, studying the vulnerability of pipeline systems is of particular importance. This study examines the behavior of buried pipelines affected by a strike-slip fault. The studied fault is perpendicular to the pipe axis and causes stress and deformation in the pipe by sliding horizontally. In this study, the pipe-soil interaction is accurately simulated, so that one can examine the large displacements and strains, nonlinear material behavior, and contact and friction conditions of the soil and pipe. The results can be used for designing buried pipes and determining the amount of fault displacement that causes the failure of the buried pipes.

Keywords: pipelines, earthquake, fault, soil-fault interaction

Procedia PDF Downloads 437
17684 Impact of Integrated Watershed Management Programme Based on Four Waters Concept: A Case Study of Sali Village, Rajasthan State of India

Authors: Garima Sharma, R. N. Sharma

Abstract:

An integrated watershed management programme based on the 'Four Waters Concept' was implemented in Sali village, Jaipur District, Rajasthan State of India. Latitude 26.7234486 North and longitude 75.023876 East are the geocoordinates of Sali. The 'Four Waters Concept' is evolved by integrating the four waters, viz. rain water, soil moisture, ground water, and surface water. This methodology involves various water harvesting techniques to prevent the runoff of water by treatment of the catchment, proper utilization of available water harvesting structures, renovation of non-functional water harvesting structures, and creation of new water harvesting structures. The case study included a questionnaire survey of farmers and a continuous study of the village for two years. The total project area is 6153 ha, and the project cost is Rs. 92.25 million. The sanctioned area of the Sali micro watershed is 2228 ha with an outlay of Rs. 10.52 million. Watershed treatment activities such as water absorption trenches, continuous contour trenches, field bunding, and check dams were undertaken on agricultural lands for soil and water conservation. These measures have contributed to preventing runoff and increased the perennial availability of water in wells. According to the survey, the water level in open wells in the area has risen by approximately 5 metres after the introduction of water harvesting structures. The continuous availability of water in wells has increased the area under irrigation and helped in crop diversification. Watershed management activities have brought changes in cropping patterns and crop productivity. They helped in transforming 567 ha of culturable waste land into culturable arable land in the village. The farmers of the village have created an additional income from the increased crop production. The programme also assured the availability of water during peak summers for the day-to-day activities of villagers. The outcomes indicate that there is a positive impact of watershed management practices on the water resource potential as well as the crop production of the area. This suggests that persistent efforts in this direction may lead to sustainability of the watershed.

Keywords: four water concept, groundwater potential, irrigation potential, watershed management

Procedia PDF Downloads 340
17683 Interoperable Design Coordination Method for Sharing Communication Information Using Building Information Model Collaboration Format

Authors: Jin Gang Lee, Hyun-Soo Lee, Moonseo Park

Abstract:

The utilization of BIM and IFC allows project participants to collaborate across different areas by consistently sharing interoperable product information represented in a model. Comments or markups generated during the coordination process can be categorized as communication information, which can be shared in a less standardized manner. It can be difficult to manage and reuse such information compared to the product information in a model. The present study proposes an interoperable coordination method using BCF (the BIM Collaboration Format) for managing and sharing communication information during the BIM-based coordination process. A management function for coordination in the BIM collaboration system is developed to assess its ability to share communication information in BIM collaboration projects. This approach systematically links communication information generated during the coordination process to the building model and serves as a type of storage system for retrieving knowledge created during BIM collaboration projects.

Keywords: design coordination, building information model, BIM collaboration format, industry foundation classes

Procedia PDF Downloads 409
17682 Cryptocurrency as a Payment Method in the Tourism Industry: A Comparison of Volatility, Correlation and Portfolio Performance

Authors: Shu-Han Hsu, Jiho Yoon, Chwen Sheu

Abstract:

With the rapid growth of blockchain technology and cryptocurrency, various industries, including tourism, have added cryptocurrency as a payment method for their transactions. More and more tourism companies accept payments in digital currency for flights, hotel reservations, transportation, and more. For travellers and tourists, using cryptocurrency as a payment method has become a way to reduce costs and avoid risks. Understanding volatility dynamics and interdependencies between standard currencies and cryptocurrencies is important for appropriate financial risk management to assist policy-makers and investors in making more informed decisions. The purpose of this paper is to understand and explain the risk spillover effects between six major cryptocurrencies and the top ten most traded standard currencies, using data for the daily closing prices of cryptocurrencies and currency exchange rates from 7 August 2015 to 10 December 2019, with 1,133 observations. The diagonal BEKK model was used to analyze the co-volatility spillover effects between cryptocurrency returns and exchange rate returns, which are measures of how shocks to returns in different assets affect each other's subsequent volatility. The empirical results show that there are co-volatility spillover effects between the cryptocurrency returns and the GBP/USD, CNY/USD and MXN/USD exchange rate returns. Therefore, currencies (British Pound, Chinese Yuan and Mexican Peso) and cryptocurrencies (Bitcoin, Ethereum, Ripple, Tether, Litecoin and Stellar) are suitable for constructing a financial portfolio from an optimal risk management perspective and also for dynamic hedging purposes.
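
For reference, the standard diagonal BEKK(1,1) specification and the partial co-volatility spillover it implies, written in the usual notation (our summary of the standard model, not copied from the paper):

```latex
% Diagonal BEKK(1,1): C upper triangular, A and B diagonal.
H_t = C'C + A'\,\varepsilon_{t-1}\varepsilon_{t-1}'\,A + B' H_{t-1} B ,
\qquad
h_{ij,t} = c_{ij} + a_i a_j\,\varepsilon_{i,t-1}\varepsilon_{j,t-1} + b_i b_j\, h_{ij,t-1} .
% Partial co-volatility spillover of a shock to asset j on its covariance with asset i:
\frac{\partial h_{ij,t}}{\partial \varepsilon_{j,t-1}} = a_i a_j\,\varepsilon_{i,t-1},
\qquad i \neq j .
```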

Keywords: blockchain, co-volatility effects, cryptocurrencies, diagonal BEKK model, exchange rates, risk spillovers

Procedia PDF Downloads 128
17681 Dynamical Heterogeneity and Aging in Turbulence with a Nambu-Goldstone Mode

Authors: Fahrudin Nugroho, Halim Hamadi, Yusril Yusuf, Pekik Nurwantoro, Ari Setiawan, Yoshiki Hidaka

Abstract:

We investigate the Nikolaevskiy equation numerically using the exponential time differencing and pseudo-spectral methods. This equation develops a long-wavelength modulation that behaves as a Nambu–Goldstone mode, as well as a short-wavelength instability, and exhibits turbulence. Using autocorrelation analysis, the statistical properties of the turbulence governed by the equation are investigated. The autocorrelation function is then fitted with the Kohlrausch–Williams–Watts (KWW) expression. By varying the control parameter, we show a transition from a compressed to a stretched exponential for the autocorrelation function of Nikolaevskiy turbulence. The compressed exponential is an indicator of the existence of dynamical heterogeneity, while the stretched exponential indicates an aging process. Thereby, we reveal the existence of dynamical heterogeneity and aging in the turbulence governed by the Nikolaevskiy equation.
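
A minimal sketch of the KWW fitting step described above, using synthetic data and assumed parameter values rather than the paper's autocorrelation functions:

```python
# Fit the KWW form C(t) = exp(-(t/tau)^beta) to an autocorrelation curve;
# beta > 1 indicates a compressed exponential, beta < 1 a stretched exponential.
import numpy as np
from scipy.optimize import curve_fit

def kww(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

t = np.linspace(0.01, 10, 200)
data = kww(t, 2.0, 1.4) + 0.01 * np.random.default_rng(0).normal(size=t.size)

(tau, beta), _ = curve_fit(kww, t, data, p0=(1.0, 1.0))
print(f"tau = {tau:.3f}, beta = {beta:.3f}",
      "compressed" if beta > 1 else "stretched")
```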

Keywords: compressed exponential, dynamical heterogeneity, Nikolaevskiy equation, stretched exponential, turbulence

Procedia PDF Downloads 424
17680 Design and Control of an Integrated Plant for Simultaneous Production of γ-Butyrolactone and 2-Methyl Furan

Authors: Ahtesham Javaid, Costin S. Bildea

Abstract:

The design and plantwide control of an integrated plant in which the endothermic 1,4-butanediol dehydrogenation and the exothermic furfural hydrogenation are performed simultaneously in a single reactor is studied. The reactions can be carried out in an adiabatic reactor using a small hydrogen excess and with reduced parameter sensitivity. The plant is robust and flexible enough to allow different production rates of γ-butyrolactone and 2-methyl furan while keeping high product purities. Rigorous steady state and dynamic simulations are performed in AspenPlus and AspenDynamics to support the conclusions.

Keywords: dehydrogenation and hydrogenation, reaction coupling, design and control, process integration

Procedia PDF Downloads 328
17679 Towards a Sustainable Energy Future: Method Used in Existing Buildings to Implement Sustainable Energy Technologies

Authors: Georgi Vendramin, Aurea Lúcia, Yamamoto, Carlos Itsuo, Souza Melegari, N. Samuel

Abstract:

This article describes the development of a model that uses a method in which openings are represented by single glazing and double glazing. The model is based on a balance of purely theoretical equations and empirical data; simplified equations are derived through a synthesis of the measured data obtained from meteorological stations. The implementation of the model in an integrated building design tool is discussed in this article, to better address the requirements of comfort and energy efficiency in architecture and engineering. Sustainability, energy efficiency, and the integration of alternative energy systems and concepts are beginning to be incorporated into designs for new buildings and renovations of existing buildings, yet few means have existed to effectively validate the potential performance benefits of these design concepts. A degree-days method was used for the assessment of the energy performance of a building, showing that the architectural design should always consider the materials used and the size of the openings. The energy performance was obtained through the model, considering the location of the building, the Central Park Shopping Mall in the city of Cascavel - PR. Climatic data for this location were obtained and, in a second step, the coefficient of total heat loss of the pre-established building was obtained, thus evaluating the thermal comfort and energy performance. This means that openings in buildings in Cascavel - PR installed on the east side may be larger, because the glass added to the geometry of the architectural spaces helps the environment conserve energy.
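
A minimal sketch of a degree-days estimate as referenced above, with hypothetical temperatures, U-value, and glazed area (illustrative values, not the paper's data):

```python
# Heating degree-days from daily mean temperatures, then a simple
# transmission-loss estimate Q = U * A * HDD * 24.
import numpy as np

base_temp = 18.0                                               # assumed base temperature, degC
daily_mean = np.array([12.0, 14.5, 17.0, 19.0, 16.5, 13.0])    # hypothetical daily means, degC

hdd = np.clip(base_temp - daily_mean, 0, None).sum()           # heating degree-days, degC.day
U = 2.8        # overall heat loss coefficient of the glazing, W/(m2.K) (assumed)
A = 120.0      # glazed area, m2 (assumed)

Q_kwh = U * A * hdd * 24 / 1000                                # energy lost through the openings
print(f"HDD = {hdd:.1f} degC.day, Q = {Q_kwh:.1f} kWh")
```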

Keywords: sustainable design, energy modeling, design validation, degree-days methods

Procedia PDF Downloads 400
17678 Linear Stability of Convection in an Inclined Channel with Nanofluid Saturated Porous Medium

Authors: D. Srinivasacharya, Nidhi Humnekar

Abstract:

The goal of this research is to numerically investigate the convection of nanofluid flow in an inclined porous channel. Brownian motion and thermophoresis effects are accounted for in the nanofluid model. In addition, the flow in the porous region is governed by Brinkman's equation. The perturbed state leads to a generalized eigenvalue problem, which is obtained using normal mode analysis, and Chebyshev spectral collocation is used to solve this problem. For various values of the governing parameters, the critical wavenumber and critical Rayleigh number are calculated, and preferred modes are identified.
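
For illustration, a minimal sketch of Chebyshev spectral collocation applied to a generic second-derivative eigenvalue problem (not the stability equations of the paper; the construction follows Trefethen's standard differentiation matrix):

```python
# Chebyshev differentiation matrix on Gauss-Lobatto points, used to solve
# u'' = lambda * u with u(-1) = u(1) = 0 as a generic collocation example.
import numpy as np

def cheb(N):
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]           # Dirichlet conditions: drop boundary rows/columns
eigvals = np.sort(np.linalg.eigvals(D2).real)[::-1]
print(eigvals[:4])                  # should approximate -(k*pi/2)^2 for k = 1..4
```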

Keywords: Brinkman model, inclined channel, nanofluid, linear stability, porous media

Procedia PDF Downloads 100
17677 Jitter Based Reconstruction of Transmission Line Pulse Using On-Chip Sensor

Authors: Bhuvnesh Narayanan, Bernhard Weiss, Tvrtko Mandic, Adrijan Baric

Abstract:

This paper discusses a method to reconstruct internal high-frequency signals through subsampling techniques in an IC using an on-chip sensor. Although there are existing methods to internally probe and reconstruct high-frequency signals through subsampling techniques, these methods have been applicable mainly to synchronized systems. This paper demonstrates a method for making such non-intrusive on-chip reconstructions possible also in non-synchronized systems. The TLP pulse is used to demonstrate the experimental validation of the concept. The on-chip sensor measures the voltage at an internal node. The jitter in the input pulse causes a varying pulse delay with respect to the on-chip sampling command. By measuring this pulse delay and correlating it with the measured on-chip voltage, time-domain waveforms can be reconstructed, and the influence of the pulse on the internal nodes can be better understood.
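
A minimal sketch of our reading of the reconstruction idea, on synthetic data: jitter makes each repetition sample the pulse at a slightly different delay, and sorting the (measured delay, sampled voltage) pairs yields an equivalent-time waveform. Pulse shape and jitter range are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
pulse = lambda t: np.where((t > 2e-9) & (t < 12e-9), 1.0, 0.0)    # hypothetical 10 ns TLP

n_rep = 500
delay = 15e-9 * rng.random(n_rep)            # measured pulse delay w.r.t. sampling command
sampled = pulse(delay) + 0.02 * rng.normal(size=n_rep)            # on-chip sensor readings

order = np.argsort(delay)                    # equivalent-time reconstruction
t_rec, v_rec = delay[order], sampled[order]
print(t_rec[:5], v_rec[:5])
```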

Keywords: on-chip sensor, jitter, transmission line pulse, subsampling

Procedia PDF Downloads 130
17676 Improving Tower Grounding and Insulation Level vs. Line Surge Arresters for Protection of Subtransmission Lines

Authors: Navid Eghtedarpour, Mohammad Reza Hasani

Abstract:

Since renewable wind power plants are usually installed in mountain regions and high-altitude lands, they are often prone to lightning strikes and their hazardous effects. Although the transmission line is protected using guard wires in order to prevent lightning surges from striking the phase conductors, back-flashover may also occur due to the tower footing resistance. A combination of back-flashover corrective methods (tower-footing resistance reduction, insulation level improvement, and line arrester installation) is analyzed in this paper for back-flashover rate reduction of a double-circuit 63 kV line in the south region of Fars province. The line crosses a mountain region in some sections with a moderate keraunic level, whereas the tower-footing resistance is substantially high at some towers. Consequently, an exceptionally high back-flashover rate is recorded. A new method for insulation improvement is studied and employed in the current study. The method consists of using a composite-type creepage extender in the string. The effectiveness of this method for insulation improvement of the string is evaluated through experimental tests. Simulation results, together with monitoring of one year of operation of the 63 kV line, show that, due to technical, practical, and economic restrictions in operated sub-transmission lines, a combination of corrective methods can lead to an effective solution for the protection of transmission lines against lightning.

Keywords: lightning protection, BF rate, grounding system, insulation level, line surge arrester

Procedia PDF Downloads 117
17675 Improving Psychological Safety in Teaching and Social Organizations in Finland

Authors: Eija Raatikainen

Abstract:

The aim of the study is to examine psychological safety in the context of changing working life and continuous learning in social and educational organizations. The participants in the study are social workers and vocational teachers working as employees and supervisors in the capital region of Finland (public and private sectors). Research data were collected during 2022-2023 using the qualitative method of empathy-based stories (MEBS). Research participants were asked to write short stories about situations related to their work and work community. As researchers, we created and varied the framework narratives (MEBS) in line with the aim of the study and the theoretical background. The data were analyzed with content analysis. According to the results, the barriers to and prerequisites for psychological safety at work could be located in four different working culture dimensions. The work culture dimensions were named as follows: 1) a work culture focusing on interaction and emotional culture between colleagues, 2) a communal work culture, 3) a work culture that enables learning, and 4) a work culture focused on structures and operating models. All of these have detailed elements of barriers to and prerequisites for psychological safety at work. The results derived from the method can be utilized when working with a work community to discuss psychological safety at work. The method itself (MEBS) can also circumvent the way the sensitivity of the topic prevents open discussion and reflection on psychological safety at work, because it allows participants to imagine rather than just talk about and share their experiences directly. Additionally, the results of the study can offer a tool or framework for developing psychological safety at work.

Keywords: psychological safety, empathy, empathy-based stories, working life

Procedia PDF Downloads 57
17674 Studying the Intercalation of Low Density Polyethylene/Clay Nanocomposites after Different UV Exposures

Authors: Samir Al-Zobaidi

Abstract:

This study attempts to understand the effect of different UV irradiation methods on the intercalation of LDPE/MMT nanocomposites and their molecular behavior at a certain isothermal crystallization temperature. Three different methods of UV exposure were employed using a single composition of LDPE/MMT nanocomposites. All samples were annealed for 5 hours at a crystallization temperature of 100°C. The crystallization temperature was chosen to be at a large supercooling to ensure quick and complete crystallization. The raw LDPE material consisted of two stable phases, monoclinic and orthorhombic, according to XRD results. The thermal behavior of the two phases responded differently when the UV exposure method was changed; the monoclinic phase was more dependent on the method used compared to the orthorhombic phase. The intercalation of clay, as well as the non-isothermal crystallization temperature, also showed a clear dependency on the type of UV exposure. A third phase that is thermally less stable was also observed. Its response to UV irradiation was greater since it contains low molecular weight entities, which make it more vulnerable to any UV exposure.

Keywords: LDPE/MMT nanocomposites, crystallization, UV irradiation, intercalation

Procedia PDF Downloads 363
17673 Flood Early Warning and Management System

Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare

Abstract:

The Indian subcontinent is severely affected by floods that cause intense irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an Early Warning System for Flood Prediction and an efficient Flood Management System for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP using advanced computational tools/methods, viz. High-Performance Computing (HPC), Remote Sensing, GIS technologies, and open-source tools for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves shallow water equations using the finite volume method. Considering the complexity of the hydrological modeling and the size of the basins in India, it is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing technology provides a good computational means to overcome this issue for the construction of national-level or basin-level flash flood warning systems with high-resolution local-level warning analysis and a better lead time. High-performance computers with capacities of the order of teraflops and petaflops prove useful while running simulations over such big areas at optimum resolutions. In this study, a free and open-source, HPC-based 2-D hydrodynamic model, with the capability to simulate rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing CPU nodes from 45 to 135, which shows good scalability and performance enhancement. The simulated flood inundation spread and stage were compared with SAR data and CWC Observed Gauge data, respectively. The system shows good accuracy and better lead time suitable for flood forecasting in near-real-time. To disseminate warnings to the end user, a network-enabled solution is developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. The system effectively facilitates the management of post-disaster activities caused by floods, e.g., displaying spatial maps of the area affected, inundated roads, etc., and maintains a steady flow of information at all levels with different access rights depending upon the criticality of the information. It is designed to facilitate users in managing information related to flooding during critical flood seasons and analyzing the extent of the damage.
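
As a quick check of the scalability figures quoted above (values taken from the abstract, interpretation is ours):

```python
# Runtime dropped from 8 h to 3 h when CPU nodes increased from 45 to 135.
t_before, t_after = 8.0, 3.0        # hours
nodes_before, nodes_after = 45, 135

speedup = t_before / t_after                     # ~2.67x faster
scale = nodes_after / nodes_before               # 3x more nodes
efficiency = speedup / scale                     # parallel efficiency ~0.89
print(f"speedup = {speedup:.2f}, scaling efficiency = {efficiency:.2f}")
```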

Keywords: flood, modeling, HPC, FOSS

Procedia PDF Downloads 77
17672 Assessment of Interior Environmental Quality and Airborne Infectious Risk in a Commuter Bus Cabin by Using Computational Fluid Dynamics with Computer Simulated Person

Authors: Yutaro Kyuma, Sung-Jun Yoo, Kazuhide Ito

Abstract:

A commuter bus remains important as a means of networking public transportation between railway stations and terminals within cities. In some cases, the boarding time becomes longer, and the boarding rate tends to be higher, corresponding to the development of urban cities. The interior environmental quality, e.g. temperature and air quality, in a commuter bus is relatively heterogeneous and complex compared to that of an indoor environment in buildings, due to several factors: solar radiative heat, which comes from large-area windows; an inadequate ventilation rate caused by the high density of commuters; and metabolic heat generation from the travelers themselves. In addition, under conditions where many passengers ride in the enclosed space, contact and airborne infection risks have attracted considerable attention in terms of public health. From this point of view, it is essential to develop a prediction method for the assessment of interior environmental quality and infection risk in commuter bus cabins. In this study, we developed a numerical commuter bus model integrated with computer simulated persons to reproduce realistic indoor environment conditions with high occupancy during commuting. The computer simulated persons were newly designed considering different types of geometries, e.g., standing position, seating position, and individual differences. We conducted coupled computational fluid dynamics (CFD) analysis with radiative heat transfer analysis under steady-state conditions. Distributions of the heterogeneous air flow patterns, temperature, and moisture surrounding the human body under different ventilation systems were analyzed using the CFD technique, and skin surface temperature distributions were analyzed using a thermoregulation model integrated into the computer simulated person. Through these analyses, we discuss the interior environmental quality in specific commuter bus cabins. Further, the inhaled air quality of each passenger was also analyzed. This study may support the design of bus ventilation systems for improving the thermal comfort of occupants.

Keywords: computational fluid dynamics, CFD, computer simulated person, CSP, contaminant, indoor environment, public health, ventilation

Procedia PDF Downloads 238
17671 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products

Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad

Abstract:

The proposed method for the speciation, preconcentration, and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on complexation of Fe(II) with 1,10-phenanthroline (OP) as a complexing reagent immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and magnetic adsorbents under ultrasonic action. Then, the adsorbents were easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a proper reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplified the operation procedure and reduced the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 for Fe(II) was obtained with the modified nano magnetite. The detection limit and linear range of this method for iron were 1.0 and 9.0 - 175 ng.mL−1, respectively. Also, the relative standard deviation for five replicate determinations of 30.00 ng.mL−1 Fe2+ was 2.3%.

Keywords: alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample

Procedia PDF Downloads 282
17670 Improvement and Miniaturization of an RFID Patch Antenna by Inclusion of Complementary Metamaterials

Authors: Seif Naoui, Lassaad Latrach, Ali Gharsallah

Abstract:

This paper highlights a method for the miniaturization and improvement of a patch antenna by using complementary metamaterials. The method is a simple technique: a patch antenna structure with a complementary split ring resonator cell integrated into its surface. This resonator is placed at the middle of the radiating patch, in parallel with the transmission line and with a variable angle of orientation. The objective is to find the optimal angle at which the best results are obtained for improving the characteristics of the considered antenna. This design is aimed at traceability applications using wireless communication with RFID technology at the operating frequency of 2.45 GHz. Our contribution is based on the empirical studies presented in this article. All simulation results were obtained with CST Microwave Studio.

Keywords: complementary split ring resonators, computer simulation technology microwave studio, metamaterial patch antennas, microstrip patch antenna, radio frequency identification

Procedia PDF Downloads 428
17669 The Use of the Limit Cycles of Dynamic Systems for Formation of Program Trajectories of the Foot Points of the Anthropomorphous Robot

Authors: A. S. Gorobtsov, A. S. Polyanina, A. E. Andreev

Abstract:

The movement of the foot points of an anthropomorphous robot in space occurs along some stable trajectory of a known form. A large number of modifications to the methods of control of biped robots indicate the fundamental complexity of the problem of stability of the program trajectory and, consequently, of the stability of the control of deviations from this trajectory. Existing gait generators use piecewise interpolation of program trajectories. This leads to jumps in the acceleration at the boundaries of the sections. Another interpolation can be realized using differential equations with fractional derivatives. In this work, an approach to the synthesis of program trajectory generators is considered. The resulting system of nonlinear differential equations describes a smooth trajectory of movement having rectilinear sections. The method is based on the theory of the asymptotic stability of invariant sets. The stability of such systems in the area of localization of oscillatory processes is investigated. The boundary of the area is a bounded closed surface. In the corresponding subspaces of the oscillatory circuits, the resulting stable limit cycles are curves having rectilinear sections. The solution of the problem is carried out by means of the synthesis of a set of continuous smooth controls with feedback. The necessary geometry of the closed trajectories of movement is obtained due to the introduction of high-order nonlinearities in the control of the stabilization systems. The proposed method was used for the generation of movement trajectories of the foot points of the anthropomorphous robot. The synthesis of the robot's program movement was carried out by means of the inverse method.

Keywords: control, limit cycles, robot, stability

Procedia PDF Downloads 313
17668 Whale Optimization Algorithm for Optimal Reactive Power Dispatch Solution Under Various Contingency Conditions

Authors: Medani Khaled Ben Oualid

Abstract:

Most researchers have solved and analyzed the optimal reactive power dispatch (ORPD) problem under normal conditions. However, network collapses appear under contingency conditions. In this paper, ORPD under several contingencies is solved using the proposed whale optimization algorithm (WOA). To ensure the viability of the power system under contingency conditions, several critical cases are simulated in order to prevent and prepare the power system to face such situations. The results are obtained on the IEEE 30-bus test system for the solution of the ORPD problem, in which the control of bus voltages, tap positions of transformers, and reactive power sources is involved. Moreover, another method, namely Particle Swarm Optimization with Time Varying Acceleration Coefficient (PSO-TVAC), has been compared with the proposed technique. Simulation results indicate that the proposed WOA gives a remarkable solution in terms of effectiveness in the case of outages.
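
For illustration, a minimal sketch of the whale optimization algorithm on a generic objective; the paper applies it to ORPD with power-flow constraints, which are not reproduced here, and the parameter choices below are assumptions:

```python
import numpy as np

def woa(objective, dim, bounds, n_whales=20, n_iter=100, b=1.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_whales, dim))
    fit = np.apply_along_axis(objective, 1, X)
    best = X[fit.argmin()].copy()
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                         # decreases linearly from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):              # encircling the best solution
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                  # exploration around a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                      # spiral (bubble-net) update
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
        fit = np.apply_along_axis(objective, 1, X)
        if fit.min() < objective(best):
            best = X[fit.argmin()].copy()
    return best, objective(best)

print(woa(lambda x: np.sum(x**2), dim=5, bounds=(-10, 10)))
```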

Keywords: optimal reactive power dispatch, metaheuristic techniques, whale optimization algorithm, real power loss minimization, contingency conditions

Procedia PDF Downloads 80
17667 A Modest Proposal for Deep-Sixing Propositions in the Philosophy of Language

Authors: Patrick Duffley

Abstract:

Hanks (2021) identifies three Frege-inspired commitments concerning propositions that are widely shared across the philosophy of language: (1) propositions are the primary, inherent bearers of representational properties and truth-conditions; (2) propositions are neutral representations possessing a ‘content’ that is devoid of ‘force; (3) propositions can be entertained or expressed without being asserted. Hanks then argues that the postulate of neutral content must be abandoned, and the primary bearers of truth-evaluable representation must be identified as the token acts of assertoric predication that people perform when they are thinking or speaking about the world. Propositions are ‘types of acts of predication, which derive their representational features from their tokens.’ Their role is that of ‘classificatory devices that we use for the purposes of identifying and individuating mental states and speech acts,’ so that ‘to say that Russell believes that Mont Blanc is over 4000 meters high is to classify Russell’s mental state under a certain type, and thereby distinguish that mental state from others that Russell might possess.’ It is argued in this paper that there is no need to classify an utterance of 'Russell believes that Mont Blanc is over 4000 meters high' as a token of some higher-order utterance-type in order to identify what Russell believes; the meanings of the words themselves and the syntactico-semantic relations between them are sufficient. In our view what Hanks has accomplished in effect is to build a convincing argument for dispensing with propositions completely in the philosophy of language. By divesting propositions of the role of being the primary bearers of representational properties and truth-conditions and fittingly transferring this role to the token acts of predication that people perform when they are thinking or speaking about the world, he has situated truth in its proper place and obviated any need for abstractions like propositions to explain how language can express things that are true. This leaves propositions with the extremely modest role of classifying mental states and speech acts for the purposes of identifying and individuating them. It is demonstrated here however that there is no need whatsoever to posit such abstract entities to explain how people identify and individuate such states/acts. We therefore make the modest proposal that the term ‘proposition’ be stricken from the vocabulary of philosophers of language.

Keywords: propositions, truth-conditions, predication, Frege, truth-bearers

Procedia PDF Downloads 46
17666 Legal Issues of Food Security in Republic of Kazakhstan

Authors: G. T. Aigarinova

Abstract:

This article considers the legal issues of food security as a major component of national security of the republic. The problem of food security is the top priority of the economic policy strategy of any state, the effectiveness of this solution influences social, political, and ethnic stability in society. Food security and nutrition is everyone’s business. Food security exists when all people, at all times, have physical, social and economic access to sufficient safe and nutritious food that meets their dietary needs and food preferences for an active and healthy life. By analyzing the existing legislation in the area of food security, the author identifies weaknesses and gaps, suggesting ways to improve it.

Keywords: food security, national security, agriculture, public resources, economic security

Procedia PDF Downloads 404
17665 Incorporating Information Gain in Regular Expressions Based Classifiers

Authors: Rosa L. Figueroa, Christopher A. Flores, Qing Zeng-Treitler

Abstract:

A regular expression consists of a sequence of characters that describes a text pattern. Usually, in clinical research, regular expressions are manually created by programmers together with domain experts. Lately, there have been several efforts to investigate how to generate them automatically. This article presents a text classification algorithm based on regexes. The algorithm, named REX, was designed and then implemented as a simplified method for creating regexes to classify Spanish text automatically. In order to classify ambiguous cases, such as when multiple labels are assigned to a testing example, REX includes an information gain method. Two sets of data were used to evaluate the algorithm's effectiveness in clinical text classification tasks. The results indicate that the regular expression based classifier proposed in this work performs statistically better regarding accuracy and F-measure than Support Vector Machine and Naïve Bayes for both datasets.
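
A minimal sketch of the information gain computation for a binary "regex matches" feature, IG(Y; X) = H(Y) - H(Y|X); this is the generic formula, not REX's exact tie-breaking rule, and the example data are hypothetical:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, matches):
    """labels: class per example; matches: True/False regex match per example."""
    n = len(labels)
    gain = entropy(labels)
    for value in (True, False):
        subset = [y for y, m in zip(labels, matches) if m == value]
        if subset:
            gain -= len(subset) / n * entropy(subset)
    return gain

labels  = ["smoker", "smoker", "non", "non", "non", "smoker"]
matches = [True,     True,     False, False, True,  True]     # hypothetical regex hits
print(round(information_gain(labels, matches), 3))
```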

Keywords: information gain, regular expressions, smith-waterman algorithm, text classification

Procedia PDF Downloads 304
17664 Automated, Objective Assessment of Pilot Performance in Simulated Environment

Authors: Maciej Zasuwa, Grzegorz Ptasinski, Antoni Kopyt

Abstract:

Nowadays, flight simulators offer tremendous possibilities for safe and cost-effective pilot training through the utilization of powerful computational tools. Due to technology outpacing methodology, the vast majority of training-related work is done by human instructors. This makes assessment inefficient and vulnerable to instructors' subjectivity. The research presents an Objective Assessment Tool (gOAT) developed at the Warsaw University of Technology and tested on an SW-4 helicopter flight simulator. The tool uses a database of predefined manoeuvres, defined and integrated into the virtual environment. These were implemented based on the Aeronautical Design Standard Performance Specification Handling Qualities Requirements for Military Rotorcraft (ADS-33), with predefined Mission-Task-Elements (MTEs). The core element of the gOAT is an enhanced algorithm that provides the instructor with a new set of information: in detail, a set of objective flight parameters fused with a report about the psychophysiological state of the pilot. While the pilot performs the task, the gOAT system automatically calculates performance using the embedded algorithms, the data registered by the simulator software (position, orientation, velocity, etc.), as well as measurements of physiological changes of the pilot's psychophysiological state (temperature, sweating, heart rate). The complete set of measurements is presented on-line at the instructor's station and shown in a dedicated graphical interface. The presented tool is based on open-source solutions and is flexible for editing. Additional manoeuvres can easily be added using a guide developed by the authors, and MTEs can be changed by the instructor even during an exercise. The algorithm and measurements used allow not only basic stress-level measurement but also a significant reduction of the instructor's workload. The tool can be used for training purposes as well as periodic checks of the aircrew. Flexibility and ease of modification allow for wide-ranging further development and customization of the tool. Depending on the simulation purpose, gOAT can be adjusted to support simulators of aircraft, helicopters, or unmanned aerial vehicles (UAVs).

Keywords: automated assessment, flight simulator, human factors, pilot training

Procedia PDF Downloads 134
17663 Dynamic Stall Characterization of Low Reynolds Airfoil in Mars and Titan’s Atmosphere

Authors: Vatasta Koul, Vaibhav Sharma, Ayush Gupta, Rajesh Yadav

Abstract:

Exploratory missions to Mars and Titan have increased recently, with various endeavors to find an alternative home for humankind. The use of surface rovers has its limitations due to the rugged and uneven surfaces of these planetary bodies. The use of aerial robots requires the complete aerodynamic characterization of these vehicles in the atmospheric conditions of these planetary bodies. The dynamic stall phenomenon is extremely important for rotary wing performance under the low Reynolds numbers that can be encountered in the Martian and Titan atmospheres. The current research focuses on the aerodynamic characterization and exploration of the dynamic stall phenomenon of two different airfoils, viz. E387 and Selig-Donovan 7003, in the Martian and Titan atmospheres at low Reynolds numbers of 10000 and 50000. The two-dimensional numerical simulations are conducted using a commercially available finite volume solver with a multi-species non-reacting mixture of gases as the working fluid. The k-epsilon (k-ε) turbulence model is used to capture the unsteady flow separation and the effect of turbulence. The dynamic characteristics are studied at a fixed rotational rate between different constant extremes of angle of attack. This study of the airfoils at different low Reynolds numbers and atmospheric conditions on Mars and Titan will result in defining the aerodynamic characteristics of these airfoils for unmanned aerial missions for outer space exploration.

Keywords: aerodynamics, dynamic stall, E387, SD7003

Procedia PDF Downloads 119
17662 Low-Temperature Silanization of Medical Vials: Chemical Bonding and Performance

Authors: Yuanping Yang, Ruolin Zhou, Xingyu Liu, Lianbin Wu

Abstract:

The silanization of pharmaceutical glass packaging materials faces several challenges: the silicone oil high-temperature baking method consumes a lot of energy, and the silicone oil is generally only physically adsorbed on the inner surface of the medical vials, so proteins adsorb on the surface of the silicone oil and fall off, increasing the number of particles in the drug solution and bringing potential risks to patients. In this paper, a new silanization method is proposed. Highly efficient silanization is achieved by grafting trimethylsilyl groups onto the inner surface of medical vials by chemical bonding at low temperatures. The inner wall of the vial successfully obtained stable hydrophobicity, and the water contact angle of the surface reached 100°~110°. With the increase of silylating reagent concentration, the water resistance of the correspondingly treated vials increased gradually. This treatment can effectively reduce the risk of pH value increase and sodium ion leaching.

Keywords: low-temperature silanization, medical vials, chemical bonding, hydrophobicity

Procedia PDF Downloads 68
17661 Hydrogeochemical Assessment, Evaluation and Characterization of Groundwater Quality in Ore, South-Western, Nigeria

Authors: Olumuyiwa Olusola Falowo

Abstract:

One of the objectives of the Millennium Development Goals is to have sustainable access to safe drinking water and basic sanitation. In line with this objective, an assessment of groundwater quality was carried out in the Odigbo Local Government Area of Ondo State in November – February 2019 to assess the drinking, domestic, and irrigation uses of the water. Samples were collected from 30 randomly selected groundwater sources (16 shallow wells and 14 boreholes) and analyzed using the American Public Health Association methods for the examination of water and wastewater. Water quality index calculation and diagrams such as the Piper diagram, Gibbs diagram, and Wilcox diagram have been used to assess the groundwater, in conjunction with irrigation indices such as % sodium, sodium adsorption ratio, permeability index, magnesium ratio, Kelly ratio, and electrical conductivity. In addition, statistical principal component analysis was used to determine the homogeneity and the source(s) influencing the chemistry of the groundwater. The results show that all the parameters are within the permissible limits of the World Health Organization. The physico-chemical analysis of the groundwater samples indicates that the dominant major cations are, in decreasing order, Na+, Ca2+, Mg2+, K+, and the dominant anions are HCO3-, Cl-, SO42-, NO3-. The water quality index values vary, and good water (WQI of 50-75) accounts for 70% of the study area. The dominant groundwater facies revealed in this study are the non-carbonate alkali (primary salinity) facies, exceeding 50% (zone 7), and the transition zone in which no single cation-anion pair exceeds 50% (zone 9), while evaporation, rock-water interaction and precipitation, and silicate weathering are the dominant processes in the hydrogeochemical evolution of the groundwater. The study indicates that the waters were found within the permissible limits of the irrigation indices adopted and plot in the excellent category on the Wilcox diagram. In conclusion, the water in the study area is good/suitable for drinking, domestic, and irrigation purposes, with a low equivalent salinity concentration and moderate electrical conductivity.
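
For illustration, a minimal sketch of the standard irrigation indices named above, using generic formulas and hypothetical concentrations (in meq/L), not the study's data:

```python
from math import sqrt

Na, K, Ca, Mg, HCO3 = 2.1, 0.2, 1.8, 1.2, 2.5   # hypothetical sample, meq/L

sar      = Na / sqrt((Ca + Mg) / 2)                       # sodium adsorption ratio
pct_na   = 100 * (Na + K) / (Ca + Mg + Na + K)            # percent sodium
kelly    = Na / (Ca + Mg)                                 # Kelly ratio
mg_ratio = 100 * Mg / (Ca + Mg)                           # magnesium ratio
perm_idx = 100 * (Na + sqrt(HCO3)) / (Ca + Mg + Na)       # permeability index (Doneen)

print(f"SAR={sar:.2f}, %Na={pct_na:.1f}, KR={kelly:.2f}, "
      f"MR={mg_ratio:.1f}, PI={perm_idx:.1f}")
```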

Keywords: equivalent salinity concentration, groundwater quality, hydrochemical facies, principal component analysis, water-rock interaction

Procedia PDF Downloads 133
17660 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of the mean signals extracted from ICA components corresponding to 15 well-known networks to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to McDonald and Polman, and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired by a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRIs were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. The WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with the number of components set to 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a rank of the most predictive variables. Thus, we built two new classifiers only on the most important features, and we evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest value of lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensorial deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
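
A minimal sketch of the classification pipeline described above, on synthetic data rather than the authors' dataset (the original analysis was done in R): Random Forest with its Gini-based importances, and an SVM with recursive feature elimination. Since RFE needs coefficients, a linear-kernel SVM is used here for the ranking step (an assumption), before training the RBF-SVM on the selected feature.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))        # 37 subjects x 15 network mean signals (synthetic)
y = rng.integers(0, 2, size=37)      # 0 = control, 1 = early MS (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("RF top feature:", np.argmax(rf.feature_importances_), "accuracy:", rf.score(X_te, y_te))

rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
top = rfe.ranking_.argmin()          # the single most predictive network
svm = SVC(kernel="rbf").fit(X_tr[:, [top]], y_tr)
print("SVM top feature:", top, "accuracy:", svm.score(X_te[:, [top]], y_te))
```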

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 227
17659 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in its practical integration into medical solutions; for example, extracting clinical concepts from medical texts, such as medical conditions, medications, treatments, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes this process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that does not require costly labeled data or ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed); the system then builds a contextual model containing vector representations of concepts in the corpus in an unsupervised manner (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide: 'dry mouth,' 'itchy skin,' and 'blurred vision'); the system then matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, there is a need to extract medical concepts from unseen medical text. The system extracts key-phrases from the new text, then matches them against the complete set of terms from step 2, and the most semantically similar ones are annotated with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: "The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign for diabetes." Our evaluations show promising results for extracting concepts from medical corpora. The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts, and to automatically annotate a large number of texts (in step 3) for the classification/summarization of medical reports.
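
A minimal sketch of steps 1-2 as we read them, using gensim's Phrases and Word2Vec as a stand-in for the Phrase2Vec model (an assumption, as are the toy corpus and the seed terms); gensim 4 API is assumed:

```python
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases

# Step 1: build phrase-aware vectors from an in-domain corpus (tokenized sentences).
corpus = [
    ["patient", "reports", "dry", "mouth", "and", "blurred", "vision"],
    ["itchy", "skin", "and", "dry", "mouth", "were", "noted"],
    ["blurred", "vision", "with", "weight", "loss", "and", "fatigue"],
] * 50                                            # repeated so the toy statistics are usable
bigrams = Phrases(corpus, min_count=1, threshold=0.1)
phrased = [bigrams[sent] for sent in corpus]      # 'dry mouth' becomes the token 'dry_mouth'
model = Word2Vec(phrased, vector_size=50, window=5, min_count=1, epochs=50, seed=0)

# Step 2: expand a seed set of symptom terms by vector similarity.
seeds = [t for t in ("dry_mouth", "itchy_skin", "blurred_vision") if t in model.wv]
print(model.wv.most_similar(positive=seeds, topn=5))
```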

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 122