Search results for: computational methods
14704 Urban Runoff Modeling of Ungauged Volcanic Catchment in Madinah, Western Saudi Arabia
Authors: Fahad Alahmadi, Norhan Abd Rahman, Mohammad Abdulrazzak, Zulikifli Yusop
Abstract:
Runoff prediction of ungauged catchments is still a challenging task, especially in arid regions with a unique land cover such as volcanic basalt rocks, where geological weathering and fractures are highly significant. In this study, the Bathan catchment in Madinah, western Saudi Arabia, was selected for analysis. The aim of this paper is to evaluate different rainfall loss methods: the Soil Conservation Service curve number (SCS-CN), Green-Ampt, and initial and constant rate methods. Different direct runoff methods were also evaluated: the Soil Conservation Service dimensionless unit hydrograph (SCS-UH), the Snyder unit hydrograph, and the Clark unit hydrograph. The study showed the superiority of the SCS-CN loss method and the Clark unit hydrograph method for ungauged catchments where no observed runoff data are available.
Keywords: urban runoff modelling, arid regions, ungauged catchments, volcanic rocks, Madinah, Saudi Arabia
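As an illustration of the SCS-CN loss relation referred to in this abstract, the following minimal sketch computes direct runoff depth from rainfall depth and a curve number; the CN value and rainfall depth shown are assumed for illustration and are not results from the Bathan catchment study.

```python
def scs_cn_runoff(rainfall_mm: float, cn: float) -> float:
    """Direct runoff depth (mm) from the SCS curve number relation."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction (common 0.2*S assumption)
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Illustrative values only (not from the study):
print(scs_cn_runoff(rainfall_mm=60.0, cn=85.0))
```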
Procedia PDF Downloads 409
14703 Experimental Partial Discharge Localization for Internal Short Circuits of Transformers Windings
Authors: Jalal M. Abdallah
Abstract:
This paper presents experimental studies carried out on a three-phase transformer to investigate and develop transformer models that support testing procedures and that describe and evaluate the transformer dielectric condition, including methods such as partial discharge (PD) localization in windings. The measurements are based on transfer function methods applied to transformer windings using frequency response analysis (FRA). A number of test conditions were applied to obtain the sensitivity frequency responses of a transformer for different types of faults simulated in a particular phase. The frequency responses were analyzed for the sensitivity of the different test conditions in detecting and identifying the onset of small faults, which are sources of PD. In more detail, the aim is to explain the applicability and sensitivity of advanced PD measurements for small short circuits and their localization. The experimental results presented in the paper will help in understanding the sensitivity of FRA measurements in detecting various types of internal winding short circuits in the transformer.
Keywords: frequency response analysis (FRA), measurements, transfer function, transformer
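A winding transfer function of the kind used in FRA can be estimated from a measured excitation and response signal pair; the sketch below uses a standard H1 spectral estimator with SciPy. The sampling rate, excitation, and toy "response" are illustrative assumptions, not the paper's measurement setup.

```python
import numpy as np
from scipy import signal

def fra_transfer_function(x, y, fs):
    """H1 estimator: H(f) = Pxy(f) / Pxx(f) from input x and response y."""
    f, pxx = signal.csd(x, x, fs=fs, nperseg=1024)
    _, pxy = signal.csd(x, y, fs=fs, nperseg=1024)
    return f, pxy / pxx

fs = 1.0e6                        # assumed sampling rate (Hz)
x = np.random.randn(100_000)      # broadband excitation signal
y = 0.5 * np.roll(x, 5)           # toy "winding" response: attenuated and delayed
f, h = fra_transfer_function(x, y, fs)
print(np.abs(h[:5]))              # magnitude of the estimated transfer function
```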
Procedia PDF Downloads 284
14702 Multiple Medical Landmark Detection on X-Ray Scan Using Reinforcement Learning
Authors: Vijaya Yuvaram Singh V M, Kameshwar Rao J V
Abstract:
The challenge with the development of neural network-based methods in the medical domain is the availability of data. Anatomical landmark detection in the medical domain is the process of locating points of interest on a patient's X-ray scan. Most of the time this task is done manually by trained professionals, as it requires precision and domain knowledge. Traditionally, object detection-based methods are used for landmark detection. Here, we utilize reinforcement learning and a query-based method to train a single agent capable of detecting multiple landmarks. A deep Q network agent is trained to detect single and multiple landmarks present on the hip and shoulder from a patient's X-ray scan. A single agent is trained to find multiple landmarks, making it superior to having individual agents per landmark. For the initial study, five images of different patients were used as the environment, and the agent's performance was tested on two unseen images.
Keywords: reinforcement learning, medical landmark detection, multi-target detection, deep neural network
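A deep Q network for landmark detection typically maps an image patch around the agent's current position to Q-values for discrete movement actions. The sketch below is a minimal, generic PyTorch Q-network with epsilon-greedy action selection; the layer sizes, patch size, and action set are assumptions and do not reflect the authors' architecture.

```python
import random
import torch
import torch.nn as nn

ACTIONS = ["up", "down", "left", "right"]  # assumed 2D movement action set

class QNetwork(nn.Module):
    def __init__(self, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, patch):          # patch shape: (batch, 1, H, W)
        return self.net(patch)

def select_action(qnet, patch, epsilon=0.1):
    """Epsilon-greedy policy over the discrete movement actions."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(patch).argmax(dim=1).item())

qnet = QNetwork()
dummy_patch = torch.zeros(1, 1, 64, 64)   # crop around the agent's current position
print(ACTIONS[select_action(qnet, dummy_patch)])
```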
Procedia PDF Downloads 147
14701 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances
Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim
Abstract:
This paper presents power quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical applications and gathered field data are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating, and evaluating the waveforms of power disturbances in order to support preventive protection against system failures and the estimation of complex system problems. The examined signal filtering techniques are used for field waveform noise removal and feature extraction. Using extraction and learning classification techniques, the efficiency of recognizing the PQ disturbances was verified, with a focus on interactive modeling methods. The waveforms of eight selected disturbances are modeled with randomized parameters within the IEEE 1159 PQ ranges. The ranges, parameters, and weights are updated according to the field waveforms obtained. Currents undergo the same process as voltages to obtain the waveform features, apart from some ratings and filters. Changing loads cause distortion in the voltage waveform by drawing different patterns of current variation. In conclusion, PQ disturbances in the voltage and current waveforms show different patterns of variation and disturbance, and a modified technique based on symmetrical components in the time domain is proposed for PQ disturbance detection and subsequent classification. Our method is based on the fact that waveforms obtained from the suggested trigger conditions contain information useful for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further studies.
Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering
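For reference, the classical symmetrical-component decomposition underlying the proposed technique maps three phase quantities into zero-, positive-, and negative-sequence components. The sketch below shows the standard phasor-form Fortescue transform, whereas the paper works with a modified time-domain formulation; the phasor values are illustrative, not measured PQ data.

```python
import numpy as np

A = np.exp(2j * np.pi / 3)  # 120-degree rotation operator

def symmetrical_components(va, vb, vc):
    """Return (zero, positive, negative) sequence components of three phasors."""
    v0 = (va + vb + vc) / 3
    v1 = (va + A * vb + A**2 * vc) / 3
    v2 = (va + A**2 * vb + A * vc) / 3
    return v0, v1, v2

# Illustrative unbalanced per-unit phasors (not field data):
va = 1.0 + 0j
vb = 0.9 * np.exp(-2j * np.pi / 3)
vc = 1.1 * np.exp(+2j * np.pi / 3)
print([abs(v) for v in symmetrical_components(va, vb, vc)])
```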
Procedia PDF Downloads 191
14700 Forecasting Unemployment Rate in Selected European Countries Using Smoothing Methods
Authors: Ksenija Dumičić, Anita Čeh Časni, Berislav Žmuk
Abstract:
The aim of this paper is to select the most accurate forecasting method for predicting the future values of the unemployment rate in selected European countries. In order to do so, several forecasting techniques adequate for forecasting time series with a trend component were selected, namely double exponential smoothing (also known as Holt's method) and the Holt-Winters method, which accounts for trend and seasonality. The results of the empirical analysis showed that the optimal model for forecasting the unemployment rate in Greece was the Holt-Winters additive method. In the case of Spain, according to MAPE, the optimal model was the double exponential smoothing model. Furthermore, for Croatia and Italy the best forecasting model for the unemployment rate was the Holt-Winters multiplicative model, whereas in the case of Portugal the best model was the double exponential smoothing model. Our findings are in line with the European Commission's unemployment rate estimates.
Keywords: European Union countries, exponential smoothing methods, forecast accuracy, unemployment rate
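Holt's double exponential smoothing, one of the methods compared here, updates a level and a trend term at each step. The sketch below is a minimal implementation; the smoothing parameters and the series values are assumed for illustration and are not the values fitted in the study.

```python
def holt_forecast(y, alpha=0.5, beta=0.3, horizon=4):
    """Double exponential smoothing (Holt's linear trend method)."""
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# Illustrative quarterly unemployment rates (%), not the paper's data:
rates = [9.1, 9.4, 9.8, 10.1, 10.5, 10.9]
print(holt_forecast(rates))
```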
Procedia PDF Downloads 370
14699 An Assessment of Airport Collaborative Decision-Making System Using Predictive Maintenance
Authors: Faruk Aras, Melih Inal, Tansel Cinar
Abstract:
The coordination of airport staff, especially in the operations and maintenance departments, is important for airport operation. As a result, this coordination increases efficiency in all operations. Therefore, a Collaborative Decision-Making (CDM) system aims at improving the overall productivity of all operations by optimizing the use of resources and improving the predictability of actions. Increased productivity can be of major benefit for all airport operations, and it also increases cost-efficiency. This study explains how predictive maintenance using IoT (Internet of Things), predictive operations, and statistical data such as Mean Time To Failure (MTTF) improve airport terminal operations and the utilization of airport terminal equipment in collaboration with the collaborative decision-making system / Airport Operation Control Center (AOCC). Data generated by the predictive maintenance methods are retrieved and analyzed by maintenance managers to predict when a problem is about to occur. With that information, maintenance can be scheduled when needed. As an example, if the maintenance team collaborates with the AOCC, the AOCC operator has the chance to assign a new gate for which all the equipment, such as travellators, elevators, and escalators, is operational, since the maintenance team is aware of the health of the equipment thanks to predictive maintenance methods. Applying predictive maintenance methods based on analyzing the health of airport terminal equipment dramatically reduces the risk of downtime through on-time repairs. Calls can be classified as high priority, requiring urgent repair action; medium priority, requiring repair at the earliest opportunity; and low priority, allowing maintenance to be scheduled when convenient. In all cases, identifying potential problems early resulted in better allocation of airport terminal resources by the AOCC.
Keywords: airport, predictive maintenance, collaborative decision-making system, Airport Operation Control Center (AOCC)
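MTTF is computed from accumulated operating time and failure counts, and the three-level priority scheme described above can then be applied per piece of equipment. The sketch below is one possible way to express that rule; the thresholds and the escalator figures are assumptions, not the airport's actual data or the authors' exact classification logic.

```python
def mttf(total_operating_hours: float, n_failures: int) -> float:
    """Mean Time To Failure = accumulated operating time / number of failures."""
    return total_operating_hours / max(n_failures, 1)

def repair_priority(hours_since_last_failure: float, equipment_mttf: float) -> str:
    """Assumed rule: priority rises as operating time approaches the MTTF."""
    ratio = hours_since_last_failure / equipment_mttf
    if ratio >= 0.9:
        return "high (urgent repair action)"
    if ratio >= 0.6:
        return "medium (repair at earliest opportunity)"
    return "low (schedule when convenient)"

escalator_mttf = mttf(total_operating_hours=26000, n_failures=13)  # 2000 h
print(repair_priority(hours_since_last_failure=1850, equipment_mttf=escalator_mttf))
```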
Procedia PDF Downloads 367
14698 Effect of the Applied Bias on Mini-Band Structures in Dimer Fibonacci InAs/Ga1-XInXAs Superlattices
Authors: Z. Aziz, S. Terkhi, Y. Sefir, R. Djelti, S. Bentata
Abstract:
The effect of a uniform electric field across multi-barrier systems (InAs/InxGa1-xAs) is exhaustively explored by a computational model using the exact Airy function formalism and the transfer-matrix technique. In the case of the biased Dimer Fibonacci Height Barrier superlattice (DFHBSL) structure, a strong reduction in transmission properties was observed, and the width of the mini-band structure decreases linearly with the increase of the applied bias. This is due to the confinement of the states in the mini-band structure, which becomes increasingly important (Wannier-Stark effect).
Keywords: dimer fibonacci height barrier superlattices, singular extended state, exact Airy function and transfer matrix formalism, bioinformatics
Procedia PDF Downloads 293
14697 Formation of Miniband Structure in Dimer Fibonacci GaAs/Ga1-XAlXAs Superlattices
Authors: Aziz Zoubir, Sefir Yamina, Djelti Redouan, Bentata Samir
Abstract:
The effect of a uniform electric field across multi-barrier systems (GaAs/AlxGa1-xAs) is exhaustively explored by a computational model using the exact Airy function formalism and the transfer-matrix technique. In the case of the biased Dimer Fibonacci Height Barrier superlattice (DFHBSL) structure, a strong reduction in transmission properties was observed, and the width of the miniband structure decreases linearly with the increase of the applied bias. This is due to the confinement of the states in the miniband structure, which becomes increasingly important (Wannier-Stark effect).
Keywords: Dimer Fibonacci Height Barrier superlattices, singular extended states, exact Airy function, transfer matrix formalism
Procedia PDF Downloads 512
14696 Size-Reduction Strategies for Iris Codes
Authors: Jutta Hämmerle-Uhl, Georg Penn, Gerhard Pötzelsberger, Andreas Uhl
Abstract:
Iris codes contain bits with different entropy. This work investigates different strategies to reduce the size of iris code templates with the aim of reducing storage requirements and computational demand in the matching process. Besides simple sub-sampling schemes, a binary multi-resolution representation as used in the JBIG hierarchical coding mode is also assessed. We find that iris code template size can be reduced significantly while maintaining recognition accuracy. In addition, we propose a two-stage identification approach, using small-sized iris code templates in a pre-selection stage and full-resolution templates for final identification, which shows promising recognition behaviour.
Keywords: iris recognition, compact iris code, fast matching, best bits, pre-selection identification, two-stage identification
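Iris code matching is commonly a fractional Hamming distance over binary templates, and sub-sampling shrinks the template before matching. The sketch below illustrates both the sub-sampling and the two-stage (coarse pre-selection, then full-resolution) comparison on NumPy bit arrays; the sub-sampling factor, noise level, and decision threshold are assumptions, not the values evaluated in the paper, and masking bits are omitted.

```python
import numpy as np

def subsample(code: np.ndarray, factor: int = 4) -> np.ndarray:
    """Keep every 'factor'-th bit to shrink the template."""
    return code[::factor]

def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Fractional Hamming distance between two binary iris codes."""
    return float(np.count_nonzero(a != b)) / a.size

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, size=2048, dtype=np.uint8)
probe = enrolled.copy()
probe[rng.choice(2048, size=100, replace=False)] ^= 1   # simulate ~5% bit noise

# Stage 1: cheap pre-selection on small templates; Stage 2: full-resolution check.
coarse = hamming_distance(subsample(enrolled), subsample(probe))
full = hamming_distance(enrolled, probe)
print(coarse, full, full < 0.32)   # 0.32 is an assumed decision threshold
```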
Procedia PDF Downloads 444
14695 Exploring the Situational Approach to Decision Making: User eConsent on a Health Social Network
Authors: W. Rowan, Y. O’Connor, L. Lynch, C. Heavin
Abstract:
Situation Awareness can offer the potential for conscious dynamic reflection. In an era of online health data sharing, it is becoming increasingly important that users of health social networks (HSNs) have the information necessary to make informed decisions as part of the registration process and in the provision of eConsent. This research aims to leverage an adapted Situation Awareness (SA) model to explore users’ decision-making processes in the provision of eConsent. An HSN platform was used to investigate these behaviours. A mixed methods approach was taken, involving the observation of registration behaviours followed by a questionnaire and focus groups. Early results suggest that users are apt to automatically accept eConsent and only later consider the long-term implications of sharing their personal health information. Further steps are required to continue developing knowledge and understanding of this important eConsent process. The next step in this research will be to develop a set of guidelines for the improved presentation of eConsent on the HSN platform.
Keywords: eConsent, health social network, mixed methods, situation awareness
Procedia PDF Downloads 298
14694 Dairy Wastewater Treatment by Electrochemical and Catalytic Method
Authors: Basanti Ekka, Talis Juhna
Abstract:
Dairy industrial effluents originating from typical processing activities are composed of various organic and inorganic constituents, including proteins, fats, inorganic salts, antibiotics, detergents, sanitizers, pathogenic viruses, bacteria, etc. These contaminants are harmful not only to human beings but also to aquatic flora and fauna. Because the effluents contain such a broad range of contaminants, the specific targeted removal methods available in the literature are not viable solutions at the industrial scale. Therefore, in this ongoing research, a series of coagulation, electrochemical, and catalytic methods will be employed. The bulk coagulation and electrochemical methods can remove most of the contaminants, but some of the harmful chemicals may slip through; therefore, catalysts specifically designed and synthesized for this purpose will be employed for the removal of the targeted chemicals. In the context of the Latvian dairy industry, work is presently in progress on the characterization of dairy effluents by total organic carbon (TOC), Inductively Coupled Plasma Mass Spectrometry (ICP-MS) / Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES), High-Performance Liquid Chromatography (HPLC), Gas Chromatography-Mass Spectrometry (GC-MS), and Mass Spectrometry. After careful evaluation of the dairy effluents, a cost-effective natural coagulant will be employed prior to advanced electrochemical technologies such as electrocoagulation and electro-oxidation as a secondary treatment process. Finally, graphene oxide (GO) based hybrid materials will be used for post-treatment of the dairy wastewater, as graphene oxide has been widely applied in various fields such as environmental remediation and energy production due to the presence of various oxygen-containing groups. Modified GO will be used as a catalyst for the removal of the remaining contaminants after the electrochemical process.
Keywords: catalysis, dairy wastewater, electrochemical method, graphene oxide
Procedia PDF Downloads 148
14693 Spatial Data Mining: Unsupervised Classification of Geographic Data
Authors: Chahrazed Zouaoui
Abstract:
In recent years, the volume of geospatial information has been increasing due to the evolution of information and communication technologies; this information is often presented by geographic information systems (GIS) and stored in spatial databases (BDS). Classical data mining has revealed a weakness in extracting knowledge from these enormous amounts of data due to the particularity of spatial entities, which are characterized by their interdependence (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among the methods of this process, we distinguish the monothematic and the thematic. Geo-clustering is one of the main tasks of spatial data mining and belongs to the monothematic methods. It groups similar geo-spatial entities into the same class and assigns more dissimilar entities to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking account of the particularity of geo-spatial data. Two approaches to geo-clustering exist: dynamic processing of the data, which involves applying algorithms designed for the direct treatment of spatial data, and the approach based on spatial data pre-processing, which consists of applying classical clustering algorithms to pre-processed data (by integration of the spatial relationships). This approach (based on pre-treatment) is quite complex in many cases, so the search for approximate solutions involves the use of approximation algorithms; among these, we are interested in dedicated approaches (partitioning and density-based clustering methods) and the bees algorithm (a biomimetic approach). Our study proposes to address this problem by using different algorithms for automatically detecting geo-spatial neighborhoods in order to implement geo-clustering by pre-treatment, and by applying the bees algorithm to this problem for the first time in the geo-spatial field.
Keywords: mining, GIS, geo-clustering, neighborhood
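As a point of reference for the density-based family of clustering methods mentioned above, the sketch below clusters geographic points with DBSCAN from scikit-learn. It is a generic illustration only, not the neighborhood-detection or bees algorithm proposed in the study; the coordinates and parameters are made up.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative (x, y) coordinates of geo-spatial entities (not real data).
points = np.array([
    [0.0, 0.0], [0.2, 0.1], [0.1, 0.3],      # dense group 1
    [5.0, 5.1], [5.2, 4.9], [4.9, 5.3],      # dense group 2
    [10.0, 0.0],                             # isolated point -> noise
])

# eps defines the neighborhood radius, min_samples the density threshold.
labels = DBSCAN(eps=0.6, min_samples=2).fit_predict(points)
print(labels)   # entities in the same class share a label; -1 marks noise
```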
Procedia PDF Downloads 376
14692 A Literature Review on Sustainability Appraisal Methods for Highway Infrastructure Projects
Authors: S. Kaira, S. Mohamed, A. Rahman
Abstract:
Traditionally, highway infrastructure projects are initiated based on their economic benefits; thereafter, environmental, social, and governance impacts are addressed discretely for the project selected from a set of pre-determined alternatives. When opting for cost-benefit analysis (CBA), multi-criteria decision-making (MCDM) has been used as the default assessment tool. However, this tool has been critiqued because it does not mimic the real-world dynamic environment. Indeed, this is because public-sector projects like highways experience intense exposure to dynamic environments. Therefore, it is essential to appreciate the impacts of various dynamic factors (factors that change or progress with the system) on project performance. Thus, this paper presents various sustainability assessment tools that have been developed globally to determine the sustainability performance of infrastructure projects during the design, procurement, and commissioning phases. Identification of the current gaps in the available assessment methods offers the potential to add a prominent part of knowledge to the field of road project development systems and procedures generally used by road agencies.
Keywords: dynamic impact factors, micro and macro factors, sustainability assessment framework, sustainability performance
Procedia PDF Downloads 143
14691 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations
Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra
Abstract:
The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak), or dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they can misrepresent the absolute magnitude of force generated by the muscle and impact the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS); the bioelectric methods used were normalization to the mean and peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability was compared between disparate cohorts of OLD (76.6 yrs, N = 11) and YOUNG (26.6 yrs, N = 11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb. EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 with p < 0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated measures ANOVA showed a main effect for age in MG for EMGPeak, but no other main effects were observed. Age-by-phase interactions in EMG amplitude between YOUNG and OLD resulted in different statistical interpretations across methods. EMGTS normalization characterized the fewest differences (four phases across all five muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, the representation of inter-individual variability, was greatest for EMGTS and lowest for EMGMean, while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization retains inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
Keywords: electromyography, EMG normalization, functional EMG, older adults
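The two bioelectric schemes compared here (EMGPeak and EMGMean) simply rescale the phase-averaged envelope by the signal's own peak or mean. The sketch below shows that rescaling on a toy 16-bin gait-cycle profile; the numbers are illustrative, not participant data.

```python
import numpy as np

def normalize_emg(envelope: np.ndarray, method: str = "peak") -> np.ndarray:
    """Normalize a phase-averaged EMG envelope to its own peak or mean."""
    if method == "peak":
        return envelope / envelope.max()
    if method == "mean":
        return envelope / envelope.mean()
    raise ValueError("method must be 'peak' or 'mean'")

# Toy phase-averaged envelope over 16 gait-cycle bins (arbitrary units).
bins = np.array([5, 8, 14, 22, 30, 26, 18, 12, 9, 7, 6, 5, 4, 4, 4, 5], dtype=float)
emg_peak = normalize_emg(bins, "peak")   # values in [0, 1], peak = 1
emg_mean = normalize_emg(bins, "mean")   # values expressed relative to the mean
print(emg_peak.max(), emg_mean.mean())
```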
Procedia PDF Downloads 96
14690 Propane Dehydrogenation with Better Stability by a Modified Pt-Based Catalyst
Authors: Napat Hataivichian
Abstract:
The effect of transition metal doping on a Pt/Al2O3 catalyst used in the propane dehydrogenation reaction at 500˚C was studied. The preparation methods investigated were sequential impregnation (Pt followed by the 2nd metal, or the 2nd metal followed by Pt) and co-impregnation. The metal contents of these catalysts were fixed at a Pt-to-2nd-metal weight ratio of around 0.075. These catalysts were characterized by N2-physisorption, TPR, CO-chemisorption, and NH3-TPD. It was found that the impregnated 2nd metal had an effect on the reducibility of Pt due to its interaction with the transition-metal-containing structure. This was in agreement with the CO-chemisorption result that the amount of Pt metal, which results from the reduction of Pt species, was decreased. The total acidity of the bimetallic catalysts is decreased, but the strong acidity is slightly increased. It was found that the stability of the bimetallic catalysts prepared by co-impregnation and by sequential impregnation with the 2nd metal impregnated before Pt was better than that of the monometallic (undoped Pt) catalyst, due to the formation of Pt sites located on the transition-metal-oxide-modified surface. Among all preparation methods, sequential impregnation with Pt impregnated before the 2nd metal gave the worst stability, because this catalyst lacked the modified Pt sites and some fraction of the Pt sites was covered by the 2nd metal.
Keywords: alumina, dehydrogenation, platinum, transition metal
Procedia PDF Downloads 317
14689 A Two Stage Stochastic Mathematical Model for the Tramp Ship Routing with Time Windows Problem
Authors: Amin Jamili
Abstract:
Nowadays, the majority of international trade in goods is carried by sea, and especially by ships deployed in the industrial and tramp segments. This paper addresses routing tramp ships and determining their schedules, including the arrival times at the ports, the berthing times at the ports, and the departure times, at an operational planning level. At the operational planning level, the weather can be forecast almost exactly; however, on some routes some uncertainties may remain. In this paper, the voyaging times between some of the ports are considered to be uncertain. To that end, a two-stage stochastic mathematical model is proposed. Moreover, a case study is tested with the presented model. The computational results show that this mathematical model is promising and can produce acceptable solutions.
Keywords: routing, scheduling, tramp ships, two-stage stochastic model, uncertainty
Procedia PDF Downloads 441
14688 Manodharmam: A Scientific Methodology for Improvisation and Cognition in Carnatic Music
Authors: Raghavi Janaswamy, Saraswathi K. Vasudev
Abstract:
Music is ubiquitous in human lives. Ever since the fetus hears sound inside the mother’s womb and later, upon birth, the baby experiences alluring sounds, the curiosity of learning emanates and evokes exploration. Music is an education rather than mere entertainment. The intricate balance between music, education, and entertainment has been well recognized by the scientific community and is being explored as a viable tool to understand and improve human cognition. There are seven basic swaras (notes), Sa, Ri, Ga, Ma, Pa, Da, and Ni, in the Carnatic music system, analogous to C, D, E, F, G, A, and B of the Western system. Carnatic music builds on the conscious use of microtones, gamakams (oscillation), and rendering styles that evolved over centuries and established its stance. The complex but erudite raga system has been designed with elaborate experiments on srutis (musical sounds) and human perception abilities. In parallel, ‘rasa’, the emotions evoked by certain srutis and hence by the ragas, has been solidified along with the power of language in combination with the musical sounds. Carnatic music branches out into Kalpita sangeetam (pre-composed music) and Manodharma sangeetam (improvised music). This article explores Manodharma sangeetam and its subdivisions, such as raga alapana, swara kalpana, neraval, and ragam-tanam-pallavi (RTP). The intrinsic mathematical strategies in its practice methods for improvising music are explored in detail with concert examples. Techniques of swara weaving for swara kalpana rendering and methods of alapana development are also discussed at length, with an emphasis on the impact on human cognitive abilities. The articulation of the outlined conscious practice methods not only helps to leave a long-lasting melodic impression on the listeners but also triggers cognitive development.
Keywords: Carnatic, Manodharmam, music cognition, Alapana
Procedia PDF Downloads 207
14687 Doing Cause-and-Effect Analysis Using an Innovative Chat-Based Focus Group Method
Authors: Timothy Whitehill
Abstract:
This paper presents an innovative chat-based focus group method for collecting qualitative data to construct a cause-and-effect analysis in business research. This method was developed in response to the research and data collection challenges posed by the Covid-19 outbreak in the United Kingdom during 2020-21. The paper discusses the methodological approach and builds a contemporary argument for its effectiveness in exploring cause-and-effect relationships in the context of focus group research, systems thinking, and problem structuring methods. The pilot for this method was conducted between October 2020 and March 2021 and collected more than 7,000 words of chat-based data, which was used to construct a consensus-drawn cause-and-effect analysis. This method was developed in support of an ongoing Doctorate in Business Administration (DBA) thesis, which uses Design Science Research methodology to operationalize organisational resilience in UK construction sector firms.
Keywords: cause-and-effect analysis, focus group research, problem structuring methods, qualitative research, systems thinking
Procedia PDF Downloads 226
14686 Retrofitting of Bridge Piers against the Scour Damages: Case Study of the Marand-Soofian Route Bridge
Authors: Shatirah Akib, Hossein Basser, Hojat Karami, Afshin Jahangirzadeh
Abstract:
Bridge piers constructed in the course of high-water rivers cause variations in the flow patterns. This variation is mostly a result of the change in the river cross-section: by decreasing the river cross-section, bridge piers significantly affect the flow patterns. Once the flow approaches the piers, the streamlines change their order, causing the appearance of different flow patterns around the bridge piers. New flow patterns are created depending on the geometry and the other technical characteristics of the piers. One of the most significant consequences of this phenomenon is the scour generated around the bridge piers, which threatens the safety of the structure. In determining the properties of scour holes, finding the maximum scour depth is an important factor. In this manuscript, a numerical simulation of the scour around the Marand-Soofian route bridge piers has been carried out via the SSIIM 2.0 software, and the maximum scour depth has subsequently been obtained. Finally, methods for retrofitting bridge piers against scour and for decreasing the amount of scour are offered.
Keywords: scour, bridge pier, numerical simulation, SSIIM 2.0
Procedia PDF Downloads 478
14685 Determination of Verapamil Hydrochloride in the Tablet and Injection Solution by the Verapamil-Sensitive Electrode and Possibilities of Application in Pharmaceutical Analysis
Authors: Faisal A. Salih, V. V. Egorov
Abstract:
Verapamil is a drug used in medicine as a calcium channel blocker for arrhythmia, angina, and hypertension. In this study, a verapamil-selective electrode was prepared, and the concentrations of the components in the membrane were as follows: PVC (32.8 wt %), O-NPhOE (66.6 wt %), and KTPClPB (0.6 wt %, or approximately 0.01 M). The inner solution containing 1 x 10⁻³ M verapamil hydrochloride was introduced, and the electrodes were conditioned overnight in a 1 x 10⁻³ M verapamil hydrochloride solution in 1 x 10⁻³ M orthophosphoric acid. These studies demonstrated that O-NPhOE and KTPClPB are the best plasticizer and ion exchanger, and that both direct potentiometry and potentiometric titration methods can be used for the determination of verapamil hydrochloride in tablets and injection solutions. Normalized weights of verapamil per tablet (80.4±0.2, 80.7±0.2, 81.0±0.4 mg) were determined by direct potentiometry and potentiometric titration. Weights of verapamil per average tablet weight determined by direct potentiometry and potentiometric titration for the same set of tablets were 80.4±0.2 and 80.7±0.2 mg, respectively. The masses of verapamil in solutions for injection, determined by direct potentiometry for two ampoules from one set, were 5.00±0.015 and 5.004±0.006 mg. In all cases, good reproducibility and excellent correspondence with the declared quantities were observed.
Keywords: verapamil, potentiometry, ion-selective electrode, lipophilic physiologically active amines
Procedia PDF Downloads 90
14684 An Exact Algorithm for Location–Transportation Problems in Humanitarian Relief
Authors: Chansiri Singhtaun
Abstract:
This paper proposes a mathematical model and examines the performance of an exact algorithm for a location–transportation problem in humanitarian relief. The model determines the number and location of distribution centers in a relief network, the amount of relief supplies to be stocked at each distribution center, and the vehicles to take the supplies to meet the needs of disaster victims, under capacity restrictions and transportation and budgetary constraints. The computational experiments are conducted on generated problems of various sizes. A branch-and-bound algorithm is applied to these problems. The results show that this algorithm can solve problems of up to three candidate locations with five demand points, and of one candidate location with up to twenty demand points, without premature termination.
Keywords: disaster response, facility location, humanitarian relief, transportation
Procedia PDF Downloads 453
14683 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics
Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí
Abstract:
A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest in order to achieve efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. 3D fabric reinforcement is of particular interest because of the versatility of the weaving pattern, with binder yarn and in-plane yarn arrangements that make it possible to manufacture thick composite parts, overcome limitations due to delamination, improve toughness, etc. To predict the permeability based on the available pore spaces between the yarns, unit-cell-based computational fluid dynamics models have been using the Stokes-Darcy model. Typically, the preform consists of an arrangement of yarns with spacing on the order of mm, wherein each yarn consists of thousands of filaments with spacing on the order of μm. The fluid flow during infusion exchanges mass between the intra- and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores in the yarn. Several studies have employed the Brinkman equation to take into account the flow through dual-scale porosity reinforcements to estimate their permeability. Furthermore, to reduce the computational effort of dual-scale flow, scale separation criteria based on the ratio of yarn permeability to yarn spacing have also been proposed to distinguish the dual-scale and negligible micro-scale flow regimes for the prediction of mesoscale permeability. In the present work, the key parameters for identifying the influence of intra-yarn permeability on the mesoscale permeability have been investigated through a systematic study of weft and warp yarn spacing in the plain weave, as well as the position of the binder yarn and the number of in-plane yarn layers in the 3D weave fabric. The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale separation criterion has been established based on various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarns from Gebart's model. It was observed that the mesoscale permeability Kxx varies within 30% when isotropic porous yarn is considered for a 3D fabric with binder yarn. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding
Procedia PDF Downloads 100
14682 Mutual Authentication for Sensor-to-Sensor Communications in IoT Infrastructure
Authors: Shadi Janbabaei, Hossein Gharaee Garakani, Naser Mohammadzadeh
Abstract:
The Internet of Things is a new concept whose emergence has made sensors ubiquitous in human life, so that at any time, data are collected, processed, and transmitted by these sensors. In order to establish a secure connection, the first challenge is authentication between sensors. However, this authentication must also satisfy several requirements so that it is done properly. Anonymity, untraceability, and being lightweight are among the issues that need to be considered. In this paper, we have evaluated existing authentication protocols and analyzed the security vulnerabilities found in them. Then an improved lightweight authentication protocol for sensor-to-sensor communications is presented, which uses a hash function and logical operators. The analysis of the protocol shows that the security requirements have been met and that the protocol is resistant to various attacks. In the end, by decreasing the number of computational cost functions, it is argued that the protocol is lighter than before.
Keywords: anonymity, authentication, Internet of Things, lightweight, un-traceability
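A hash-and-XOR challenge-response exchange of the general kind described can be sketched as follows. This is a simplified illustration of the idea only, not the protocol proposed in the paper; the pre-shared key, nonce sizes, and message formats are assumptions.

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """Hash function used for the challenge-response values."""
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

shared_key = os.urandom(32)          # pre-shared key between sensors A and B

# Sensor A -> B: fresh nonce masked with a key hash (hidden from eavesdroppers).
nonce_a = os.urandom(32)
msg1 = xor(nonce_a, h(shared_key))

# Sensor B: recovers A's nonce, proves key knowledge, contributes its own nonce.
recovered_a = xor(msg1, h(shared_key))
nonce_b = os.urandom(32)
msg2 = (h(shared_key, recovered_a, nonce_b), xor(nonce_b, h(shared_key, recovered_a)))

# Sensor A: verifies B, then proves itself with a response over both nonces.
nb = xor(msg2[1], h(shared_key, nonce_a))
assert msg2[0] == h(shared_key, nonce_a, nb)        # B authenticated to A
msg3 = h(shared_key, nb, nonce_a)
assert msg3 == h(shared_key, nonce_b, recovered_a)  # A authenticated to B
print("mutual authentication succeeded")
```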
Procedia PDF Downloads 297
14681 Commercialization of Technologies, Productivity and Problems of Technological Audit in the Russian Economy
Authors: E. A. Tkachenko, E. M. Rogova, A. S. Osipenko
Abstract:
The problems of technological development for the Russian Federation take on special significance in the context of modernization of the production base. The complexity of the Russian economy's position is that it cannot be fully classed as a developing economy. Russia is a strong industrial power that has gone through a process of destructive de-industrialization under the conditions of a changing economic and political structure. The need to find ways toward re-industrialization is not a unique task for the economies of industrially developed countries. Under the influence of production outsourcing over 20 years, the industrial potential of the world's leading economies regressed against the backdrop of the ascent of China, a new industrial giant. Therefore, the methods, tools, and techniques utilized for the industrial renaissance in the EU may be used to achieve a technological leap in the Russian Federation, especially since the time gap of 5-7 years makes it possible to analyze best practices and use those technology transfer tools that have shown the greatest efficiency. In this article, methods of technology transfer are analyzed, the role of technological audit is justified, and the factors that influence the successful commercialization of technologies are analyzed.
Keywords: technological transfer, productivity, technological audit, commercialization of technologies
Procedia PDF Downloads 218
14680 Developing the Potential of Parking Tax and Parking Retribution Revenues: Case Study in Bekasi City
Authors: Ivan Yudianto
Abstract:
The research objectives are to analyze the factors that impede Parking Tax and Parking Retribution collection in the Bekasi City Government, to analyze the factors that can increase local own revenue from the parking tax and parking retribution sector, to analyze the monitoring of parking retribution collection by the Bekasi City Government, and to analyze the Bekasi City Government's strategies through the preparation of a roadmap and action plan to increase parking tax and parking retribution revenues. The approach used in this research is qualitative. Qualitative research is used because the problem is not yet clear, the object to be studied is holistic, complex, and dynamic, and the relationships among the phenomena are interactive. The methods of data collection and analysis were in-depth interviews, participant observation, documentary materials, literature review, and triangulation, as well as newer methods such as visual materials and internet browsing. The results showed several obstructing factors: parking taxpayers do not disclose the actual parking revenue; parking taxpayers are late in paying or do not pay the Parking Tax; many parking locations are controlled by illegal organizations; there is a shortage of human resources in charge of collecting and supervising the Parking Tax and Parking Retribution in the Bekasi City Government; and monitoring of Parking Tax and Parking Retribution collection is not scheduled on a regular basis. Several strategic priorities are proposed to develop the potential of the Parking Tax and Parking Retribution in the Bekasi City Government: increased control and monitoring of parking taxpayers; forming a team of auditors to audit parking taxpayers; persuasive and educative law enforcement to reduce taxpayer non-compliance; strict sanctions against disobedient parking taxpayers; revising mayoral regulations on parking locations in Bekasi City; rationalizing the Parking Retribution revenue target; taking over roadside parking locations controlled by individuals or specific groups; and drafting regional regulations on parking subscriptions.
Keywords: local own revenue, parking retribution, parking tax, parking taxpayer
Procedia PDF Downloads 330
14679 Numerical Investigation of Natural Convection of Pine, Olive and Orange Leaves
Authors: Ali Reza Tahavvor, Saeed Hosseini, Nazli Jowkar, Behnam Amiri
Abstract:
Heat transfer of leaves is a crucial factor in the optimal operation of metabolic functions in plants. In order to quantify this phenomenon in different leaves and investigate the influence of leaf shape on heat transfer, natural convection for pine, orange, and olive leaves was simulated as representatives of different groups of leaf shapes. CFD techniques were used in this simulation with the purpose of calculating the heat transfer of leaves under similar environmental conditions. The problem was simulated under steady-state, three-dimensional conditions. From the obtained results, it was concluded that the heat fluxes of all three leaves are almost identical; however, the total heat transfer rate has its highest and lowest values for orange and pine leaves, respectively.
Keywords: computational fluid dynamic, heat flux, heat transfer, natural convection
Procedia PDF Downloads 367
14678 Application of Artificial Neural Network Technique for Diagnosing Asthma
Authors: Azadeh Bashiri
Abstract:
Introduction: Lack of proper diagnosis and inadequate treatment of asthma leads to physical and financial complications. This study aimed to use data mining techniques and to create a neural network intelligent system for the diagnosis of asthma. Methods: The study population consists of patients who had visited one of the lung clinics in Tehran. Data were analyzed using the SPSS statistical tool, and Pearson's chi-square coefficient was the basis of decision making for data ranking. The considered neural network is trained using the backpropagation learning technique. Results: According to the analysis performed with SPSS to select the top factors, 13 effective factors were selected. The data were combined in various forms, so different models were built for training and testing the networks, and in all modes, the network was able to predict all cases correctly (100%). Conclusion: Using data mining methods before designing the system structure, in order to reduce the data dimension and choose the optimal data, leads to a more accurate system. Therefore, considering data mining approaches is necessary, given the nature of medical data.
Keywords: asthma, data mining, Artificial Neural Network, intelligent system
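A backpropagation-trained network over the 13 selected factors can be sketched with scikit-learn's MLPClassifier. This is a generic illustration with synthetic data, not the authors' trained model; the feature values, labels, and layer sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((200, 13))                  # 13 selected factors per patient (synthetic)
y = (X[:, 0] + X[:, 5] > 1.0).astype(int)  # synthetic asthma / no-asthma label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small feed-forward network trained with backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```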
Procedia PDF Downloads 279
14677 A Numerical Simulation of Arterial Mass Transport in Presence of Magnetic Field-Links to Atherosclerosis
Authors: H. Aminfar, M. Mohammadpourfard, K. Khajeh
Abstract:
This paper has focused on the most important parameters in LSC uptake: the inlet Reynolds number and the Schmidt number in the presence of a non-uniform magnetic field. The magnetic field arises from a thin current-carrying wire placed perpendicular to the arterial blood vessel. According to the results of this study, applying a magnetic field can be a treatment for atherosclerosis by reducing LSC along the vessel wall. A homogeneous porous layer has been considered as the arterial wall. Blood flow has been considered laminar and incompressible, containing ferrofluid (blood and 4 vol% Fe₃O₄), under steady-state conditions. A numerical solution of the governing equations was obtained using the single-phase model and the control volume technique for the flow field.
Keywords: LDL surface concentration (LSC), magnetic field, computational fluid dynamics, porous wall
Procedia PDF Downloads 411
14676 CE Method for Development of Japan's Stochastic Earthquake Catalogue
Authors: Babak Kamrani, Nozar Kishi
Abstract:
A stochastic catalog represents the events module of earthquake loss estimation models. It includes a series of events with different magnitudes and corresponding frequencies/probabilities. For the development of the stochastic catalog, random or uniform sampling methods are used to sample the events from the seismicity model. To cover the entire Magnitude Frequency Distribution (MFD), a huge number of events must be generated with the above-mentioned methods. The Characteristic Event (CE) method instead chooses the events based on the interests of the insurance industry. We divide the MFD of each source into bins and have chosen the bins based on the probabilities of interest to the insurance industry. First, we collected the information for the available seismic sources. Sources are divided into fault sources, subduction sources, and events without a specific fault source. We developed the MFD for each individual and areal source based on the seismicity of the source. Afterward, we calculated the CE magnitudes based on the desired probability. To develop the stochastic catalog, we also introduced uncertainty in the location of the events.
Keywords: stochastic catalogue, earthquake loss, uncertainty, characteristic event
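The MFD of a source is often a Gutenberg-Richter relation, and the CE idea amounts to splitting that distribution into magnitude bins and representing each bin by one characteristic event carrying the bin's aggregate rate. The sketch below illustrates this; the a/b values, magnitude range, and bin count are illustrative assumptions, not Japan-specific parameters from the study.

```python
import numpy as np

def gr_cumulative_rate(m, a=4.0, b=1.0):
    """Gutenberg-Richter: annual rate of events with magnitude >= m."""
    return 10.0 ** (a - b * m)

def characteristic_events(m_min=5.0, m_max=8.0, n_bins=6, a=4.0, b=1.0):
    """One representative (characteristic) event per magnitude bin."""
    edges = np.linspace(m_min, m_max, n_bins + 1)
    events = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        rate = gr_cumulative_rate(lo, a, b) - gr_cumulative_rate(hi, a, b)
        events.append({"magnitude": 0.5 * (lo + hi), "annual_rate": rate})
    return events

for ev in characteristic_events():
    print(f"M{ev['magnitude']:.2f}  rate={ev['annual_rate']:.4f}/yr")
```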
Procedia PDF Downloads 302
14675 Drone On-Time Obstacle Avoidance for Static and Dynamic Obstacles
Authors: Herath M. P. C. Jayaweera, Samer Hanoun
Abstract:
Path planning for on-time obstacle avoidance is an essential and challenging task that enables drones to achieve safe operation in any application domain. The level of challenge for the obstacle avoidance technique increases significantly when the drone is following a ground mobile entity (GME). This is mainly due to the change in direction and magnitude of the GME's velocity in dynamic and unstructured environments. Force field techniques are the most widely used obstacle avoidance methods due to their simplicity, ease of use, and potential to be adopted for three-dimensional dynamic environments. However, the existing force field obstacle avoidance techniques suffer from many drawbacks, including their tendency to generate longer routes when the obstacles are sideways of the drone's route, poor ability to find the shortest flyable path, propensity to fall into local minima, production of a non-smooth path, and high failure rate in the presence of symmetrical obstacles. To overcome these shortcomings, this paper proposes an on-time three-dimensional obstacle avoidance method for drones to effectively and efficiently avoid dynamic and static obstacles in unknown environments while pursuing a GME. This on-time obstacle avoidance technique generates velocity waypoints for its obstacle-free and efficient path based on the shape of the encountered obstacles. The method can be utilized on most types of drones that have basic distance measurement sensors and autopilot-supported flight controllers. The proposed obstacle avoidance technique is validated and evaluated against existing force field methods for different simulation scenarios in Gazebo and ROS-supported PX4-SITL. The simulation results show that the proposed obstacle avoidance technique outperforms the existing force field techniques and is better suited for real-world applications.
Keywords: drones, force field methods, obstacle avoidance, path planning
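A classical force field planner, the baseline this paper compares against, combines an attractive pull toward the goal (here, the GME) with repulsive pushes away from nearby obstacles to produce a velocity command. The sketch below is that generic baseline in 3D, not the authors' proposed shape-aware method; the gains, influence range, and positions are assumptions.

```python
import numpy as np

def force_field_velocity(drone, goal, obstacles,
                         k_att=1.0, k_rep=2.0, influence=5.0, v_max=3.0):
    """Attractive/repulsive potential-field velocity command in 3D."""
    drone, goal = np.asarray(drone, float), np.asarray(goal, float)
    v = k_att * (goal - drone)                        # attraction toward the GME
    for obs in obstacles:
        diff = drone - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                      # repulsion inside influence range
            v += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)  # cap the commanded speed

print(force_field_velocity(drone=[0, 0, 10], goal=[20, 0, 10],
                           obstacles=[[8, 0.5, 10]]))
```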
Procedia PDF Downloads 97