Search results for: KBM method
17870 Determining the Octanol-Water Partition Coefficient for Armchair Polyhex BN Nanotubes Using Topological Indices
Authors: Esmat Mohammadinasab
Abstract:
The aim of this paper is to investigate theoretically and establish a predictive model for the determination of LogP of armchair polyhex BN nanotubes using simple descriptors. The relationship between the octanol-water partition coefficient (LogP) and quantum chemical descriptors, electric moments, and topological indices of some armchair polyhex BN nanotubes with various lengths and a fixed circumference is presented. Based on density functional theory (DFT), the electric moments and physico-chemical properties of those nanotubes are calculated. The DFT calculations were performed with Becke's three-parameter formulation of the Lee-Yang-Parr functional (B3LYP) and the 3-21G standard basis set. For the first time, the relationship between the partition coefficient and different properties of polyhex BN nanotubes is investigated.
Keywords: topological indices, quantum descriptors, DFT method, nanotubes
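As a minimal illustration of the kind of predictive model described above (a regression of LogP on simple descriptors), the following Python sketch fits a linear model. The descriptor choice and every numerical value below are invented placeholders for demonstration, not data or coefficients from the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical descriptor matrix: each row is one nanotube, columns are
# [Wiener index, Randic index, dipole moment (Debye)] -- values are invented.
X = np.array([
    [1200.0, 45.2, 0.12],
    [2300.0, 61.7, 0.18],
    [3600.0, 78.1, 0.25],
    [5100.0, 94.6, 0.31],
])
# Hypothetical LogP values for the same nanotubes (invented).
y = np.array([4.1, 5.0, 5.8, 6.5])

model = LinearRegression().fit(X, y)          # least-squares fit of LogP on descriptors
print("coefficients:", model.coef_)           # contribution of each descriptor
print("intercept   :", model.intercept_)
print("R^2         :", model.score(X, y))     # goodness of fit on the training set

# Predict LogP for a new (hypothetical) nanotube from its descriptors.
print("predicted LogP:", model.predict([[4300.0, 86.0, 0.28]]))
```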
17869 Direct Phoenix Identification and Antimicrobial Susceptibility Testing from Positive Blood Culture Broths
Authors: Waad Al Saleemi, Badriya Al Adawi, Zaaima Al Jabri, Sahim Al Ghafri, Jalila Al Hadhramia
Abstract:
Objectives: Using standard lab methods, a positive blood culture requires a minimum of two days (two occasions of overnight incubation) to obtain a final identification (ID) and antimicrobial susceptibility testing (AST) report. In this study, we aimed to evaluate the accuracy and precision of identification and antimicrobial susceptibility testing of an alternative method (direct method) that will reduce the turnaround time by 24 hours. This method involves the direct inoculation of positive blood culture broths into the Phoenix system using serum separation tubes (SST). Method: This prospective study included monomicrobial positive blood cultures obtained from January 2022 to May 2023 in SQUH. Blood cultures containing a mixture of organisms, fungi, or anaerobic organisms were excluded from this study. The results of the new “direct method” under study were compared with the current “standard method” used in the lab. The accuracy and precision were evaluated for ID and AST using Clinical and Laboratory Standards Institute (CLSI) recommendations. The categorical agreement, essential agreement, and the rates of very major errors (VME), major errors (ME), and minor errors (MIE) for both gram-negative and gram-positive bacteria were calculated. Passing criteria were set according to CLSI. Result: The results of ID and AST were available for a total of 158 isolates. Of 77 isolates of gram-negative bacteria, 71 (92%) were correctly identified at the species level. Of 70 isolates of gram-positive bacteria, 47 (67%) were correctly identified. For gram-negative bacteria, the essential agreement of the direct method was ≥92% when compared to the standard method, while the categorical agreement was ≥91% for all tested antibiotics. The precision of ID and AST was noted to be 100% for all tested isolates. For gram-positive bacteria, the essential agreement was >93%, while the categorical agreement was >92% for all tested antibiotics except moxifloxacin. Many antibiotics were noted to have an unacceptably high rate of very major errors, including penicillin, cotrimoxazole, clindamycin, ciprofloxacin, and moxifloxacin. However, no error was observed in the results of vancomycin, linezolid, and daptomycin. Conclusion: The direct method of ID and AST for positive blood cultures using SST is reliable for gram-negative bacteria. It will significantly decrease the turnaround time and will facilitate antimicrobial stewardship.
Keywords: bloodstream infection, Oman, direct AST, blood culture, rapid identification, antimicrobial susceptibility, Phoenix, direct inoculation
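As a small illustration of how categorical agreement and the error rates mentioned above can be computed when pairing a direct method against a standard method, here is a hedged Python sketch. The S/I/R calls are invented examples; the error definitions follow the usual conventions (very major: standard resistant, test susceptible; major: standard susceptible, test resistant; minor: one method reports intermediate), and the choice of denominators is one common convention, not necessarily the exact one used in the study.

```python
def ast_comparison(reference, test):
    """Compare categorical AST calls (S/I/R) from a standard and a direct method.

    Returns categorical agreement and very major / major / minor error rates.
    Denominator conventions vary; here VME is reported per resistant isolate,
    ME per susceptible isolate, and minor errors per total isolate.
    """
    assert len(reference) == len(test)
    n = len(reference)
    agree = sum(r == t for r, t in zip(reference, test))
    vme = sum(r == "R" and t == "S" for r, t in zip(reference, test))  # false susceptible
    me = sum(r == "S" and t == "R" for r, t in zip(reference, test))   # false resistant
    minor = sum((r != t) and ("I" in (r, t)) for r, t in zip(reference, test))
    n_res = sum(r == "R" for r in reference)
    n_sus = sum(r == "S" for r in reference)
    return {
        "categorical_agreement_%": 100.0 * agree / n,
        "VME_%": 100.0 * vme / n_res if n_res else 0.0,
        "ME_%": 100.0 * me / n_sus if n_sus else 0.0,
        "minor_%": 100.0 * minor / n,
    }

# Invented example calls for one antibiotic across ten isolates.
standard = ["S", "S", "R", "R", "S", "I", "S", "R", "S", "S"]
direct   = ["S", "S", "R", "S", "S", "S", "S", "R", "R", "S"]
print(ast_comparison(standard, direct))
```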
17868 Hydrological Method to Evaluate Environmental Flow: Case Study of Gharasou River, Ardabil
Authors: Mehdi Fuladipanah, Mehdi Jorabloo
Abstract:
Water flow management is one of the most important parts of river engineering. Non-uniform distribution of rainfall and varying flow demands, combined with unreasonable flow management, can destroy the river ecosystem. It is therefore essential to determine the ecosystem flow requirement. In this paper, the flow duration curve indices method, which is hydrologically based, was used to evaluate the environmental flow in the Gharasou River, Ardabil, Iran. Using the flow duration curve, Q90 and Q95 were calculated for different return periods. Their magnitudes were determined for 1-day, 3-day, 7-day, and 30-day durations. According to the second method, the hydraulic alteration indices often fell in the low and medium range. To maintain the river in an acceptable ecological condition, the minimum daily discharge corresponding to the Q95 index is 0.7 m^3/s.
Keywords: Gharasou River, water flow management, non-uniformity distribution, ecosystem flow requirement, hydraulic alteration
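As a minimal illustration of the flow duration curve indices described above, the Python sketch below computes Q90 and Q95 (the discharges equaled or exceeded 90% and 95% of the time) and their n-day low-flow variants. The synthetic discharge series and the random-number parameters are invented for demonstration and are not the Gharasou record.

```python
import numpy as np

# Invented daily discharge record (m^3/s); in practice this would be the
# observed daily series for the river under study.
rng = np.random.default_rng(0)
daily_q = rng.lognormal(mean=0.5, sigma=0.8, size=3650)  # ~10 years of daily flows

def fdc_index(q, exceedance_percent):
    """Discharge equaled or exceeded `exceedance_percent` % of the time.

    Q90 corresponds to the 10th percentile of the flow record,
    Q95 to the 5th percentile.
    """
    return np.percentile(q, 100.0 - exceedance_percent)

q90 = fdc_index(daily_q, 90)
q95 = fdc_index(daily_q, 95)
print(f"Q90 = {q90:.3f} m^3/s, Q95 = {q95:.3f} m^3/s")

# n-day low-flow variants: apply the same index to an n-day moving average.
def moving_average(q, n):
    return np.convolve(q, np.ones(n) / n, mode="valid")

for n in (1, 3, 7, 30):
    print(f"{n:>2}-day Q95 = {fdc_index(moving_average(daily_q, n), 95):.3f} m^3/s")
```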
17867 Trajectory Tracking Controller Based on Normalized Right Coprime Factorization Technique for the Ball and Plate System
Authors: Martins Olatunbosun Babatunde, Muhammed Bashir Muazu, Emmanuel Adewale Adedokun
Abstract:
This paper presents the development of a double-loop trajectory-tracking controller for the ball and plate system (BPS) using the Normalized Right Coprime Factorization (NRCF) scheme. The Linear Algebraic (LA) method is used to design the inner loop required to stabilize the ball, while the H-infinity NRCF method, which involves a lead-lag compensator design approach, is used to develop the outer loop that controls the plate. Simulation results show that the plate was stabilized in 0.2989 seconds and the ball settled after 0.9646 seconds, with a trajectory tracking error of 0.0036. This shows that the controller has good adaptability and robustness.
Keywords: ball and plate system, normalized right coprime factorization, linear algebraic method, compensator, controller, tracking
17866 Detection and Classification of Myocardial Infarction Using New Extracted Features from Standard 12-Lead ECG Signals
Authors: Naser Safdarian, Nader Jafarnia Dabanloo
Abstract:
In this paper, we used four features, i.e., the Q-wave integral, QRS complex integral, T-wave integral, and total integral, extracted from normal and patient ECG signals, for the detection and localization of myocardial infarction (MI) in the left ventricle of the heart. Our research focused on the detection and localization of MI in the standard ECG. We use the Q-wave integral and T-wave integral because these features carry important information for the detection of MI. We used pattern recognition methods such as the Artificial Neural Network (ANN) to detect and localize MI, because these methods have good accuracy for the classification of normal and abnormal signals. We used one type of Radial Basis Function (RBF) network called the Probabilistic Neural Network (PNN) because of its nonlinearity property, and other classifiers such as k-Nearest Neighbors (KNN), the Multilayer Perceptron (MLP), and Naive Bayes classification. We used the PhysioNet database as our training and test data. On the test data, we reached over 80% accuracy for localization and over 95% for detection of MI. The main advantages of our method are its simplicity and good accuracy. The accuracy of classification can also be improved by adding more features to this method. A simple method based on only four features extracted from the standard ECG is presented, which has good accuracy in MI localization.
Keywords: ECG signal processing, myocardial infarction, features extraction, pattern recognition
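The sketch below illustrates the two steps named above: computing the four integral features for one beat (by trapezoidal integration, assuming the wave boundaries are already known from prior delineation) and feeding such features to a simple classifier. The waveform, the wave-boundary indices, and the training vectors are invented stand-ins, not PhysioNet data, and KNN is shown here only as one of the classifiers the paper mentions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def integral_features(beat, fs, q_win, qrs_win, t_win):
    """Compute the four integral features for one beat.

    `beat` is a single-lead ECG segment (one cardiac cycle), `fs` the sampling
    rate in Hz, and the *_win arguments are (start, end) sample indices of the
    Q wave, QRS complex and T wave (assumed known from prior delineation).
    """
    dt = 1.0 / fs
    q_int   = np.trapz(beat[q_win[0]:q_win[1]], dx=dt)
    qrs_int = np.trapz(beat[qrs_win[0]:qrs_win[1]], dx=dt)
    t_int   = np.trapz(beat[t_win[0]:t_win[1]], dx=dt)
    total   = np.trapz(beat, dx=dt)
    return [q_int, qrs_int, t_int, total]

# Toy waveform (not a real ECG) just to show the feature call.
fs = 250
t = np.arange(0, 0.8, 1.0 / fs)
beat = np.sin(2 * np.pi * 1.25 * t)
print(integral_features(beat, fs, (10, 25), (25, 60), (120, 170)))

# Invented feature vectors for normal (0) and MI (1) beats, standing in for
# features extracted from real recordings.
X_train = np.array([[0.01, 0.12, 0.05, 0.20], [0.02, 0.11, 0.06, 0.21],
                    [0.05, 0.07, 0.01, 0.09], [0.06, 0.08, 0.02, 0.10]])
y_train = np.array([0, 0, 1, 1])

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.predict([[0.055, 0.075, 0.015, 0.095]]))  # -> likely class 1 (MI)
```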
17865 Fault Diagnosis by Thresholding and Decision Tree with Neuro-Fuzzy System
Authors: Y. Kourd, D. Lefebvre
Abstract:
The monitoring of industrial processes is required to ensure proper operating conditions of industrial systems through automatic detection and isolation of faults. This paper proposes a fault diagnosis method based on a neuro-fuzzy hybrid structure. This hybrid structure combines threshold selection and a decision tree. The method is validated with the DAMADICS benchmark. In the first phase of the method, a model representing the normal state of the system is constructed for fault detection. Signatures of the faults are obtained through residual analysis and the selection of appropriate thresholds. These signatures yield groups of non-separable faults. In the second phase, we build faulty models to capture the faults in the system that cannot be isolated in the first phase. In the last phase, we construct the decision tree that isolates these faults.
Keywords: decision tree, residuals analysis, ANFIS, fault diagnosis
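The first phase described above (residuals against a normal-state model, compared with selected thresholds to form fault signatures) can be summarised by the short Python sketch below. The signals, injected fault, and threshold values are invented for demonstration; the paper's actual DAMADICS models and neuro-fuzzy structure are not reproduced here.

```python
import numpy as np

def fault_signatures(measured, predicted, thresholds):
    """First-phase signature generation: compare residuals against thresholds.

    `measured` and `predicted` are arrays of shape (n_samples, n_signals),
    where `predicted` comes from a model of the normal (fault-free) system.
    `thresholds` holds one threshold per signal. The signature is a binary
    vector indicating which residuals exceeded their threshold at least once.
    """
    residuals = measured - predicted
    exceeded = np.abs(residuals) > thresholds          # sample-wise comparison
    return exceeded.any(axis=0).astype(int)            # one bit per monitored signal

# Invented example: three monitored signals, model of the normal state.
predicted = np.zeros((100, 3))
measured = predicted + np.random.default_rng(1).normal(0.0, 0.05, size=(100, 3))
measured[50:, 1] += 0.5                                # injected fault on signal 2
thresholds = np.array([0.2, 0.2, 0.2])                 # chosen from residual analysis

print(fault_signatures(measured, predicted, thresholds))  # e.g. [0 1 0]
```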
17864 Adopted Method of Information System Strategy for Knowledge Management System: A Literature Review
Authors: Elin Cahyaningsih, Dana Indra Sensuse, Wahyu Catur Wibowo, Sofiyanti Indriasari
Abstract:
The bureaucracy reform program has driven the Indonesian government to change its management and supporting units in order to enhance organizational performance. Information technology, as one of the supporting units, became part of the strategic plan that organizations try to improve, because IT can automate and speed up processes and make the business process life cycle more effective and efficient. A knowledge management system is a technology application for supporting knowledge management implementation in government, whose requirements are based on the problems and the potential functionality of each knowledge management process. Defining knowledge management that is suitable for each organization is difficult, which is why the knowledge management system strategy should be formulated as an alignment of the knowledge management processes in the organization. A knowledge management system is an information system developed from a people perspective, because such a system depends heavily on human interaction and participation. The strategic plan for developing a knowledge management system can be determined using information system strategy methods. This research was conducted to identify the types of information system strategy methods, the stages of activity in each method, and the strengths and weaknesses of each method. The authors used a literature review to identify and classify information system strategy methods, differentiate method types, and categorize common activities, strengths, and weaknesses. The result of this research is the identification and comparison of six strategic information system methods: Balanced Scorecard, Porter's Five Forces, SWOT analysis, Value Chain Analysis, Risk Analysis, and Gap Analysis. Balanced Scorecard and Risk Analysis are considered the most commonly used strategic methods and show the greatest strengths.
Keywords: knowledge management system, balanced scorecard, five force, risk analysis, gap analysis, value chain analysis, SWOT analysis
17863 Real-Time Adaptive Obstacle Avoidance with DS Method and the Influence of Dynamic Environments Change on Different DS
Authors: Saeed Mahjoub Moghadas, Farhad Asadi, Shahed Torkamandi, Hassan Moradi, Mahmood Purgamshidian
Abstract:
In this paper, we present a real-time obstacle avoidance approach, based on the dynamical systems (DS) method, for both autonomous and non-autonomous DS-based controllers. In this approach, we can modulate the original dynamics of the controller, which allows us to set a safety margin and use different types of DS to increase the robot's reactiveness in the face of uncertainty in the localization of the obstacle, especially when the robot moves very fast in changing, complex environments. The method is validated in simulation, and the influence on this algorithm of different autonomous and non-autonomous DS, such as limit cycles and unstable DS, as well as of the position of different obstacles in a complex environment, is explained. Finally, we describe how the avoidance trajectories can be verified through different parameters such as the safety factor.
Keywords: limit cycles, nonlinear dynamical system, real time obstacle avoidance, DS-based controllers
17862 An Image Enhancement Method Based on Curvelet Transform for CBCT-Images
Authors: Shahriar Farzam, Maryam Rastgarpour
Abstract:
Image denoising plays an extremely important role in digital image processing. Curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a method for image contrast enhancement of cone beam CT (CBCT) images based on fast discrete curvelet transforms (FDCT) that work through the Unequally Spaced Fast Fourier Transform (USFFT). These transforms return a table of curvelet transform coefficients indexed by a scale parameter, an orientation, and a spatial location. Accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance contrast in an image. Our proposed method first applies a two-dimensional mathematical transform, namely the FDCT through the unequally spaced fast Fourier transform, to the input image, and then applies thresholding to the curvelet coefficients to enhance the CBCT images. Consequently, applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to existing methods in terms of Peak Signal to Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).
Keywords: curvelet transform, CBCT, image enhancement, image denoising
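The core operation described above is modifying multiscale transform coefficients before inverting the transform. The sketch below illustrates that coefficient-thresholding step only; since no standard Python binding of the FDCT-USFFT is assumed here, a 2-D wavelet decomposition (PyWavelets) is used as an explicitly named stand-in for the curvelet transform, and the image, threshold ratio, and gain are invented.

```python
import numpy as np
import pywt  # PyWavelets; a wavelet transform stands in here for the curvelet FDCT

def enhance_by_coefficient_thresholding(image, wavelet="db2", level=3,
                                        thresh_ratio=0.05, gain=1.3):
    """Suppress small multiscale coefficients (mostly noise) and boost the rest.

    The paper thresholds curvelet (FDCT-USFFT) coefficients; here a wavelet
    decomposition is used only to illustrate the coefficient-modification step.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    new_details = []
    for (ch, cv, cd) in details:
        bands = []
        for band in (ch, cv, cd):
            t = thresh_ratio * np.max(np.abs(band))      # per-band hard threshold
            bands.append(gain * np.where(np.abs(band) < t, 0.0, band))
        new_details.append(tuple(bands))
    return pywt.waverec2([approx] + new_details, wavelet)

# Toy 64x64 "slice": a bright disc plus noise (invented data, not CBCT).
y, x = np.mgrid[0:64, 0:64]
image = ((x - 32) ** 2 + (y - 32) ** 2 < 200).astype(float) \
        + np.random.default_rng(0).normal(0, 0.1, (64, 64))
enhanced = enhance_by_coefficient_thresholding(image)
print(image.std(), enhanced.std())
```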
17861 The Impact of Artificial Intelligence on Spare Parts Technology
Authors: Amir Andria Gad Shehata
Abstract:
Minimizing the inventory cost, optimizing the inventory quantities, and increasing system operational availability are the main motivations for enhancing the forecasting of spare parts demand in a major power utility company in Medina. This paper reports on an effort made to optimize the order quantities of spare parts by improving the method of forecasting the demand. The study focuses on equipment that has frequent spare parts purchase orders with uncertain demand. The demand follows a lumpy pattern, which makes conventional forecasting methods less effective. A comparison was made by benchmarking various forecasting methods based on experts' criteria to select the most suitable method for the case study. Three actual data sets were used to make the forecast in this case study. Two neural network (NN) approaches were utilized and compared, namely long short-term memory (LSTM) and the multilayer perceptron (MLP). The results, as expected, showed that the NN models gave better results than the traditional forecasting (judgmental) method. In addition, the LSTM model had a higher predictive accuracy than the MLP model.
Keywords: spare part, spare part inventory, inventory model, optimization, maintenance, neural network, LSTM, MLP, forecasting demand, inventory management
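A minimal sketch of the LSTM forecasting approach named above is given below, using a sliding window over a demand history. The demand series, window length, network size, and training settings are illustrative assumptions, not the utility company's data or the paper's architecture.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window):
    """Build (window, 1)-shaped inputs and next-period targets from a series."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

# Invented monthly demand history for one spare part (lumpy: many zeros).
demand = np.array([0, 0, 3, 0, 0, 0, 7, 0, 1, 0, 0, 5, 0, 0, 2, 0, 0, 0, 6, 0,
                   0, 4, 0, 0, 1, 0, 0, 0, 8, 0, 0, 3, 0, 0, 0, 2], dtype=float)

window = 6
X, y = make_windows(demand, window)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),          # recurrent layer capturing temporal patterns
    tf.keras.layers.Dense(1),          # next-period demand
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)

next_demand = model.predict(X[-1:], verbose=0)[0, 0]
print(f"forecast for next period: {next_demand:.2f}")
```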
17860 Carbon Supported Cu and TiO2 Catalysts Applied for Ozone Decomposition
Authors: Katya Milenova, Penko Nikolov, Irina Stambolova, Plamen Nikolov, Vladimir Blaskov
Abstract:
In this article, a comparison was made between Cu and TiO2 catalysts supported on activated carbon for the ozone decomposition reaction. The activated carbon support in the case of the TiO2/AC sample was prepared by physicochemical pyrolysis, while for the Cu/AC samples the supports are chemically modified carbons. The catalysts were synthesized by the impregnation method. The samples were annealed in two different regimes: in air and under vacuum. To examine the adsorption efficiency of the samples, the BET method was used. All investigated catalysts supported on chemically modified carbons have a higher specific surface area than the TiO2 supported catalysts, varying in the range 590-620 m2/g. The method of synthesis of the precursors influenced the catalytic activity.
Keywords: activated carbon, adsorption, copper, ozone decomposition, TiO2
17859 Classifying ERP Implementation's Risks in Banking Sectors Based on Different Implementation Phases
Authors: Farnaz Farzadnia, Ahmad Alibabaei
Abstract:
Enterprise Resource Planning (ERP) systems are considered complicated information systems. Many organizations have failed to implement ERP systems because implementation is a very difficult, time-consuming, and expensive process. Enterprise resource planning systems are appropriate for organizations in all economic sectors. As banking is currently considered a non-typical area for ERP usage, there are very few studies on ERP implementation in banking. This paper presents a general risk taxonomy. In this research, after identifying the implementation risks, a process quality management method was applied to identify the relations between the risks of implementing ERP in the banking sector and the implementation phases. The Oracle application implementation method, titled AIM, was used in this research to classify the risks. These findings will help managers to develop better strategies for supervising and controlling ERP implementation projects.
Keywords: AIM implementation, bank, enterprise resource planning, risk, process quality management method
17858 Thermo-Aeraulic Studies of a Multizone Building: Influence of the Compactness Index
Authors: S. M. A. Bekkouche, T. Benouaz, M. K. Cherier, M. Hamdani, M. R. Yaiche, N. Benamrane
Abstract:
Most building energy simulation codes neglect humidity or represent it with a very simplified method. It is for this reason that we have developed a new approach to the description and modeling of multizone buildings in a Saharan climate. The thermal nodal method was used to capture the thermo-aeraulic behavior of air subjected to various solicitations. In this contribution, the analysis of the building geometry introduced the concept of the compactness index as the “quotient of the external wall area and the volume of the building”. The physical phenomena described in this paper allow us to build a model of the coupled thermo-aeraulic behavior. The comparison shows that the results found are to some extent satisfactory. The results show that temperature and specific humidity depend on the compactness index and the geometric shape. Proper use of the compactness index and building geometry parameters will noticeably reduce building energy consumption.
Keywords: multizone model, nodal method, compactness index, specific humidity, temperature
17857 Effective Editable Emoticon Description Schema for Mobile Applications
Authors: Jiwon Lee, Si-hwan Jang, Sanghyun Joo
Abstract:
The popularity of emoticons has been on the rise since mobile messengers became widespread. At the same time, a few problems have arisen due to the innate characteristics of emoticons. Too many emoticons make it difficult for people to select one that is well suited to the user's intention. Conversely, sometimes the user cannot find an emoticon that expresses their exact intention. Poor information delivery is another problem, because a major part of current emoticons focuses on emotion delivery. In this situation, we propose a new concept of emoticons, editable emoticons, to solve the above drawbacks. Users can edit the components inside the proposed editable emoticon and send it to express their exact intention. By doing so, the number of editable emoticons can be kept reasonable, and they can still express the user's exact intention. Further, editable emoticons can be used as information deliverers according to the user's intention and editing skills. In this paper, we propose the concept of editable emoticons and a schema-based editable emoticon description method. The proposed description method is 200 times superior to the compared screen capturing method in terms of transmission bandwidth. Further, the description method is designed for compatibility since it follows the MPEG-UD international standard. The proposed editable emoticons can be exploited not only in mobile applications but also in various fields such as education and medicine.
Keywords: description schema, editable emoticon, emoticon transmission, mobile applications
17856 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of the Gaussian-Wishart emission model (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model based on training data. We then derive the predictive distribution that may be used to classify a new action. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conducted experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.
Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, variational Bayesian inference, prior distribution and approximate posterior distribution, KTH dataset
17855 Inversion of Electrical Resistivity Data: A Review
Authors: Shrey Sharma, Gunjan Kumar Verma
Abstract:
High-density electrical prospecting has been widely used in groundwater investigation, civil engineering, and environmental surveys. For efficient inversion, the forward modeling routine, the sensitivity calculation, and the inversion algorithm must all be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for the acquisition, processing, and inversion of electrical resistivity data, based on a compilation of the academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D, and even 4-D surveys efficiently to resolve complex geological structures that were not possible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters with higher accuracy. A brief discussion of the limitations of the electrical resistivity method has also been presented.
Keywords: inversion, limitations, optimization, resistivity
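The linearized least-squares inversion mentioned above updates the model by solving a damped (regularized) normal system at each iteration. The generic numpy sketch below shows one such update; the Jacobian, data, and damping factor are invented toy values, and a real resistivity inversion would obtain them from a forward-modelling routine and usually work with log-transformed quantities and smoothness constraints.

```python
import numpy as np

def damped_least_squares_step(J, d_obs, d_pred, m, lam=0.1):
    """One Gauss-Newton update of a linearized inversion.

    J       : Jacobian (sensitivity) matrix, shape (n_data, n_model)
    d_obs   : observed data (e.g. apparent resistivities)
    d_pred  : responses of the current model m from the forward routine
    lam     : damping factor controlling the trade-off with data fit
    Solves (J^T J + lam * I) dm = J^T (d_obs - d_pred).
    """
    r = d_obs - d_pred
    A = J.T @ J + lam * np.eye(J.shape[1])
    dm = np.linalg.solve(A, J.T @ r)
    return m + dm

# Tiny invented example: 5 data points, 3 model parameters, linear forward map.
rng = np.random.default_rng(0)
J = rng.normal(size=(5, 3))
m_true = np.array([1.0, 2.0, 0.5])
d_obs = J @ m_true
m = np.zeros(3)
for _ in range(5):                       # iterate until the misfit is acceptable
    m = damped_least_squares_step(J, d_obs, J @ m, m, lam=0.01)
print(m)                                 # approaches m_true for this linear toy case
```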
17854 Optimizing Emergency Rescue Center Layouts: A Backpropagation Neural Networks-Genetic Algorithms Method
Authors: Xiyang Li, Qi Yu, Lun Zhang
Abstract:
In the face of natural disasters and other emergency situations, determining the optimal location of rescue centers is crucial for improving rescue efficiency and minimizing impact on affected populations. This paper proposes a method that integrates genetic algorithms (GA) and backpropagation neural networks (BPNN) to address the site selection optimization problem for emergency rescue centers. We utilize BPNN to accurately estimate the cost of delivering supplies from rescue centers to each temporary camp. Moreover, a genetic algorithm with a special partially matched crossover (PMX) strategy is employed to ensure that the number of temporary camps assigned to each rescue center adheres to predetermined limits. Using the population distribution data during the 2022 epidemic in Jiading District, Shanghai, as an experimental case, this paper verifies the effectiveness of the proposed method. The experimental results demonstrate that the BPNN-GA method proposed in this study outperforms existing algorithms in terms of computational efficiency and optimization performance. Especially considering the requirements for computational resources and response time in emergency situations, the proposed method shows its ability to achieve rapid convergence and optimal performance in the early and mid-stages. Future research could explore incorporating more real-world conditions and variables into the model to further improve its accuracy and applicability.
Keywords: emergency rescue centers, genetic algorithms, back-propagation neural networks, site selection optimization
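The PMX strategy named above operates on permutation-encoded chromosomes. Below is a minimal Python sketch of the classical PMX operator; the chromosomes here are generic permutations rather than the paper's actual camp-to-center encoding, and the authors' additional constraint handling is not reproduced.

```python
def pmx(parent1, parent2, a, b):
    """Partially matched crossover (PMX) for permutation-encoded chromosomes.

    Copies the slice [a:b) from parent1 into the child, then places the
    conflicting genes of parent2 via the PMX mapping, so the child is again
    a valid permutation.
    """
    size = len(parent1)
    child = [None] * size
    child[a:b] = parent1[a:b]                      # inherit the matching section

    for i in range(a, b):
        gene = parent2[i]
        if gene in child[a:b]:
            continue                               # already placed via parent1
        j = i
        while a <= j < b:                          # follow the mapping chain
            j = parent2.index(parent1[j])
        child[j] = gene

    for i in range(size):                          # fill the rest from parent2
        if child[i] is None:
            child[i] = parent2[i]
    return child

p1 = [3, 4, 8, 2, 7, 1, 6, 5]
p2 = [4, 2, 5, 1, 6, 8, 3, 7]
print(pmx(p1, p2, 3, 6))                           # -> [4, 8, 5, 2, 7, 1, 3, 6]
```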
17853 Free Vibration and Buckling of Rectangular Plates under Nonuniform In-Plane Edge Shear Loads
Authors: T. H. Young, Y. J. Tsai
Abstract:
A method for determining the stress distribution of a rectangular plate subjected to two pairs of arbitrarily distributed in-plane edge shear loads is proposed, and the free vibration and buckling of such a rectangular plate are investigated in this work. The method utilizes two stress functions to synthesize the stress-resultant field of the plate, with each of the stress functions satisfying the biharmonic compatibility equation. The sum of the stress-resultant fields due to these two stress functions satisfies the boundary conditions at the edges of the plate, from which the two stress functions are determined. Then, the free vibration and buckling of the rectangular plate are investigated by the Galerkin method. Numerical results obtained in this work are compared with those appearing in the literature, and good agreement is observed.
Keywords: stress analysis, free vibration, plate buckling, nonuniform in-plane edge shear
17852 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method
Authors: Mai Abdul Latif, Yuntian Feng
Abstract:
The Applied Element Method (AEM) is a method that was developed to aid in the analysis of the collapse of structures. Currently available methods cannot deal with structural collapse accurately; however, AEM can simulate the behavior of a structure from an initial state of no loading until collapse. The elements in AEM are connected with sets of normal and shear springs along the edges of the elements, which represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis of the progressive collapse of structures has been widely modelled using the finite element method; however, difficulties were found in the presence of excessively deformed elements with cracking or crushing, as well as a high computational cost and difficulties in choosing appropriate material models for the analysis. The Applied Element Method is developed and coded here to significantly improve the accuracy and also reduce the computational cost of the method. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method is based on Gaussian quadrature to distribute the springs. Usually, the springs are equally distributed along the face of the element, but it was found that using Gaussian springs, only up to 2 springs were required for perfectly elastic cases, while with equal springs at least 5 springs were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification is based on adapting the number of springs required depending on the elasticity of the material. After the first Newton-Raphson iteration, von Mises stress conditions were used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Then transition springs, springs located exactly between the elastic and plastic regions, are interpolated between regions to strictly identify the elastic and plastic regions in the cross section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region. This improves the computational cost, reducing the minimum number of springs in elasto-plastic cases to only 6 springs. All the work is done using MATLAB, and the results will be compared to models of structural elements using the finite element method in ANSYS.
Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear
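As a brief sketch of the "Gaussian springs" idea described above, the snippet below maps Gauss-Legendre points and weights onto an element face of length L to obtain spring positions and tributary lengths. The face length and number of springs are arbitrary values for illustration (the paper's implementation is in MATLAB; Python is used here only for the sketch).

```python
import numpy as np

def gaussian_spring_layout(n_springs, face_length):
    """Positions and weights of Gauss-Legendre points mapped to a face [0, L].

    Each spring sits at a Gauss point; its tributary length (weight) would
    scale the spring stiffness, replacing the usual equally spaced layout.
    """
    xi, w = np.polynomial.legendre.leggauss(n_springs)   # points/weights on [-1, 1]
    positions = 0.5 * face_length * (xi + 1.0)           # map to [0, face_length]
    tributary = 0.5 * face_length * w                    # weights sum to face_length
    return positions, tributary

pos, trib = gaussian_spring_layout(2, face_length=100.0)  # 2 springs on a 100 mm face
print(pos)    # ~ [21.13, 78.87] mm
print(trib)   # each spring carries half of the face length
```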
17851 Geophysical Exploration of Aquifer Zones by VES Method at Ayma-Kharagpur, District Paschim Midnapore, West Bengal
Authors: Mayank Sharma
Abstract:
Groundwater has been a matter of great concern in recent years due to the depletion of the water table. This has resulted from the over-exploitation of groundwater resources. Sub-surface exploration is a good way to identify the groundwater potential of an area. Thus, in order to meet the water needs for irrigation in the study area, a tube well needed to be installed. Therefore, a geophysical investigation was carried out to find the most suitable point for drilling and sinking a tube well that encounters an aquifer. An electrical resistivity survey was used to identify the aquifer zones, and the Vertical Electrical Sounding (VES) method was employed to determine the subsurface geology of the area. Seven vertical electrical soundings using the Schlumberger electrode array, with a maximum AB electrode separation of 700 m, were carried out at selected points in Ayma, Kharagpur-1 block of Paschim Midnapore district, West Bengal. The VES was done using an IGIS DDR3 resistivity meter down to an approximate depth of 160-180 m. The data were interpreted, processed, and analyzed. Based on all the interpretations using the direct method, the geology of the area at the sounding points was established. Two deeper clay-sand sections exist in the area, at depths of 50-70 m (resistivity range 40-60 ohm-m) and 70-160 m (resistivity range 25-35 ohm-m). These aquifers will provide a high yield of water, which would be sufficient for the desired irrigation in the study area.
Keywords: VES method, Schlumberger method, electrical resistivity survey, geophysical exploration
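For readers unfamiliar with Schlumberger soundings such as those described above, the sketch below evaluates the standard apparent-resistivity formula for that array. The electrode spacings, voltage, and current in the example are invented readings, not measurements from this survey.

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn, delta_v, current):
    """Apparent resistivity (ohm-m) for a Schlumberger array.

    ab_half : AB/2, half the current-electrode spacing (m)
    mn      : MN, the potential-electrode spacing (m)
    delta_v : measured potential difference (V)
    current : injected current (A)
    Geometric factor K = pi * ((AB/2)^2 - (MN/2)^2) / MN.
    """
    k = math.pi * (ab_half ** 2 - (mn / 2.0) ** 2) / mn
    return k * delta_v / current

# Invented reading: AB/2 = 100 m, MN = 10 m, 25 mV measured at 200 mA.
print(schlumberger_apparent_resistivity(100.0, 10.0, 0.025, 0.2))  # ~ 392 ohm-m
```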
17850 Towards a Framework for Evaluating Scientific Efficiency of World-Class Universities
Authors: Veljko Jeremic, Milica Kostic Stankovic, Aleksandar Markovic, Milan Martic
Abstract:
Evaluating the efficiency of decision-making units has been frequently elaborated on in numerous publications. In this paper, the theoretical framework for a novel method of Distance Based Analysis (DBA) is presented. In addition, the method is applied to a sample of the ARWU's top 54 universities of the United States; the findings clearly demonstrate that the best-ranked universities are far from also being the most efficient.
Keywords: evaluating efficiency, distance based analysis, ranking of universities, ARWU
17849 Modeling of Leaks Effects on Transient Dispersed Bubbly Flow
Authors: Mohand Kessal, Rachid Boucetta, Mourad Tikobaini, Mohammed Zamoum
Abstract:
The leakage problem of two-component fluid flow is modeled for a transient, one-dimensional homogeneous bubbly flow and developed by taking into account the effect of a leak located at the midpoint of the pipeline. The corresponding three conservation equations are solved numerically by an improved method of characteristics. The results obtained are explained and commented on in terms of their physical impact on the flow parameters.
Keywords: fluid transients, pipelines leaks, method of characteristics, leakage problem
17848 Research on the Calculation Method of Smartization Rate of Concrete Structure Building Construction
Authors: Hongyu Ye, Hong Zhang, Minjie Sun, Hongfang Xu
Abstract:
In the context of China's promotion of smart construction and building industrialization, there is a need for evaluation standards for the development of building industrialization based on assembly-type construction. However, the evaluation of smart construction remains a challenge in the industry's development process. This paper addresses this issue by proposing a calculation and evaluation method for the smartization rate of concrete structure building construction. The study focuses on examining the factors of smart equipment application and their impact on costs throughout the process of smart construction design, production, transfer, and construction. Based on this analysis, the paper presents an evaluation method for the smartization rate based on components. Furthermore, it introduces calculation methods for assessing the smartization rate of buildings. The paper also suggests a rapid calculation method for determining the smartization rate using Building Information Modeling (BIM) and information expression technology. The proposed research provides a foundation for the swift calculation of the smartization rate based on BIM and information technology. Ultimately, it aims to promote the development of smart construction and the construction of high-quality buildings in China.
Keywords: building industrialization, high quality building, smart construction, smartization rate, component
17847 Alternative Method of Determining Seismic Loads on Buildings Without Response Spectrum Application
Authors: Razmik Atabekyan, V. Atabekyan
Abstract:
This article discusses a new alternative method for the determination of seismic loads on buildings, based on the resistance of structures to vibration deformations. The basic principles for determining seismic loads by the spectral method were developed in the 1940s-1950s and have since been improved in pursuit of true assessments of seismic effects. The basis of the existing methods for determining seismic loads is the response spectrum or the dynamicity coefficient β (norms of RF), which are not definitively established. To this day there is no single, universal method for the determination of seismic loads, and when trying to apply the norms of different countries, significant discrepancies between the results are obtained. On the other hand, the results of macroseismic surveys of strong earthquakes contradict the principle of calculation based on accelerations. It is well known that on soft soils destruction increases (mainly due to large displacements), even though the accelerations decrease. Obviously, seismic impacts are transmitted to the building through the foundation, but paradoxically, the existing methods do not even include foundation data. Meanwhile, the acceleration of the building's foundation can differ several-fold from the acceleration of the ground. During earthquakes each building has its own peculiarities of behavior, depending on the interaction between the soil and the foundation, their dynamic characteristics, and many other factors. In this paper we consider a new, alternative method of determining the seismic loads on buildings without the use of a response spectrum. The main conclusions are the following: 1) Seismic loads are determined at the foundation level, which leads to a redistribution and reduction of seismic loads on structures. 2) The proposed method is universal and allows the seismic loads to be determined without the use of a response spectrum or any implicit coefficients. 3) The method makes it possible to take into account important factors such as the strength characteristics of the soils, the size of the foundation, the angle of incidence of the seismic ray, and others. 4) Existing methods can adequately determine the seismic loads on buildings only for the first mode of vibration and for average soil conditions.
Keywords: seismic loads, response spectrum, dynamic characteristics of buildings, momentum
17846 Using Non-Negative Matrix Factorization Based on Satellite Imagery for the Collection of Agricultural Statistics
Authors: Benyelles Zakaria, Yousfi Djaafar, Karoui Moussa Sofiane
Abstract:
Agriculture is fundamental and remains an important sector of the Algerian economy; based on traditional techniques and structures, it is generally oriented toward consumption. The collection of agricultural statistics in Algeria is done using traditional methods, which consist of investigating land use through surveys and field work. These statistics suffer from problems such as poor data quality, the long delay between collection and final availability, and high cost compared to their limited use. The objective of this work is to develop a processing chain for a reliable inventory of agricultural land by developing and implementing a new method of extracting information. This methodology allowed us to combine remote sensing data and field data to collect statistics on the areas of different land types. The contribution of remote sensing to the improvement of agricultural statistics, in terms of area, has been studied in the wilaya of Sidi Bel Abbes. It is in this context that we applied a method for extracting information from satellite images. This method, non-negative matrix factorization (NMF), does not consider the pixel as a single entity but looks for the components within the pixel itself. The results obtained by the application of NMF were compared with field data and with the results obtained by the maximum likelihood method. We observed a close agreement between the main NMF results and the field data. We believe that this method of extracting information from satellite data leads to interesting results for different types of land use.
Keywords: blind source separation, hyper-spectral image, non-negative matrix factorization, remote sensing
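The sketch below illustrates the unmixing idea described above: a non-negative pixel-by-band matrix is factored into per-pixel abundances and component spectra, and class areas are tallied from the dominant component of each pixel. The synthetic image, number of classes, and pixel area are invented assumptions, not the Sidi Bel Abbes data.

```python
import numpy as np
from sklearn.decomposition import NMF

# Invented toy "image": 100 x 80 pixels, 6 spectral bands, flattened to a
# non-negative matrix of shape (n_pixels, n_bands).
rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 100 * 80, 6, 3
true_spectra = rng.uniform(0.1, 1.0, size=(n_classes, n_bands))      # endmembers
true_abund = rng.dirichlet(np.ones(n_classes), size=n_pixels)        # per-pixel mix
X = true_abund @ true_spectra + rng.uniform(0, 0.01, size=(n_pixels, n_bands))

# NMF decomposes X ~ W H with W, H >= 0: W holds per-pixel abundances of each
# land-cover component, H holds the component spectra.
model = NMF(n_components=n_classes, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_

# Assign each pixel to its dominant component and estimate class areas,
# assuming a (hypothetical) pixel footprint of 100 m^2.
labels = W.argmax(axis=1)
pixel_area_m2 = 100.0
for c in range(n_classes):
    print(f"component {c}: {np.sum(labels == c) * pixel_area_m2 / 1e4:.2f} ha")
```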
17845 A Machining Method of Cross-Shape Nano Channel and Experiments for Silicon Substrate
Authors: Zone-Ching Lin, Hao-Yuan Jheng, Zih-Wun Jhang
Abstract:
The paper innovatively proposes using the concept of specific down force energy (SDFE) and an AFM machine to establish a machining method for a cross-shape nanochannel on a single-crystal silicon substrate. For machining a cross-shape nanochannel with an AFM machine, the paper develops a method of machining the cross-shape nanochannel groove at a fixed down force using SDFE theory combined with a planned cutting path of the cross-shape nanochannel up to the 5th machining layer; it finally achieves a cross-shape nanochannel at a cutting depth of around 20 nm. Since there may be standing burrs at the machined cross-shape nanochannel edge, the paper uses a smaller down force to cut the edge of the cross-shape nanochannel in order to lower the height of the standing burrs and bring the burr height at the edge down to below 0.54 nm, as set by the paper. Finally, the paper conducts experiments of machining the cross-shape nanochannel groove on single-crystal silicon with an AFM probe and compares the simulation and experimental results. It is shown that the proposed machining method for the cross-shape nanochannel is feasible.
Keywords: atomic force microscopy (AFM), cross-shape nanochannel, silicon substrate, specific down force energy (SDFE)
17844 Subarray Based Multiuser Massive MIMO Design Adopting Large Transmit and Receive Arrays
Authors: Tetsiki Taniguchi, Yoshio Karasawa
Abstract:
This paper describes a subarray-based, low-complexity design method for a multiuser massive multiple-input multiple-output (MIMO) system. In our previous works, the use of a large array was assumed only at the transmitter, but this study considers the case where both the transmitter and receiver sides are equipped with large array antennas. To this end, the receive arrays are also divided into several subarrays, and the formerly proposed method is modified for the synthesis of a large array from subarrays at both ends. Through computer simulations, it is verified that the performance of the proposed method is degraded compared with the original approach, but it achieves an improvement in terms of complexity, namely a significant reduction of the computational load to a practical level.
Keywords: large array, massive multiple input multiple output (MIMO), multiuser, singular value decomposition, subarray, zero forcing
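The paper's subarray synthesis is not reproduced here; the sketch below only illustrates the zero-forcing building block named in the keywords, computing the right pseudo-inverse precoder for a randomly generated flat-fading multiuser downlink channel. The array sizes and channel model are arbitrary assumptions.

```python
import numpy as np

def zero_forcing_precoder(H):
    """Zero-forcing precoder for a multiuser downlink channel.

    H has shape (n_users, n_tx_antennas); the precoder is the right
    pseudo-inverse W = H^H (H H^H)^(-1), so that H @ W is (close to) the
    identity and inter-user interference is suppressed.
    """
    return H.conj().T @ np.linalg.inv(H @ H.conj().T)

rng = np.random.default_rng(0)
n_users, n_tx = 4, 64                       # large transmit array, 4 single-antenna users
H = (rng.normal(size=(n_users, n_tx)) + 1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)

W = zero_forcing_precoder(H)
print(np.round(np.abs(H @ W), 3))           # ~ identity: interference nulled
```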
17843 Multiscale Simulation of Ink Seepage into Fibrous Structures through a Mesoscopic Variational Model
Authors: Athmane Bakhta, Sebastien Leclaire, David Vidal, Francois Bertrand, Mohamed Cheriet
Abstract:
This work presents a new three-dimensional variational model proposed for the simulation of ink seepage into paper sheets at the fiber level. The model, inspired by the Ising model, takes into account a finite volume of ink and describes the system state through gravity, cohesion, and adhesion force interactions. At the mesoscopic scale, the paper substrate is modeled using a discretized fiber structure generated by a numerical deposition procedure. A modified Monte Carlo method is introduced for the simulation of the ink dynamics. In addition, a multiphase lattice Boltzmann method is suggested to fine-tune the mesoscopic variational model parameters, and it is shown that the ink seepage behaviors predicted by the proposed model can resemble those predicted by a method relying on first principles.
Keywords: fibrous media, lattice Boltzmann, modelling and simulation, Monte Carlo, variational model
17842 The Acquisition of Case in Biological Domain Based on Text Mining
Authors: Shen Jian, Hu Jie, Qi Jin, Liu Wei Jie, Chen Ji Yi, Peng Ying Hong
Abstract:
In order to address the problem of acquiring cases in the biological domain related to design problems, a biological instance acquisition method based on text mining is presented. Through the construction of a corpus text vector space and knowledge mining, the feature selection, similarity measurement, and case retrieval methods for text in the field of biology are studied. First, we establish a vector space model of the corpus in the biological field and complete the preprocessing steps. Then, the corpus is searched using the vector space model combined with functional keywords to obtain the biological-domain examples related to the design problems. Finally, we verify the validity of this method with a textual example.
Keywords: text mining, vector space model, feature selection, biologically inspired design
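A minimal sketch of the retrieval pipeline described above (vector space model of the corpus plus similarity-based case retrieval) is given below, using TF-IDF weighting and cosine similarity. The miniature corpus and the query are invented stand-ins for the biological case base and the functional keywords.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented miniature corpus of biological descriptions (stand-ins for the
# biological-domain case base).
corpus = [
    "gecko feet adhere to smooth surfaces using microscopic setae",
    "lotus leaves repel water through micro and nano scale roughness",
    "shark skin denticles reduce drag in turbulent flow",
]

vectorizer = TfidfVectorizer(stop_words="english")   # vector space model of the corpus
doc_vectors = vectorizer.fit_transform(corpus)

# A design problem expressed with functional keywords.
query = "reduce drag of a surface moving through fluid"
query_vector = vectorizer.transform([query])

scores = cosine_similarity(query_vector, doc_vectors).ravel()
best = scores.argmax()
print(f"best matching case ({scores[best]:.2f}): {corpus[best]}")
```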
17841 An Earth Mover’s Distance Algorithm Based DDoS Detection Mechanism in SDN
Authors: Yang Zhou, Kangfeng Zheng, Wei Ni, Ren Ping Liu
Abstract:
Software-defined networking (SDN) provides a solution for a scalable network framework with decoupled control and data planes. However, this architecture also gives rise to a particular kind of distributed denial-of-service (DDoS) attack that can affect or even overwhelm the SDN network. The DDoS attack detection problem has to date been mostly researched as an entropy comparison problem. However, this approach does not make full use of SDN, and the results are not accurate. In this paper, we propose a DDoS attack detection method that interprets DDoS detection as a signature matching problem and is formulated as an Earth Mover's Distance (EMD) model. Considering feasibility and accuracy, we further propose to define the cost function of the EMD to be a generalized Kullback-Leibler divergence. Simulation results show that our proposed method can detect DDoS attacks by comparing EMD values with the ones computed in the case without attacks. Moreover, our method can significantly increase the true positive rate of detection.
Keywords: DDoS detection, EMD, relative entropy, SDN
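The sketch below illustrates the detection idea of comparing the EMD between the current traffic profile and an attack-free baseline. It uses scipy's one-dimensional EMD (with the standard absolute-difference ground cost) as a stand-in for the paper's EMD with a generalized KL-divergence cost; the traffic features, the Poisson model, and the threshold are invented for demonstration.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Invented per-source packet counts in one time window (a crude stand-in for
# the flow features an SDN controller could collect).
baseline = rng.poisson(lam=20, size=1000)            # attack-free traffic profile
normal_now = rng.poisson(lam=21, size=1000)          # another benign window
attack_now = np.concatenate([rng.poisson(lam=20, size=800),
                             rng.poisson(lam=200, size=200)])  # flood from few sources

# 1-D EMD between the current window and the attack-free baseline.
print("benign vs baseline :", wasserstein_distance(normal_now, baseline))
print("attack vs baseline :", wasserstein_distance(attack_now, baseline))

# Detection rule: flag a window whose EMD from the baseline exceeds a threshold
# calibrated on attack-free traffic (the value here is purely illustrative).
threshold = 2.0
print("alarm:", wasserstein_distance(attack_now, baseline) > threshold)
```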