Search results for: ideal profile method
18668 Faults Diagnosis by Thresholding and Decision tree with Neuro-Fuzzy System
Authors: Y. Kourd, D. Lefebvre
Abstract:
The monitoring of industrial processes is required to ensure the operating conditions of industrial systems through automatic detection and isolation of faults. This paper proposes a fault diagnosis method based on a neuro-fuzzy hybrid structure that combines threshold selection and a decision tree. The method is validated on the DAMADICS benchmark. In the first phase, a model representing the normal state of the system is constructed for fault detection. Fault signatures are obtained by residual analysis and the selection of appropriate thresholds; these signatures yield groups of non-separable faults. In the second phase, faulty models are built to capture the faults that cannot be isolated in the first phase. In the last phase, the decision tree that isolates these faults is constructed. Keywords: decision tree, residuals analysis, ANFIS, fault diagnosis
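The residual-thresholding step described in the first phase can be sketched as follows. This is a minimal illustration: the residual values, thresholds, and fault labels are hypothetical, not taken from the DAMADICS benchmark.

```python
# Illustrative sketch of fault-signature generation by residual thresholding.
# Each residual is compared against its threshold to form a binary signature;
# faults that share an identical signature are grouped as non-separable.

def signature(residuals, thresholds):
    """Binary fault signature: 1 where |residual| exceeds its threshold."""
    return tuple(int(abs(r) > t) for r, t in zip(residuals, thresholds))

def group_non_separable(fault_residuals, thresholds):
    """Group fault labels whose signatures coincide (non-separable faults)."""
    groups = {}
    for fault, residuals in fault_residuals.items():
        groups.setdefault(signature(residuals, thresholds), []).append(fault)
    return groups

# Hypothetical residuals for three faults over two residual signals
faults = {"f1": [0.9, 0.1], "f2": [0.8, 0.05], "f3": [0.1, 0.7]}
groups = group_non_separable(faults, thresholds=[0.5, 0.5])
```

Faults f1 and f2 end up in the same group because both trip only the first residual, which is exactly the kind of ambiguity the paper's second phase is meant to resolve.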
Procedia PDF Downloads 631
18667 Adopted Method of Information System Strategy for Knowledge Management System: A Literature Review
Authors: Elin Cahyaningsih, Dana Indra Sensuse, Wahyu Catur Wibowo, Sofiyanti Indriasari
Abstract:
The bureaucracy reform program is driving the Indonesian government to change its management and supporting units in order to enhance organizational performance. Information technology, as one of these supporting units, has become part of the strategic plan that organizations try to improve, because IT can automate and speed up processes, making the business process life cycle more effective and efficient. A knowledge management system is a technology application that supports knowledge management implementation in government, with requirements based on the problems and potential functionality of each knowledge management process. Defining the knowledge management approach that suits each organization is difficult, which is why the knowledge management system strategy should be aligned with the organization's knowledge management processes. A knowledge management system is an information system developed from a people perspective, because it depends heavily on human interaction and participation. The strategic plan for developing a knowledge management system can be determined using established information system strategy methods. This research was conducted to identify the types of information system strategy methods, the stages of activity in each method, and their strengths and weaknesses. The authors used a literature review to identify and classify information system strategy methods, differentiate method types, and categorize common activities, strengths, and weaknesses. The result of this research is a comparison of six strategic information system methods: Balanced Scorecard, Porter's Five Forces, SWOT analysis, Value Chain Analysis, Risk Analysis, and Gap Analysis. The Balanced Scorecard and Risk Analysis emerge as the most commonly used and strongest strategic methods. Keywords: knowledge management system, balanced scorecard, five force, risk analysis, gap analysis, value chain analysis, SWOT analysis
Procedia PDF Downloads 484
18666 Real-Time Adaptive Obstacle Avoidance with DS Method and the Influence of Dynamic Environments Change on Different DS
Authors: Saeed Mahjoub Moghadas, Farhad Asadi, Shahed Torkamandi, Hassan Moradi, Mahmood Purgamshidian
Abstract:
In this paper, we present a real-time obstacle avoidance approach based on the dynamical systems (DS) method for both autonomous and non-autonomous DS-based controllers. The approach modulates the original dynamics of the controller, allowing us to set a safety margin and use different types of DS to increase the robot's reactiveness in the face of uncertainty in the localization of obstacles, especially when the robot moves very fast in changing, complex environments. The method is validated in simulation, and the influence on the algorithm of different autonomous and non-autonomous DS, such as limit cycles and unstable DS, as well as of the positions of obstacles in a complex environment, is explained. Finally, we describe how the avoidance trajectories can be verified through parameters such as the safety factor. Keywords: limit cycles, nonlinear dynamical system, real time obstacle avoidance, DS-based controllers
Procedia PDF Downloads 393
18665 An Image Enhancement Method Based on Curvelet Transform for CBCT-Images
Authors: Shahriar Farzam, Maryam Rastgarpour
Abstract:
Image denoising plays an extremely important role in digital image processing, and Curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a contrast enhancement method for cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) computed via the Unequally Spaced Fast Fourier Transform (USFFT). The transform returns a table of Curvelet coefficients indexed by a scale parameter, an orientation, and a spatial location, and the coefficients obtained from FDCT-USFFT can be modified to enhance the contrast of an image. Our proposed method first applies the FDCT via the unequally spaced fast Fourier transform to the input image and then thresholds the Curvelet coefficients to enhance the CBCT image. The unequally spaced fast Fourier transform leads to an accurate reconstruction of the image at high resolution. The experimental results indicate that the performance of the proposed method is superior to existing methods in terms of Peak Signal to Noise Ratio (PSNR) and Effective Measure of Enhancement (EME). Keywords: curvelet transform, CBCT, image enhancement, image denoising
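The coefficient-thresholding step can be illustrated with a generic hard-thresholding function. This is only a sketch of the thresholding idea: a plain NumPy array stands in for the table of curvelet coefficients, and the threshold value is illustrative; a full FDCT-USFFT implementation is outside the scope of this sketch.

```python
import numpy as np

def hard_threshold(coeffs, tau):
    """Zero out coefficients whose magnitude falls below tau, keeping the
    large (signal-carrying) coefficients untouched."""
    return np.where(np.abs(coeffs) >= tau, coeffs, 0.0)

# Stand-in for one band of transform coefficients
coeffs = np.array([0.02, -1.5, 0.3, 4.0, -0.05])
denoised = hard_threshold(coeffs, tau=0.1)
```

In a curvelet setting the same operation would be applied band by band before the inverse transform reconstructs the enhanced image.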
Procedia PDF Downloads 303
18664 Assessment of the Soils Pollution Level of the Open Mine and Tailing Dump of Surrounding Territories of Akhtala Ore Processing Combine by Heavy Metals
Authors: K. A. Ghazaryan, T. H. Derdzyan
Abstract:
To assess the level of heavy metal pollution in the soils of the open mine and tailing dump of the Akhtala ore processing combine and their surrounding territories, soil samples were collected in 2013 and analyzed for the heavy metals Cu, Zn, Pb, Ni, and Cd. The main soil type at the study sites was mountain cambisol. To classify the soil pollution level, contamination indices such as the Contamination factor (Cf), Degree of contamination (Cd), Pollution load index (PLI), and Geoaccumulation index (I-geo) were calculated. The distribution pattern of trace metals in the soil profile according to the I-geo, Cf, and Cd values shows that the soil is very polluted. The PLI values for the 19 sites were >1, which indicates deterioration of site quality. Keywords: soils pollution, heavy metal, geoaccumulation index, pollution load index, contamination factor
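The four indices named above have standard definitions: Cf is the ratio of measured to background concentration, Cd is the sum of the Cf values, PLI is the geometric mean of the Cf values, and I-geo is log2 of the measured concentration over 1.5 times the background. A minimal sketch, with hypothetical concentrations rather than the paper's data:

```python
import math

def contamination_indices(measured, background):
    """Standard soil-pollution indices; `measured` and `background` are
    concentrations of the same metals in the same units (e.g. mg/kg)."""
    cf = {m: measured[m] / background[m] for m in measured}    # contamination factor
    cd = sum(cf.values())                                      # degree of contamination
    pli = math.prod(cf.values()) ** (1.0 / len(cf))            # pollution load index
    igeo = {m: math.log2(measured[m] / (1.5 * background[m]))  # geoaccumulation index
            for m in measured}
    return cf, cd, pli, igeo

# Hypothetical measured/background concentrations, not the paper's data
cf, cd, pli, igeo = contamination_indices(
    measured={"Cu": 120.0, "Pb": 60.0}, background={"Cu": 30.0, "Pb": 20.0})
```

A PLI above 1, as reported for all 19 sites, corresponds to the geometric mean of the contamination factors exceeding the background baseline.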
Procedia PDF Downloads 440
18663 The Impact of Artificial Intelligence on Spare Parts Technology
Authors: Amir Andria Gad Shehata
Abstract:
Minimizing inventory cost, optimizing inventory quantities, and increasing system operational availability are the main motivations for enhancing the forecasting of spare parts demand at a major power utility company in Medina. This paper reports on an effort to optimize spare parts order quantities by improving the demand forecasting method. The study focuses on equipment that has frequent spare parts purchase orders with uncertain demand. The demand follows a lumpy pattern, which makes conventional forecasting methods less effective. Various forecasting methods were benchmarked against experts' criteria to select the most suitable method for the case study, and three actual data sets were used to make the forecasts. Two neural network (NN) approaches were utilized and compared, namely long short-term memory (LSTM) and multilayer perceptron (MLP). As expected, the results showed that the NN models outperformed the traditional (judgmental) forecasting method. In addition, the LSTM model had higher predictive accuracy than the MLP model. Keywords: spare part, spare part inventory, inventory model, optimization, maintenance, neural network, LSTM, MLP, forecasting demand, inventory management
Procedia PDF Downloads 69
18662 Carbon Supported Cu and TiO2 Catalysts Applied for Ozone Decomposition
Authors: Katya Milenova, Penko Nikolov, Irina Stambolova, Plamen Nikolov, Vladimir Blaskov
Abstract:
In this article, a comparison is made between Cu and TiO2 catalysts supported on activated carbon for the ozone decomposition reaction. The activated carbon support for the TiO2/AC sample was prepared by physicochemical pyrolysis, while the supports for the Cu/AC samples are chemically modified carbons. The catalysts were synthesized by the impregnation method, and the samples were annealed in two different regimes: in air and under vacuum. The BET method was used to examine the adsorption efficiency of the samples. All investigated catalysts supported on chemically modified carbons have a higher specific surface area, in the range of 590-620 m2/g, than the TiO2 supported catalysts. The method of synthesis of the precursors influenced the catalytic activity. Keywords: activated carbon, adsorption, copper, ozone decomposition, TiO2
Procedia PDF Downloads 422
18661 Classifying ERP Implementation’s Risks in Banking Sectors Based on Different Implementation Phases
Authors: Farnaz Farzadnia, Ahmad Alibabaei
Abstract:
Enterprise Resource Planning (ERP) systems are complex information systems. Many organizations have failed at implementing ERP systems because implementation is a very difficult, time-consuming, and expensive process. ERP systems are appropriate for organizations in all economic sectors, yet because banking is currently considered an atypical area for ERP usage, there are very few studies on ERP implementation in banking. This paper presents a general risk taxonomy. In this research, after identifying implementation risks, a process quality management method was applied to identify the relations between the risks of implementing ERP in the banking sector and the implementation phases. Oracle's Application Implementation Method (AIM) was used to classify the risks. These findings will help managers develop better strategies for supervising and controlling ERP implementation projects. Keywords: AIM implementation, bank, enterprise resource planning, risk, process quality management method
Procedia PDF Downloads 550
18660 Thermo-Aeraulic Studies of a Multizone Building Influence of the Compactness Index
Authors: S. M. A. Bekkouche, T. Benouaz, M. K. Cherier, M. Hamdani, M. R. Yaiche, N. Benamrane
Abstract:
Most building energy simulation codes neglect humidity or represent it with a very simplified method. It is for this reason that we have developed a new approach to the description and modeling of multizone buildings in a Saharan climate. The thermal nodal method was used to capture the thermo-aeraulic behavior of air subjected to varied solicitations. In this contribution, analysis of the building geometry introduced the concept of the compactness index, defined as the quotient of the external wall area and the volume of the building. The physical phenomena described in this paper allow us to build a model of the coupled thermo-aeraulic behavior. The comparison shows that the results obtained are reasonably satisfactory and that temperature and specific humidity depend on the compactness index and geometric shape. Proper use of the compactness index and building geometry parameters will noticeably reduce building energy consumption. Keywords: multizone model, nodal method, compactness index, specific humidity, temperature
Procedia PDF Downloads 414
18659 Pathway and Differential Gene Expression Studies for Colorectal Cancer
Authors: Ankita Shukla, Tiratha Raj Singh
Abstract:
Colorectal cancer (CRC) imposes a serious mortality burden worldwide, and it has been increasing over the past consecutive years. Continuous efforts have been made to diagnose the disease condition and to identify its root cause. In this study, we performed pathway-level as well as differential gene expression studies for CRC. We analyzed the gene expression profile GSE24514 from the Gene Expression Omnibus (GEO) along with the gene pathways involved in CRC. This analysis helps us to understand the behavior of the genes that show differential expression through their targeted pathways. Pathway analysis of the targeted genes covers a wider area and therefore decreases the possibility of missing significant genes, helping to expose those that have not been given attention so far. Through this analysis, we attempt to understand the various neighboring genes that have a close relationship to the targeted gene and thus significantly control CRC. It is anticipated that the identified hub and neighboring genes will provide new directions for looking at the pathway level differently and will be crucial for understanding the regulatory processes of the disease. Keywords: mismatch repair, microsatellite instability, carcinogenesis, morbidity
Procedia PDF Downloads 323
18658 Application of Flow Cytometry for Detection of Influence of Abiotic Stress on Plants
Authors: Dace Grauda, Inta Belogrudova, Alexei Katashev, Linda Lancere, Isaak Rashal
Abstract:
The goal of this study was the elaboration of an easily applicable flow cytometry method for detecting the influence of abiotic stress factors on plants, which could be useful for detecting environmental stresses in urban areas. The lime tree Tilia vulgaris H. is a popular species used for urban landscaping in Europe and is one of the main species of street greenery in Riga, Latvia. Tree decline and low vitality have been observed in the central part of Riga; for this reason, lime trees were selected as a model object for the investigation. Between the end of June and the beginning of July, 12 samples from different urban locations, as well as plant material from a greenhouse, were collected. A BD FACSJazz® cell sorter (BD Biosciences, USA) with a flow cytometer function was used to test the viability of plant cells. The method was based on changes in the relative fluorescence intensity of cells under the blue laser (488 nm) after the influence of stress factors. SpheroTM rainbow calibration particles (3.0-3.4 μm, BD Biosciences, USA) in phosphate buffered saline (PBS) were used for calibration of the flow cytometer, and BD PharmingenTM PBS (BD Biosciences, USA) was used for the flow cytometry assays. The mean fluorescence intensity of the purified cell suspension samples was recorded. Preliminarily, multiple gate sizes and shapes were tested to find the one with the lowest CV. It was found that a low CV can be obtained if only the densest part of the plant cells' forward scatter/side scatter profile is analysed, because in this case the plant cells are most similar in size and shape. Young pollen cells in the one-nucleus stage were found to be the best for detecting the influence of abiotic stress. Only fresh plant material was used for the experiments: buds of Tilia vulgaris with a diameter of 2 mm. For the establishment of the cell suspension (in vitro culture), a modified microspore culture protocol was applied, and the cells were suspended in MS (Murashige and Skoog) medium.
To imitate urban dust, SiO2 nanoparticles at a concentration of 0.001 g/ml were suspended in distilled water. One ml of the SiO2 nanoparticle suspension was added to 10 ml of cell suspension, and the cells were then incubated under a fast-shaking regime for 1 and 3 hours. As a further stress factor, the cells were irradiated for 20 min with UV (Hamamatsu light source L9566-02A, L10852 lamp, A10014-50-0110), with maximum relative intensity (100%) at 365 nm and ~310 nm (75%). Before UV irradiation, the cell suspension was placed in a thin layer on a filter paper disk (diameter 45 mm) in a Petri dish with solid MS medium. Cells without treatment were used as a control. Experiments were performed at room temperature (23-25 °C). Using the BD FACS software of the flow cytometer, a cell plot was created to determine the densest part, which was then gated with an oval-shaped gate that included 95 to 99% of all cells. A logarithmic fluorescence scale in arbitrary fluorescence units was used to determine the relative fluorescence of the cells, and 3×10³ gated cells were analysed from each sample. Significant differences were found in the relative fluorescence of cells from different trees after treatment with SiO2 nanoparticles and UV irradiation in comparison with the control. Keywords: flow cytometry, fluorescence, SiO2 nanoparticles, UV irradiation
Procedia PDF Downloads 417
18657 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses
Authors: Ayon Mukherjee
Abstract:
Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application of such designs is limited in real-life clinical trials, where the responses infrequently fit a particular parametric form. On the other hand, robust estimates of the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed that are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate, and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm: the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient.
To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients' covariates, and the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The design based on the link function achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs are highlighted and their preferred areas of application discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations to the designs. Moreover, the proposed designs allow more patients to be treated with the better treatment during the trial, making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods. Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability
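For illustration, one common form of the DBCD allocation rule (the Hu-Zhang allocation function) can be sketched as follows. The parameter gamma and the proportions in the example are illustrative; in the designs above, the target proportion would come from the sequentially estimated Cox regression coefficients.

```python
def dbcd_allocation(current_prop, target_prop, gamma=2.0):
    """Hu-Zhang form of the doubly-adaptive biased coin: probability of
    assigning the next patient to treatment A, given the observed allocation
    proportion for A so far and the currently estimated target proportion.
    Under-allocated arms get a probability above the target, and vice versa."""
    x, y = current_prop, target_prop
    if x in (0.0, 1.0):  # degenerate history: push allocation back inside (0, 1)
        return 1.0 - x
    num = y * (y / x) ** gamma
    den = num + (1 - y) * ((1 - y) / (1 - x)) ** gamma
    return num / den

# Illustrative call: treatment A is under-allocated relative to its target
p = dbcd_allocation(current_prop=0.4, target_prop=0.6)
```

When the observed proportion already equals the target, the rule simply randomizes at the target probability; larger gamma values correct deviations more aggressively.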
Procedia PDF Downloads 167
18656 Effective Editable Emoticon Description Schema for Mobile Applications
Authors: Jiwon Lee, Si-hwan Jang, Sanghyun Joo
Abstract:
The popularity of emoticons has been on the rise since mobile messengers became widespread. At the same time, a few problems have arisen from the innate characteristics of emoticons. Having too many emoticons makes it difficult for people to select one that is well suited to their intention; conversely, users sometimes cannot find an emoticon that expresses their exact intention. Poor information delivery is another problem, since the majority of current emoticons focus on conveying emotion. In this situation, we propose a new concept, editable emoticons, to solve these drawbacks. Users can edit the components inside the proposed editable emoticon and send it to express their exact intention. By doing so, the number of editable emoticons can be kept reasonable while still expressing the user's exact intention. Further, editable emoticons can be used to deliver information, depending on the user's intention and editing skills. In this paper, we propose the concept of editable emoticons and a schema-based editable emoticon description method. The proposed description method requires 200 times less transmission bandwidth than the compared screen-capturing method. Furthermore, the description method is designed for compatibility, since it follows the MPEG-UD international standard. The proposed editable emoticons can be exploited not only in mobile applications but also in various fields such as education and medicine. Keywords: description schema, editable emoticon, emoticon transmission, mobile applications
Procedia PDF Downloads 300
18655 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define a Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model from training data, and we then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods. Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, variational Bayesian inference, prior distribution and approximate posterior distribution, KTH dataset
Procedia PDF Downloads 359
18654 Inversion of Electrical Resistivity Data: A Review
Authors: Shrey Sharma, Gunjan Kumar Verma
Abstract:
High density electrical prospecting has been widely used in groundwater investigation, civil engineering, and environmental surveys. For efficient inversion, the forward modeling routine, sensitivity calculation, and inversion algorithm must all be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for acquisition, processing, and inversion of electrical resistivity data, based on a compilation of the academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D, and even 4-D surveys efficiently to resolve complex geological structures that were not possible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to a higher accuracy. A brief discussion of the limitations of the electrical resistivity method is also presented. Keywords: inversion, limitations, optimization, resistivity
Procedia PDF Downloads 368
18653 Optimizing Emergency Rescue Center Layouts: A Backpropagation Neural Networks-Genetic Algorithms Method
Authors: Xiyang Li, Qi Yu, Lun Zhang
Abstract:
In the face of natural disasters and other emergency situations, determining the optimal location of rescue centers is crucial for improving rescue efficiency and minimizing impact on affected populations. This paper proposes a method that integrates genetic algorithms (GA) and backpropagation neural networks (BPNN) to address the site selection optimization problem for emergency rescue centers. We utilize BPNN to accurately estimate the cost of delivering supplies from rescue centers to each temporary camp. Moreover, a genetic algorithm with a special partially matched crossover (PMX) strategy is employed to ensure that the number of temporary camps assigned to each rescue center adheres to predetermined limits. Using the population distribution data during the 2022 epidemic in Jiading District, Shanghai, as an experimental case, this paper verifies the effectiveness of the proposed method. The experimental results demonstrate that the BPNN-GA method proposed in this study outperforms existing algorithms in terms of computational efficiency and optimization performance. Especially considering the requirements for computational resources and response time in emergency situations, the proposed method shows its ability to achieve rapid convergence and optimal performance in the early and mid-stages. Future research could explore incorporating more real-world conditions and variables into the model to further improve its accuracy and applicability. Keywords: emergency rescue centers, genetic algorithms, back-propagation neural networks, site selection optimization
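The partially matched crossover (PMX) mentioned above is a standard operator for permutation-encoded chromosomes. A minimal sketch with illustrative cut points follows; the paper's special PMX strategy additionally enforces per-center camp limits, which is not reproduced here.

```python
def pmx(parent1, parent2, cut1, cut2):
    """Partially matched crossover for permutation-encoded chromosomes.
    The segment [cut1:cut2] is copied from parent1; genes outside it come
    from parent2, remapped through the segment's gene pairing so the child
    remains a valid permutation."""
    size = len(parent1)
    child = [None] * size
    child[cut1:cut2] = parent1[cut1:cut2]
    # Pairing between the two parents' genes inside the crossover segment
    mapping = {parent1[i]: parent2[i] for i in range(cut1, cut2)}
    for i in list(range(cut1)) + list(range(cut2, size)):
        gene = parent2[i]
        while gene in child[cut1:cut2]:   # resolve conflicts via the mapping
            gene = mapping[gene]
        child[i] = gene
    return child

# Textbook-style example with cut points after positions 3 and 7
child = pmx([1, 2, 3, 4, 5, 6, 7, 8, 9],
            [9, 3, 7, 8, 2, 6, 5, 1, 4], 3, 7)
```

Because every conflict outside the segment is resolved through the segment's gene pairing, the offspring always remains a permutation, which is what makes PMX suitable for assignment problems like camp-to-center allocation.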
Procedia PDF Downloads 92
18652 Free Vibration and Buckling of Rectangular Plates under Nonuniform In-Plane Edge Shear Loads
Authors: T. H. Young, Y. J. Tsai
Abstract:
A method for determining the stress distribution of a rectangular plate subjected to two pairs of arbitrarily distributed in-plane edge shear loads is proposed, and the free vibration and buckling of such a rectangular plate are investigated in this work. The method utilizes two stress functions to synthesize the stress-resultant field of the plate, with each of the stress functions satisfying the biharmonic compatibility equation. The sum of the stress-resultant fields due to these two stress functions satisfies the boundary conditions at the edges of the plate, from which the two stress functions are determined. Then, the free vibration and buckling of the rectangular plate are investigated by the Galerkin method. Numerical results obtained in this work are compared with those in the literature, and good agreement is observed. Keywords: stress analysis, free vibration, plate buckling, nonuniform in-plane edge shear
Procedia PDF Downloads 160
18651 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method
Authors: Mai Abdul Latif, Yuntian Feng
Abstract:
The Applied Element Method (AEM) was developed to aid in the analysis of the collapse of structures. Currently available methods cannot deal with structural collapse accurately; AEM, however, can simulate the behavior of a structure from an initial unloaded state until collapse. The elements in AEM are connected by sets of normal and shear springs along their edges, which represent the stresses and strains of the element in that region. The elements themselves are rigid, and the material properties are introduced through the spring stiffnesses. Nonlinear dynamic analysis of the progressive collapse of structures has been widely modelled using the finite element method; however, difficulties arise in the presence of excessively deformed elements with cracking or crushing, alongside a high computational cost and difficulty in choosing appropriate material models. The Applied Element Method is developed and coded here to significantly improve accuracy and reduce the computational cost of the analysis. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested; a steel cantilever beam is used as the structural element for the analysis. The first modification of the method distributes the springs according to Gaussian quadrature. Usually, the springs are equally distributed along the face of the element, but it was found that with Gaussian springs only 2 springs were required for perfectly elastic cases, whereas at least 5 equally spaced springs were needed. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification adapts the number of springs depending on the elasticity of the material.
After the first Newton-Raphson iteration, the von Mises stress condition is used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Then transition springs, located exactly between the elastic and plastic regions, are interpolated to strictly identify the elastic and plastic regions in the cross section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region. This improves the computational cost, reducing the minimum number of springs in elasto-plastic cases to only 6. All the work is done in MATLAB, and the results will be compared to finite element models of the structural elements in ANSYS. Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear
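The Gaussian spring placement can be sketched with a Gauss-Legendre quadrature rule. The edge length and spring count below are illustrative, and this Python sketch only mirrors the idea (the paper itself works in MATLAB).

```python
import numpy as np

def gaussian_spring_positions(edge_length, n_springs):
    """Place connecting springs along an element edge at Gauss-Legendre
    quadrature points, mapping them from [-1, 1] onto [0, edge_length].
    The scaled weights integrate exactly over the edge, so each spring's
    weight plays the role of its tributary length."""
    points, weights = np.polynomial.legendre.leggauss(n_springs)
    positions = 0.5 * edge_length * (points + 1.0)
    scaled_weights = 0.5 * edge_length * weights
    return positions, scaled_weights

# Illustrative edge of length 2 with the 2 springs sufficient for the elastic case
pos, w = gaussian_spring_positions(edge_length=2.0, n_springs=2)
```

With only 2 points, Gauss-Legendre quadrature integrates cubic stress distributions exactly along the edge, which is consistent with so few Gaussian springs sufficing for the perfectly elastic case.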
Procedia PDF Downloads 226
18650 Geophysical Exploration of Aquifer Zones by (Ves) Method at Ayma-Kharagpur, District Paschim Midnapore, West Bengal
Authors: Mayank Sharma
Abstract:
Groundwater has been a matter of great concern in past years due to the depletion of the water table resulting from the over-exploitation of groundwater resources. Sub-surface exploration is an effective way to identify the groundwater potential of an area. Thus, in order to meet the water needs for irrigation in the study area, a tube well needed to be installed, and a geophysical investigation was carried out to find the most suitable drilling point for sinking a tube well that encounters an aquifer. An electrical resistivity survey was used to delineate the aquifer zones of the area, and the Vertical Electrical Sounding (VES) method was employed to determine the subsurface geology. Seven vertical electrical soundings using the Schlumberger electrode array, with a maximum AB electrode separation of 700 m, were carried out at selected points in Ayma, Kharagpur-1 block of Paschim Midnapore district, West Bengal. The VES was done using an IGIS DDR3 resistivity meter to an approximate depth of 160-180 m. The data were interpreted, processed, and analyzed. Based on the interpretations using the direct method, the geology of the area at the sounding points was established: two deeper clay-sand sections exist in the area at depths of 50-70 m (with a resistivity range of 40-60 ohm-m) and 70-160 m (with a resistivity range of 25-35 ohm-m). These aquifers will provide a high yield of water, sufficient for the desired irrigation in the study area. Keywords: VES method, Schlumberger method, electrical resistivity survey, geophysical exploration
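For reference, the apparent resistivity measured with a Schlumberger array follows from the standard geometric factor K = π((AB/2)² − (MN/2)²)/MN. A minimal sketch with illustrative (hypothetical) electrode spacings and readings, not values from this survey:

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn, delta_v, current):
    """Apparent resistivity (ohm-m) for a Schlumberger array: ab_half is the
    current electrode half-spacing AB/2 (m), mn the potential electrode
    separation (m), delta_v the measured potential difference (V), and
    current the injected current (A)."""
    k = math.pi * (ab_half ** 2 - (mn / 2.0) ** 2) / mn   # geometric factor
    return k * delta_v / current

# Hypothetical field reading at one AB/2 station
rho_a = schlumberger_apparent_resistivity(
    ab_half=100.0, mn=10.0, delta_v=0.05, current=0.5)
```

A sounding curve is built by repeating this calculation while expanding AB/2 (up to 350 m here, given the 700 m maximum AB separation), and the curve is then inverted for layer resistivities and thicknesses.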
Procedia PDF Downloads 202
18649 A Randomized Active Controlled Clinical Trial to Assess Clinical Efficacy and Safety of Tapentadol Nasal Spray in Moderate to Severe Post-Surgical Pain
Authors: Kamal Tolani, Sandeep Kumar, Rohit Luthra, Ankit Dadhania, Krishnaprasad K., Ram Gupta, Deepa Joshi
Abstract:
Background: Post-operative analgesia remains a clinical challenge, with central and peripheral sensitization playing a pivotal role in treatment-related complications and impaired quality of life. Centrally acting opioids offer a poor risk-benefit profile, with increased intensity of gastrointestinal or central side effects and slow onset of clinical analgesia. The objective of this study was to assess the clinical feasibility of induction and maintenance therapy with Tapentadol Nasal Spray (NS) in moderate to severe acute post-operative pain. Methods: Phase III, randomized, active-controlled, non-inferiority clinical trial involving 294 cases who had undergone surgical procedures under general or regional anesthesia. Post-surgery, patients were randomized to receive either Tapentadol NS 45 mg or Tramadol IV, given as a 100 mg bolus with subsequent 50 mg or 100 mg doses over 2-3 minutes. The NS was administered every 4-6 hours. At the end of 24 hours, patients in the tramadol group with a pain intensity score of ≥4 were switched to oral tramadol immediate-release 100 mg capsules until the pain intensity score fell below 4. All patients who had achieved a pain intensity score of ≤4 were shifted to a lower dose of either Tapentadol NS 22.5 mg or oral tramadol immediate-release 50 mg capsules. The statistical analysis plan was designed as a non-inferiority comparison with tramadol for pain intensity difference at 60 minutes (PID60min), sum of pain intensity differences at 60 minutes (SPID60min), and Physician Global Assessment at 24 hours (PGA24hrs). Results: The per-protocol analyses involved 255 hospitalized cases undergoing surgical procedures. The median age of patients was 38.0 years. For the primary efficacy variables, Tapentadol NS was non-inferior to Inj/Oral Tramadol in relieving moderate to severe post-operative pain. On the basis of SPID60min, no clinically significant difference was observed between Tapentadol NS and Tramadol IV (1.73 ± 2.24 vs. 1.64 ± 1.92, -0.09 [95% CI, -0.43, 0.60]). On the co-primary endpoint PGA24hrs, Tapentadol NS was non-inferior to Tramadol IV (2.12 ± 0.707 vs. 2.02 ± 0.704, -0.11 [95% CI, -0.07, 0.28]). However, on further assessment at 48, 72, and 120 hours, clinically superior pain relief was observed with the Tapentadol NS formulation, statistically significant (p < 0.05) at each of the time intervals. Secondary efficacy measures, including the onset of clinical analgesia and TOTPAR, also showed non-inferiority to tramadol. The safety profile and need for rescue medication were similar in both groups during the treatment period. The most common concomitant medications were anti-bacterials (98.3%). Conclusion: Tapentadol NS is a clinically feasible option for improved compliance as induction and maintenance therapy, offering a sustained and persistent patient response that is clinically meaningful in post-surgical settings. Keywords: tapentadol nasal spray, acute pain, tramadol, post-operative pain
Procedia PDF Downloads 255
18648 Towards a Framework for Evaluating Scientific Efficiency of World-Class Universities
Authors: Veljko Jeremic, Milica Kostic Stankovic, Aleksandar Markovic, Milan Martic
Abstract:
Evaluating the efficiency of decision-making units has been elaborated on frequently in numerous publications. In this paper, the theoretical framework for a novel method of Distance Based Analysis (DBA) is presented. In addition, the method is applied to a sample of the ARWU's top 54 universities of the United States; the findings clearly demonstrate that the best-ranked universities are far from also being the most efficient. Keywords: evaluating efficiency, distance based analysis, ranking of universities, ARWU
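The abstract does not spell out the DBA computation, but the general flavour of distance-based ranking can be sketched as a generic distance-to-ideal scheme (this is an illustrative substitute, not the authors' exact DBA formulation; the indicator data are invented):

```python
import numpy as np

def distance_to_ideal_rank(X, benefit):
    """Rank units by Euclidean distance to a synthetic 'ideal' unit.
    X: units x indicators matrix; benefit[j] is True if indicator j is
    better when larger (e.g. output) and False when smaller (e.g. cost)."""
    X = np.asarray(X, dtype=float)
    # Min-max normalize so indicators with different scales are comparable
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    flip = ~np.asarray(benefit)
    Xn[:, flip] = 1.0 - Xn[:, flip]          # lower cost -> closer to 1
    ideal = np.ones(X.shape[1])              # best value on every indicator
    d = np.linalg.norm(Xn - ideal, axis=1)
    return np.argsort(d)                     # indices from best to worst

# Hypothetical universities: [publications, citations, staff cost]
X = [[90, 80, 50],
     [60, 95, 20],
     [30, 30, 40]]
order = distance_to_ideal_rank(X, benefit=[True, True, False])
print(order.tolist())
```

A unit with merely high raw outputs (index 0) can rank behind a cheaper, nearly-as-productive unit (index 1), which is the kind of rank/efficiency divergence the paper reports.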
Procedia PDF Downloads 299
18647 Modeling of Leaks Effects on Transient Dispersed Bubbly Flow
Authors: Mohand Kessal, Rachid Boucetta, Mourad Tikobaini, Mohammed Zamoum
Abstract:
The leakage problem for two-component fluid flow is modeled for a transient one-dimensional homogeneous bubbly flow, taking into account the effect of a leak located at the midpoint of the pipeline. The corresponding three conservation equations are resolved numerically by an improved method of characteristics. The results obtained are explained and discussed in terms of their physical impact on the flow parameters. Keywords: fluid transients, pipelines leaks, method of characteristics, leakage problem
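The characteristic-method machinery referred to here can be illustrated, for the much simpler single-phase frictionless case, with the classic C+/C- compatibility equations plus an orifice-type leak law. All numbers below are hypothetical; the paper's bubbly-flow model solves three coupled conservation equations and is considerably more involved:

```python
import math

G = 9.81          # gravitational acceleration (m/s^2)

def moc_interior(h_left, q_left, h_right, q_right, B):
    """Interior-node update from the C+ and C- compatibility equations
    of the method of characteristics (friction neglected for brevity)."""
    cp = h_left + B * q_left     # C+ characteristic from the upstream node
    cm = h_right - B * q_right   # C- characteristic from the downstream node
    h_new = 0.5 * (cp + cm)
    q_new = (cp - cm) / (2.0 * B)
    return h_new, q_new

def leak_discharge(cd, a_orifice, head):
    """Orifice-type leak outflow at the midpoint node."""
    return cd * a_orifice * math.sqrt(2.0 * G * max(head, 0.0))

a = 1000.0                      # pressure wave speed (m/s), hypothetical
area = 0.05                     # pipe cross-section (m^2), hypothetical
B = a / (G * area)              # characteristic impedance a/(gA)

h, q = moc_interior(h_left=50.0, q_left=0.02, h_right=48.0, q_right=0.02, B=B)
ql = leak_discharge(cd=0.6, a_orifice=1e-4, head=h)
print(round(h, 2), round(q, 5), round(ql, 6))
```

At the actual leak node, continuity forces the upstream and downstream discharges to differ by the leak outflow, which is how the leak's signature enters the transient solution.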
Procedia PDF Downloads 482
18646 Research on the Calculation Method of Smartization Rate of Concrete Structure Building Construction
Authors: Hongyu Ye, Hong Zhang, Minjie Sun, Hongfang Xu
Abstract:
In the context of China's promotion of smart construction and building industrialization, there is a need for evaluation standards for the development of building industrialization based on assembly-type construction; the evaluation of smart construction, however, remains a challenge in the industry's development process. This paper addresses this issue by proposing a calculation and evaluation method for the smartization rate of concrete structure building construction. The study examines the factors of smart equipment application and their impact on costs throughout the process of smart construction design, production, transfer, and construction. Based on this analysis, the paper presents a component-based evaluation method for the smartization rate and introduces calculation methods for assessing the smartization rate of buildings. The paper also proposes a rapid calculation method for determining the smartization rate using Building Information Modeling (BIM) and information expression technology. The proposed research provides a foundation for the swift calculation of the smartization rate based on BIM and information technology, and ultimately aims to promote the development of smart construction and the construction of high-quality buildings in China. Keywords: building industrialization, high quality building, smart construction, smartization rate, component
Procedia PDF Downloads 77
18645 Curcumin-Loaded Pickering Emulsion Stabilized by pH-Induced Self-Aggregated Chitosan Particles for Encapsulating Bioactive Compounds for Food, Flavor/Fragrance, Cosmetics, and Medicine
Authors: Rizwan Ahmed Bhutto, Noor ul ain Hira Bhutto, Mingwei Wang, Shahid Iqbal, Jiang Yi
Abstract:
Curcumin, a natural polyphenolic compound, boasts numerous health benefits; however, its industrial applications are hindered by instability and poor solubility. Encapsulating curcumin in a Pickering emulsion presents a promising strategy to enhance its bioavailability, yet developing an efficient and straightforward method to fabricate a natural emulsifier for Pickering emulsions poses a significant challenge. Chitosan has garnered attention due to its non-toxicity and excellent emulsifying properties. This study prepared four distinct types of self-aggregated chitosan particles using a pH-responsive self-assembling approach. The properties of the aggregated particles were adjusted by pH, degree of deacetylation (DDA), and molecular weight (MW), thereby controlling surface charge, size (ranging from nano to micro and floc), and contact angle. Pickering emulsions were then formulated using these various aggregated particles. As MW and pH increased and DDA decreased, networked structures of the aggregated particles formed, resulting in highly elastic gels that were more resistant to breakdown of the Pickering emulsion at ambient temperature. At elevated temperatures, the kinetic energy of the aggregated particles increased, disrupting hydrogen bonds and potentially transforming the systems from fluids to gels. The Pickering emulsion based on aggregated particles served as a carrier for curcumin encapsulation, and DDA and MW were observed to play crucial roles in regulating drug loading, encapsulation efficiency, and release profile. This research sheds light on selecting suitable chitosan for controlling the release of bioactive compounds in Pickering emulsions, considering factors such as adjustable rheological properties, microstructure, and macrostructure. Furthermore, this study introduces an environmentally friendly and cost-effective synthesis of pH-responsive aggregate particles without the need for high-pressure homogenizers, and underscores the potential of aggregate particles with various MWs and DDAs for encapsulating other bioactive compounds, offering valuable applications in industries including food, flavor/fragrance, cosmetics, and medicine. Keywords: chitosan, molecular weight, rheological properties, curcumin encapsulation
Procedia PDF Downloads 71
18644 Alternative Method of Determining Seismic Loads on Buildings Without Response Spectrum Application
Authors: Razmik Atabekyan, V. Atabekyan
Abstract:
This article discusses a new alternative method for the determination of seismic loads on buildings, based on the resistance of structures to vibration deformations. The basic principles for determining seismic loads by the spectral method were developed in the 1940s and 1950s and have since been refined in pursuit of true assessments of seismic effects. The basis of the existing methods for determining seismic loads is the response spectrum, or the dynamicity coefficient β (norms of the Russian Federation), neither of which is definitively established. To this day there is no single, universal method for the determination of seismic loads, and attempts to apply the norms of different countries yield significant discrepancies between the results. On the other hand, the results of macroseismic surveys of strong earthquakes contradict the principle of calculation based on accelerations: it is well known that on soft soils destruction increases (mainly due to large displacements) even though accelerations decrease. Obviously, seismic impacts are transmitted to the building through the foundation, but paradoxically, the existing methods do not even include foundation data; meanwhile, the acceleration of the foundation of a building can differ several times from the acceleration of the ground. During earthquakes, each building has its own peculiarities of behavior, depending on the interaction between the soil and the foundation, their dynamic characteristics, and many other factors. This paper considers a new, alternative method of determining the seismic loads on buildings without the use of a response spectrum. The main conclusions are: 1) Seismic loads are determined at the foundation level, which leads to redistribution and reduction of seismic loads on structures. 2) The proposed method is universal and allows determination of the seismic loads without the use of a response spectrum or any implicit coefficients. 3) It is possible to take into account important factors such as the strength characteristics of the soil, the size of the foundation, the angle of incidence of the seismic ray, and others. 4) Existing methods can adequately determine the seismic loads on buildings only for the first mode of vibration, under average soil conditions. Keywords: seismic loads, response spectrum, dynamic characteristics of buildings, momentum
Procedia PDF Downloads 507
18643 A Sufficient Fuzzy Controller for Improving the Transient Response in Electric Motors
Authors: Aliasghar Baziar, Hassan Masoumi, Alireza Ale Saadi
Abstract:
The control of the response of electric motors plays a significant role in the damping of transient responses. In this regard, this paper presents a static VAR compensator (SVC) based on fuzzy logic, applied to an industrial power network consisting of three-phase synchronous, asynchronous, and DC motor loads. The speed and acceleration variations of a specific machine are the inputs of the proposed fuzzy logic controller (FLC). In order to verify the effectiveness and proficiency of the proposed fuzzy-logic-based SVC (FLSVC), several non-linear time-domain digital simulation tests were performed. The proposed fuzzy model properly controls the response of the electric motors, and the results show that the FLSVC improves the voltage profile significantly over a wide range of operating conditions and disturbances, thus improving the overall dynamic performance of the network. Keywords: fuzzy logic controller, VAR compensator, single cage asynchronous motor, DC motor
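A two-input fuzzy controller of the kind described (machine speed deviation and acceleration in, compensator command out) can be sketched as a toy Mamdani-style inference. The membership functions, rule base, and output centroids below are illustrative assumptions, not the paper's tuned FLC:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_svc_output(speed_dev, accel):
    """Toy two-input controller: normalized speed deviation and acceleration
    map to an SVC susceptance command in [-1, 1] p.u."""
    # Fuzzification: negative / zero / positive sets on each input
    neg_s, zer_s, pos_s = tri(speed_dev, -2, -1, 0), tri(speed_dev, -1, 0, 1), tri(speed_dev, 0, 1, 2)
    neg_a, zer_a, pos_a = tri(accel, -2, -1, 0), tri(accel, -1, 0, 1), tri(accel, 0, 1, 2)
    # Rule base (min for AND); each rule fires a crisp output centroid
    rules = [
        (min(pos_s, pos_a), 1.0),    # strong capacitive support
        (min(zer_s, zer_a), 0.0),    # no correction needed
        (min(neg_s, neg_a), -1.0),   # strong inductive correction
        (min(pos_s, zer_a), 0.5),
        (min(neg_s, zer_a), -0.5),
    ]
    # Weighted-average (centroid) defuzzification
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(fuzzy_svc_output(0.0, 0.0))   # balanced operating point
```

The controller output would then set the SVC susceptance, injecting or absorbing reactive power to damp the voltage excursion.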
Procedia PDF Downloads 632
18642 Using Non-Negative Matrix Factorization Based on Satellite Imagery for the Collection of Agricultural Statistics
Authors: Benyelles Zakaria, Yousfi Djaafar, Karoui Moussa Sofiane
Abstract:
Agriculture is fundamental and remains an important objective in the Algerian economy; based on traditional techniques and structures, it generally serves consumption. The collection of agricultural statistics in Algeria is done using traditional methods, which consist of investigating land use through surveys and field visits. These statistics suffer from problems such as poor data quality, the long delay between collection and final availability, and high cost compared to their limited use. The objective of this work is to develop a processing chain for a reliable inventory of agricultural land by developing and implementing a new method of extracting information. This methodology allowed us to combine remote sensing data and field data to collect statistics on the areas of different land types. The contribution of remote sensing to the improvement of agricultural statistics, in terms of area, has been studied in the wilaya of Sidi Bel Abbes. In this context, we applied a method for extracting information from satellite images called non-negative matrix factorization (NMF), which does not consider the pixel as a single entity but instead looks for the components that make up the pixel itself. The results obtained by applying NMF were compared with field data and with the results of the maximum likelihood method, and showed close agreement between the most important NMF results and the field data. We believe that this method of extracting information from satellite data leads to interesting results for the different types of land use. Keywords: blind source separation, hyper-spectral image, non-negative matrix factorization, remote sensing
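The idea of decomposing each pixel into components can be sketched with plain multiplicative-update NMF. The synthetic "endmember" spectra and abundances below are invented for illustration, and this is the textbook Lee-Seung update, not the authors' processing chain:

```python
import numpy as np

def nmf(V, r, iters=2000, seed=0):
    """Multiplicative-update NMF: V ~ W @ H with non-negative factors.
    For a hyperspectral image, columns of V are pixel spectra, W holds
    endmember spectra, and H their per-pixel abundances."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-9                         # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic "image": 50 pixels, 4 bands, mixtures of 2 invented materials.
rng = np.random.default_rng(1)
endmembers = np.array([[1.0, 0.2, 0.0, 0.8],
                       [0.0, 0.9, 1.0, 0.1]]).T      # 4 bands x 2 materials
abund = rng.random((2, 50))                          # 2 materials x 50 pixels
V = endmembers @ abund
W, H = nmf(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # relative residual
print(round(float(err), 4))
```

Summing the recovered abundances in H per material, scaled by pixel area, is the step that turns such a decomposition into area statistics.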
Procedia PDF Downloads 426
18641 An In-Depth Analysis of the Implementation of 'I SMILE Happy Classroom' to Achieve Ideological and Political Integration
Authors: Jinhuang Zhang
Abstract:
This study focuses on traditional English courses in the context of globalization. The basic methodology involves the application of the "I SMILE Happy Classroom" teaching approach. The major findings reveal that, compared to traditional courses, which suffer from a lack of ideological and political integration, this method successfully incorporates ideological and political elements into the teaching content. It transforms the classroom into a student-centered, interactive, engaging, and responsive environment with a high degree of ideological and political integration. In conclusion, the "I SMILE Happy Classroom" teaching method shows great potential for addressing the pain points of traditional English courses and enhancing the quality and effectiveness of English teaching with respect to ideological and political integration. Keywords: English course, ideological and political elements, "I SMILE Happy Classroom" teaching method, teaching pain points
Procedia PDF Downloads 10
18640 A Machining Method of Cross-Shape Nano Channel and Experiments for Silicon Substrate
Authors: Zone-Ching Lin, Hao-Yuan Jheng, Zih-Wun Jhang
Abstract:
The paper innovatively proposes using the concept of specific down force energy (SDFE) and an AFM machine to establish a method of machining a cross-shape nanochannel on a single-crystal silicon substrate. To machine a cross-shape nanochannel with the AFM machine, the paper develops a method of machining the cross-shape nanochannel groove at a fixed down force, using SDFE theory combined with a planned cutting path up to the 5th machining layer; it finally achieves a cross-shape nanochannel at a cutting depth of around 20 nm. Since standing burrs may form at the machined nanochannel edges, a smaller down force is used to cut the edges of the cross-shape nanochannel in order to lower the height of the standing burrs and keep it below the 0.54 nm limit set by the paper. Finally, the paper conducts experiments machining cross-shape nanochannel grooves on single-crystal silicon with the AFM probe and compares the simulation and experimental results, proving that the proposed machining method for cross-shape nanochannels is feasible. Keywords: atomic force microscopy (AFM), cross-shape nanochannel, silicon substrate, specific down force energy (SDFE)
Procedia PDF Downloads 376
18639 Subarray Based Multiuser Massive MIMO Design Adopting Large Transmit and Receive Arrays
Authors: Tetsiki Taniguchi, Yoshio Karasawa
Abstract:
This paper describes a subarray-based, computationally inexpensive design method for a multiuser massive multiple input multiple output (MIMO) system. In our previous works, the use of a large array was assumed only at the transmitter, but this study considers the case in which both the transmitter and receiver sides are equipped with large array antennas. To this end, the receive arrays are also divided into several subarrays, and the formerly proposed method is modified for the synthesis of a large array from subarrays at both ends. Computer simulations verify that the performance of the proposed method is somewhat degraded compared with the original approach, but that it achieves an improvement in complexity, namely a significant reduction of the computational load to a practical level. Keywords: large array, massive multiple input multiple output (MIMO), multiuser, singular value decomposition, subarray, zero forcing
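The zero-forcing step named in the keywords can be sketched in a few lines: a generic narrowband ZF precoder on a random channel, not the authors' subarray synthesis, with hypothetical user and antenna counts:

```python
import numpy as np

def zf_precoder(H):
    """Zero-forcing precoder W = H^H (H H^H)^{-1} for a flat channel matrix
    H (users x tx antennas): H @ W equals the identity, so each user sees
    only its own stream and inter-user interference is nulled."""
    return H.conj().T @ np.linalg.inv(H @ H.conj().T)

rng = np.random.default_rng(0)
K, Nt = 4, 16      # 4 single-antenna users, hypothetical 16-antenna array
# i.i.d. complex Gaussian (Rayleigh) channel, unit average power per entry
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
W = zf_precoder(H)
E = H @ W
off = np.abs(E - np.eye(K)).max()   # interference residual
print(off < 1e-10)
```

In a subarray design, the same kind of linear processing is applied per subarray (often after an SVD of the subarray channel), trading some performance for the large drop in matrix dimensions, and hence computation, that the abstract reports.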
Procedia PDF Downloads 404