Search results for: POI extraction method
17335 Comparison of Analytical Method and Software for Analysis of Flat Slab Subjected to Various Parametric Loadings
Authors: Hema V. Vanar, R. K. Soni, N. D. Shah
Abstract:
Slabs supported directly on columns without beams are known as flat slabs. Flat slabs are highly versatile elements widely used in construction, providing minimum depth, fast construction and allowing flexible column grids. The main objective of this thesis is the comparison of an analytical method and software for the analysis of a flat slab subjected to various parametric loadings. The study presents the analysis of a flat slab performed under different types of gravity loading. Keywords: flat slab, parametric load, analysis, software
Procedia PDF Downloads 497
17334 Enhancing the Network Security with Gray Code
Authors: Thomas Adi Purnomo Sidhi
Abstract:
Nowadays, the network is an essential need in almost every part of human daily activities. People can now seamlessly connect to others through the Internet. With advanced technology, our personal data can now be more easily accessed. One of the main concerns in delivering the best network is security. This paper proposes a method that provides more options for security. This research aims to improve network security by focusing on the physical layer, which is the first layer of the OSI model. The layer consists of the basic networking hardware transmission technologies of a network. Using an observation method, the research produces a schematic design for enhancing network security through a gray code converter. Keywords: network, network security, gray code, physical layer
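As an illustration of the conversion the abstract builds on, the sketch below implements plain binary-to-Gray encoding and decoding; it is a generic textbook routine, not the authors' schematic converter design.

```python
def binary_to_gray(n: int) -> int:
    """Convert an unsigned integer to its reflected binary (Gray) code."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Recover the original unsigned integer from its Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

if __name__ == "__main__":
    for value in range(8):
        gray = binary_to_gray(value)
        assert gray_to_binary(gray) == value
        print(f"{value:03b} -> {gray:03b}")
```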
Procedia PDF Downloads 508
17333 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance
Authors: Rajinder Singh, Ram Valluru
Abstract:
The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium to longer term liabilities. The relative strengths and weaknesses among various alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement/disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, the approach generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their life-time development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during the development period spanning over many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes as compared to the traditional CL and BF methods. Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility
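The parametrized development curve can be illustrated with a brief sketch: cumulative losses of a single cohort are fitted with a sigmoidal (logistic) function by nonlinear least squares and extrapolated to ultimate. The data and the three-parameter form below are assumptions for illustration only and do not reproduce the authors' model.

```python
# Illustrative sketch: fit a sigmoidal (logistic) loss-development curve to the
# cumulative losses of one cohort and project the ultimate loss.
# The data and the three-parameter logistic form are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic_curve(t, ultimate, rate, midpoint):
    """Cumulative loss at development age t, saturating at 'ultimate'."""
    return ultimate / (1.0 + np.exp(-rate * (t - midpoint)))

# Hypothetical cumulative reported losses by development period for one cohort
dev_age = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_loss = np.array([12, 30, 58, 90, 118, 136, 147, 152], dtype=float)

params, _ = curve_fit(logistic_curve, dev_age, cum_loss, p0=[160.0, 0.8, 4.0])
ultimate, rate, midpoint = params
print(f"Projected ultimate loss: {ultimate:.1f}")
print(f"Unreported reserve at age 8: {ultimate - cum_loss[-1]:.1f}")
```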
Procedia PDF Downloads 135
17332 Digital Retinal Images: Background and Damaged Areas Segmentation
Authors: Eman A. Gani, Loay E. George, Faisel G. Mohammed, Kamal H. Sager
Abstract:
Digital retinal images are well suited for automatic diabetic retinopathy screening systems. Unfortunately, a significant percentage of these images are of poor quality, which hinders further analysis, due to many factors (such as patient movement, inadequate or non-uniform illumination, acquisition angle and retinal pigmentation). Retinal images of poor quality need to be enhanced before the extraction of features and abnormalities. The segmentation of the retinal image is therefore essential for this purpose: segmentation is employed to smooth and strengthen the image by separating the background and damaged areas from the overall image, thus resulting in retinal image enhancement and less processing time. In this paper, methods for segmenting colored retinal images are proposed to improve the quality of retinal image diagnosis. The methods generate two segmentation masks, i.e., a background segmentation mask for extracting the background area and a poor quality mask for removing the noisy areas from the retinal image. The standard retinal image databases DIARETDB0, DIARETDB1, STARE, DRIVE and some images obtained from ophthalmologists have been used to test the validity of the proposed segmentation technique. Experimental results indicate that the introduced methods are effective and can lead to high segmentation accuracy. Keywords: retinal images, fundus images, diabetic retinopathy, background segmentation, damaged areas segmentation
Procedia PDF Downloads 407
17331 STC Parameters versus Real Time Measured Parameters to Determine Cost Effectiveness of PV Panels
Authors: V. E. Selaule, R. M. Schoeman, H. C. Z. Pienaar
Abstract:
Research has shown that solar energy is the renewable energy resource with the most potential when compared to other renewable energy resources in South Africa. There are many makes of photovoltaic (PV) panels on the market, and it is difficult to assess which to use. PV panel manufacturers use Standard Test Conditions (STC) to rate their PV panels. STC conditions are different from the actual operating environmental conditions where the PV panels are used. This paper describes a practical method to determine the most cost-effective available PV panel. The method shows that PV panel manufacturer STC ratings cannot be used to select a cost-effective PV panel. Keywords: PV orientation, PV panel, PV STC, solar energy
Procedia PDF Downloads 476
17330 Maintaining User-Level Security in Short Message Service
Authors: T. Arudchelvam, W. W. E. N. Fernando
Abstract:
The mobile phone has become an essential part of our lives. Therefore, security is the most important thing to be considered in mobile communication. The short message service is the cheapest way of communication via mobile phones. Therefore, security is very important in the short message service as well. This paper presents a method to maintain security at the user level. Different types of encryption methods are used to implement user-level security in mobile phones. The Caesar cipher, Rail Fence, Vigenere cipher and RSA are used as encryption methods in this work. The Caesar cipher and Rail Fence methods are enhanced and implemented. The beauty of this work is that the user can select the encryption method and the key. Therefore, by changing the encryption method and the key from time to time, the user can ensure the security of messages. With this work, while users can safely send/receive messages, they can also protect the information in their own mobile phones from unauthorised and unwanted people. Keywords: SMS, user level security, encryption, decryption, short message service, mobile communication
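The two enhanced ciphers are not specified in the abstract, but their textbook versions can be sketched as follows (generic implementations, not the authors' enhanced variants):

```python
def caesar_encrypt(text: str, key: int) -> str:
    """Shift each letter by 'key' positions; non-letters pass through unchanged."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

def rail_fence_encrypt(text: str, rails: int) -> str:
    """Write characters in a zig-zag across 'rails' rows, then read row by row."""
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for ch in text:
        rows[row].append(ch)
        if row == 0:
            step = 1
        elif row == rails - 1:
            step = -1
        row += step
    return "".join("".join(r) for r in rows)

message = "MEET AT NOON"
print(caesar_encrypt(message, 3))        # PHHW DW QRRQ
print(rail_fence_encrypt(message, 3))
```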
Procedia PDF Downloads 401
17329 Factors Associated with Weight Loss Maintenance after an Intervention Program
Authors: Filipa Cortez, Vanessa Pereira
Abstract:
Introduction: The main challenge of obesity treatment is long-term weight loss maintenance. The 3 phases method is a weight loss program that combines a low-carb and moderately high-protein diet, food supplements and a weekly one-to-one consultation with a certified nutritionist. Sustained weight control is the ultimate goal of phase 3. The success criterion was a minimum loss of 10% of initial weight and its maintenance after 12 months. Objective: The aim of this study was to identify factors associated with successful weight loss maintenance 12 months after the end of the 3 phases method. Methods: The study included 199 subjects who achieved their weight loss goal (phase 3). Weight and body mass index (BMI) were obtained at baseline and every week until the end of the program. Therapeutic adherence was measured weekly on a Likert scale from 1 to 5. Subjects were considered in compliance with the nutritional recommendation and supplementation when their classification was ≥ 4. After 12 months of the method, the current weight and the number of previous weight-loss attempts were collected by telephone interview. Statistical significance was assumed at p-values < 0.05. Statistical analyses were performed using SPSS software v.21. Results: 65.3% of subjects met the success criterion. The factors that displayed a significant prediction of weight loss maintenance were: a greater initial percentage weight loss (OR=1.44) during the weight loss intervention and a higher number of consultations in phase 3 (OR=1.10). Conclusion: These findings suggest that the percentage weight loss during the weight loss intervention and the number of consultations in phase 3 may facilitate the maintenance of weight loss after the 3 phases method. Keywords: obesity, weight maintenance, low-carbohydrate diet, dietary supplements
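For reference, the reported odds ratios have the usual logistic-regression interpretation (a standard relation, not specific to this program):

\[
\log\frac{p}{1-p}=\beta_0+\sum_j \beta_j x_j, \qquad \mathrm{OR}_j = e^{\beta_j},
\]

so OR = 1.44 means that each additional unit of initial percentage weight loss multiplies the odds of successful maintenance by about 1.44, and OR = 1.10 corresponds to a 10% increase in the odds per additional phase 3 consultation, all else being equal.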
Procedia PDF Downloads 154
17328 Use of Quasi-3D Inversion of VES Data Based on Lateral Constraints to Characterize the Aquifer and Mining Sites of an Area Located in the North-East of Figuil, North Cameroon
Authors: Fofie Kokea Ariane Darolle, Gouet Daniel Hervé, Koumetio Fidèle, Yemele David
Abstract:
The electrical resistivity method is successfully used in this paper in order to obtain a clearer picture of the subsurface of the North-East of Figuil in northern Cameroon. It is worth noting that this method is most often used when the objective of the study is to image the shallow subsoils by considering them as a set of stratified ground layers. The problem to be solved is very often environmental, and in this case, it is necessary to perform an inversion of the data in order to have a complete and accurate picture of the parameters of the said layers. In the case of this work, thirty-three (33) Schlumberger VES have been carried out on an irregular grid to investigate the subsurface of the study area. The 1D inversion, applied as a preliminary modeling tool and in correlation with the mechanical drilling results, indicates a complex subsurface lithology distribution mainly consisting of marbles and schists. Moreover, the quasi-3D inversion with lateral constraints shows that the misfit between the observed field data and the model response is quite good and acceptable, with a value lower than 10%. The method also reveals the existence of two water-bearing formations in the considered area. The first is the schist or weathering aquifer (unsuitable), and the other is the marble or fracturing aquifer (suitable). The final quasi-3D inversion results and geological models indicate proper sites for groundwater prospecting and for mining exploitation, thus allowing the economic development of the study area. Keywords: electrical resistivity method, 1D inversion, quasi 3D inversion, groundwaters, mining
Procedia PDF Downloads 159
17327 Laban Movement Analysis Using Kinect
Authors: Bernstein Ran, Shafir Tal, Tsachor Rachelle, Studd Karen, Schuster Assaf
Abstract:
Laban Movement Analysis (LMA), developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Many applications that use motion capture data might be significantly leveraged if the Laban qualities are recognized automatically. This paper presents an automated recognition method for Laban qualities from motion capture skeletal recordings, demonstrated on the output of Microsoft’s Kinect V2 sensor. Keywords: Laban movement analysis, multitask learning, Kinect sensor, machine learning
Procedia PDF Downloads 345
17326 Potential of Mineral Composition Reconstruction for Monitoring the Performance of an Iron Ore Concentration Plant
Authors: Maryam Sadeghi, Claude Bazin, Daniel Hodouin, Laura Perez Barnuevo
Abstract:
The performance of a separation process is usually evaluated using performance indices calculated from elemental assays readily available from the chemical analysis laboratory. However, the separation process performance is essentially related to the properties of the minerals that carry the elements and not to those of the elements. Since elements or metals can be carried by both valuable and gangue minerals in the ore, and each mineral responds differently to a mineral processing method, the use of elemental assays alone could lead to erroneous or uncertain conclusions on the process performance. This paper discusses the advantages of using performance indices calculated from mineral content, such as mineral recovery, for process performance assessments. A method is presented that uses elemental assays to estimate the mineral content of the solids in various process streams. The method combines the stoichiometric composition of the minerals and constraints of mass conservation for the minerals through the concentration process to estimate the mineral content from elemental assays. The advantage of assessing a concentration process using mineral-based performance indices is illustrated for an iron ore concentration circuit. Keywords: data reconciliation, iron ore concentration, mineral composition, process performance assessment
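A minimal sketch of the core estimation step (mineral fractions from elemental assays via the minerals' stoichiometric element contents) is given below; the mineral set, grades and assays are hypothetical, and the full data reconciliation through the circuit described in the paper is not reproduced.

```python
# Illustrative sketch: estimate mineral mass fractions x from elemental assays b,
# using each mineral's stoichiometric element content (matrix A) and a closure
# (mass-balance) constraint. Minerals, grades and assays below are hypothetical.
import numpy as np
from scipy.optimize import nnls

# Columns: hematite (Fe2O3), quartz (SiO2), gangue silicate.
# Rows: %Fe in each mineral, %Si in each mineral, closure (fractions sum to 1).
A = np.array([
    [69.9,  0.0, 10.0],
    [ 0.0, 46.7, 25.0],
    [ 1.0,  1.0,  1.0],
])
b = np.array([37.0, 19.0, 1.0])   # measured %Fe, %Si, and closure

# Non-negative least squares keeps the estimated mineral fractions physical.
fractions, residual = nnls(A, b)
for name, x in zip(["hematite", "quartz", "gangue silicate"], fractions):
    print(f"{name}: {100 * x:.1f} wt%")
```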
Procedia PDF Downloads 224
17325 Three-Dimensional Computer Graphical Demonstration of Calcified Tissue and Its Clinical Significance
Authors: Itsuo Yokoyama, Rikako Kikuti, Miti Sekikawa, Tosinori Asai, Sarai Tsuyoshi
Abstract:
Introduction: Vascular access for hemodialysis therapy is often difficult, even for experienced medical personnel. Ultrasound-guided needle placement has been performed occasionally but is not always helpful in certain cases with complicated vascular anatomy. Obtaining precise anatomical knowledge of the vascular structure is important to prevent access-related complications. With an augmented reality (AR) device such as AR glasses, the virtual vascular structure is shown superimposed on the actual patient vessels, thus enabling the operator to maneuver catheter placement easily with both hands free. We herein report our method of AR-guided vascular access in dialysis treatment. Methods: A three-dimensional (3D) object of the arm with an arteriovenous fistula is created by computer graphics with 3D software from the data obtained by computed tomography, ultrasound echogram, and image scanner. The 3D vascular object thus created is viewed on the screen of the AR digital display device (such as AR glasses or an iPad). The picture of the vascular anatomical structure becomes visible, superimposed over the real patient’s arm, thereby allowing the needle insertion to be performed easily under the guidance of AR visualization. By this method, the technical difficulty in catheter placement for dialysis can be lessened and the procedure performed safely. Considerations: Virtual reality technology has been applied in various fields, and medical use is no exception. Yet AR devices have not been widely used among medical professionals. Visualization of the virtual vascular object can be achieved by the creation of an accurate three-dimensional object with the help of computer graphical techniques. Although our experience is limited, this method is applicable with relative ease, and our accumulating evidence has suggested that our method of vascular access with the use of AR can be promising. Keywords: abdominal-aorta, calcification, extraskeletal, dialysis, computer graphics, 3DCG, CT, calcium, phosphorus
Procedia PDF Downloads 168
17324 Dynamic Test for Sway-Mode Buckling of Columns
Authors: Boris Blostotsky, Elia Efraim
Abstract:
Testing of columns in sway mode is performed in order to determine the maximal allowable load, limited by plastic deformations of the columns or their end connections, and the critical load, limited by column stability. The motivation to determine an accurate value of the critical force arises from its use as follows: - the critical load is the maximal allowable load for a given column configuration and can be used as a criterion of perfection; - it is used in calculations prescribed by standards for the design of structural elements under the combined action of compression and bending; - it is used for verification of the theoretical analysis of stability at various end conditions of columns. In the present work, a new non-destructive method for the determination of a column's critical buckling load in sway mode is proposed. The method allows performing measurements during tests under loads that exceed the column's critical load without losing its stability. The possibility of such loading is achieved by the structure of the loading system. The system is constructed as a frame with a rigid girder, where one of the columns is the tested column and the other is an additional two-hinged strut. Loading of the frame is carried out by a flexible traction element attached to the girder. The load applied on the tested column can achieve values that exceed the critical load by the choice of parameters of the traction element and the additional strut. The system lateral stiffness and the column critical load are obtained by the dynamic method. The experiment planning and the comparison between the experimental and theoretical values were performed based on the developed dependency of the lateral stiffness of the system on the vertical load, taking into account the semi-rigid connections of the column's ends. The agreement between the obtained results was established. The method can be used for testing real full-size columns in industrial conditions. Keywords: buckling, columns, dynamic method, semi-rigid connections, sway mode
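A simplified illustration of the dynamic estimate of the critical load is sketched below. It assumes the commonly used linear approximation k(P) ≈ k0(1 − P/Pcr) between identified lateral stiffness and vertical load and extrapolates to zero stiffness; the numbers are hypothetical, and the dependency actually developed by the authors, which accounts for semi-rigid end connections, may differ.

```python
# Illustrative sketch of a vibration-based estimate of the critical load:
# the lateral stiffness k identified from measured sway frequencies is assumed to
# decrease approximately linearly with the vertical load P, k(P) ~ k0*(1 - P/Pcr),
# and Pcr is obtained by extrapolating the fit to k = 0. All numbers are hypothetical.
import numpy as np

P = np.array([10.0, 20.0, 30.0, 40.0, 50.0])        # applied vertical loads, kN
freq = np.array([4.8, 4.4, 3.9, 3.4, 2.7])          # measured sway frequencies, Hz
mass = 250.0                                         # effective girder mass, kg

k = mass * (2.0 * np.pi * freq) ** 2 / 1000.0        # identified lateral stiffness, kN/m
slope, intercept = np.polyfit(P, k, 1)               # linear fit k = intercept + slope*P
P_cr = -intercept / slope                            # load at which stiffness vanishes
print(f"Estimated critical load: {P_cr:.1f} kN")
```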
Procedia PDF Downloads 316
17323 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf
Authors: Abderrazak Bannari, Ghadeer Kadhem
Abstract:
The objectives of this paper are the validation and the evaluation of MBES-CARIS BASE surface data performance for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline and 8,000 km² of marine area, located in the Arabian Gulf, east of Saudi Arabia and west of Qatar (26° 00’ N, 50° 33’ E). To achieve our objectives, bathymetric attributed grid files (X, Y, and depth) generated from the coverage of ship-track MBES data with 300 x 300 m cells, processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO). Then, they were brought into ArcGIS and converted into a raster format following five steps: exportation of the GEBCO BASE surface data to an ASCII file; conversion of the ASCII file to a point shapefile; extraction of the area points covering the water boundary of the Kingdom of Bahrain and multiplication of the depth values by -1 to get negative values. Then, the simple Kriging method was used in the ArcMap environment to generate a new raster bathymetric grid surface of 30×30 m cells, which was the basis of the subsequent analysis. Finally, for validation purposes, 2200 bathymetric points were extracted from a medium scale nautical map (1:100 000) considering different depths over the Bahrain national water boundary. The nautical map was scanned, georeferenced and overlaid on the MBES-CARIS generated raster bathymetric grid surface (step 5 above), and then homologous depth points were selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong correlation coefficient (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical map and the derived MBES-CARIS depths if we consider only the shallow areas with depths of less than 10 m (about 800 validation points). When we consider only deeper areas (> 10 m), the correlation coefficient is equal to 0.73 and the RMSE is equal to ± 2.43 m, while if we consider the totality of 2200 validation points including all depths, the correlation coefficient is still significant (R² = 0.81) with a satisfactory RMSE (± 1.57 m). Certainly, this significant variation can be caused by the MBES, which did not completely cover the bottom in several of the deeper pockmarks because of the rapid change in depth. In addition, steep slopes and the rough seafloor probably affect the acquired MBES raw data. Furthermore, the interpolation of missed area values between MBES acquisition swath lines (ship-tracked sounding data) may not reflect the true depths of these missed areas. However, globally the results of the MBES-CARIS data are very appropriate for bathymetric mapping of shallow water areas. Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water
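The validation statistics quoted above can be computed in a few lines once homologous depth pairs are available; the arrays below are placeholders for the roughly 2200 point pairs used in the study, and the R²/RMSE definitions are the usual ones.

```python
# Illustrative sketch: compute R^2 and RMSE between depths digitized from the
# nautical chart and depths sampled from the MBES-CARIS raster at homologous points.
# The two arrays are placeholders for the validation pairs used in the study.
import numpy as np

chart_depth = np.array([2.1, 4.5, 6.8, 9.3, 12.0, 15.4, 18.2])   # m, from nautical map
mbes_depth  = np.array([2.3, 4.2, 7.1, 9.0, 13.1, 14.2, 19.6])   # m, from kriged raster

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def r_squared(a, b):
    ss_res = np.sum((a - b) ** 2)
    ss_tot = np.sum((a - np.mean(a)) ** 2)
    return float(1.0 - ss_res / ss_tot)

shallow = chart_depth < 10.0                      # split as in the study (< 10 m)
print(f"All points: R^2={r_squared(chart_depth, mbes_depth):.2f}, "
      f"RMSE=±{rmse(chart_depth, mbes_depth):.2f} m")
print(f"Shallow   : R^2={r_squared(chart_depth[shallow], mbes_depth[shallow]):.2f}, "
      f"RMSE=±{rmse(chart_depth[shallow], mbes_depth[shallow]):.2f} m")
```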
Procedia PDF Downloads 384
17322 RBS Characteristic of Cd1−xZnxS Thin Film Fabricated by Vacuum Deposition Method
Authors: N. Dahbi, D. E. Arafah
Abstract:
Cd1−xZnxS thin films have been fabricated from ZnS/CdS/ZnS multilayer thin film systems by using the vacuum deposition method; the Rutherford back-scattering (RBS) technique has been applied in order to determine the structure, composition, depth profile, and stoichiometry of these films. The influence of chemical and heat treatments on the produced films has also been investigated; the RBS spectra of the films showed that homogeneous Cd1−xZnxS can be synthesized with x=0.45. Keywords: Cd1−xZnxS, chemical treatment, depth profile, heat treatment, RBS, RUMP simulation, thin film, vacuum deposition, ZnS/CdS/ZnS
Procedia PDF Downloads 226
17321 Simultaneous Determination of Cefazolin and Cefotaxime in Urine by HPLC
Authors: Rafika Bibi, Khaled Khaladi, Hind Mokran, Mohamed Salah Boukhechem
Abstract:
A high performance liquid chromatographic method with ultraviolet detection at 264 nm was developed and validated for the quantitative determination and separation of cefazolin and cefotaxime in urine. The mobile phase consisted of acetonitrile and phosphate buffer pH 4.2 (15:85, v/v) pumped through an ODB 250 × 4.6 mm, 5 µm column at a flow rate of 1 mL/min with a 20 µL injection loop. Under these conditions, validation showed that the technique is linear in a range of 0.01 to 10 µg/mL with a good correlation coefficient (R > 0.9997); the retention times of cefotaxime and cefazolin were 9.0 and 10.1, respectively. The statistical evaluation of the method was examined by means of within-day (n=6) and day-to-day (n=5) assays and was found to be satisfactory, with high accuracy and precision. Keywords: cefazolin, cefotaxime, HPLC, bioscience, biochemistry, pharmaceutical
Procedia PDF Downloads 366
17320 Verification of the Necessity of Maintenance Anesthesia with Isoflurane after Induction with Tiletamine-Zolazepam in Dogs Using the Dixon's up-and-down Method
Authors: Sonia Lachowska, Agnieszka Antonczyk, Joanna Tunikowska, Pawel Kucharski, Bartlomiej Liszka
Abstract:
Isoflurane is one of the most commonly used anaesthetic gases in veterinary medicine. Due to its numerous side effects, intravenous anaesthesia is more often used. The combination of tiletamine with zolazepam has proved to be safe and pharmacologically beneficial. Analgesic effect, fast induction time, effective myorelaxation, and smooth recovery are the main advantages of this combination of drugs. In the following study, the authors verified the necessity of isoflurane to maintain anaesthesia in dogs after the use of tiletamine-zolazepam for induction. Twelve dogs were selected according to the inclusion criteria: ASA (American Society of Anesthesiologists) I or II. Each dog received premedication intramuscularly with medetomidine-butorphanol (10 μg/kg and 0.1 mg/kg, respectively). Fifteen minutes after premedication, preoxygenation lasting 5 minutes was started. Anaesthesia was induced with tiletamine-zolazepam at a dose of 5 mg/kg. Then the dogs were intubated and anaesthesia was maintained with isoflurane. Initially, MAC (Minimum Alveolar Concentration) was set to 0.7 vol.%. After 15 minutes of equilibration, MAC was determined using Dixon’s up-and-down method. Painful stimulation included compression of the paw pad, phalanx, and groin area, and clamping a Backhaus forceps on the skin. Hemodynamic and ventilation parameters were measured and noted at 2-minute intervals. In this method, the positive or negative response to the noxious stimulus is assessed and then used to determine the concentration of isoflurane for the next patient. The response is only assessed once in each patient. The results show that isoflurane is not necessary to maintain anaesthesia after tiletamine-zolazepam induction. This is clinically important because the side effects resulting from using isoflurane are eliminated. Keywords: anaesthesia, dog, isoflurane, Dixon's up-and-down method, tiletamine, zolazepam
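A schematic sketch of Dixon's up-and-down allocation is given below: each animal's single positive or negative response determines whether the test concentration for the next animal is raised or lowered. The step size, starting concentration, responses and the crude crossover summary are illustrative assumptions, not the study's data or its exact statistical analysis.

```python
# Illustrative sketch of Dixon's up-and-down sequential allocation: after each
# subject, the test concentration for the next subject is raised following a
# positive (movement) response and lowered following a negative response.
# Step size, starting concentration and responses are hypothetical.
STEP = 0.1          # vol% change between consecutive subjects
start = 0.7         # initial end-tidal isoflurane concentration, vol%

# Hypothetical responses for 12 dogs: True = positive response to the stimulus
responses = [False, False, True, False, True, True, False, True, False, True, False, True]

concentrations = [start]
for reacted in responses[:-1]:
    nxt = concentrations[-1] + STEP if reacted else concentrations[-1] - STEP
    concentrations.append(round(nxt, 2))

# A crude summary: mean concentration at the points where the response crossed over
crossovers = [c for c, prev, cur in zip(concentrations[1:], responses, responses[1:])
              if prev != cur]
print("Tested concentrations:", concentrations)
print(f"Crude crossover estimate: {sum(crossovers) / len(crossovers):.2f} vol%")
```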
Procedia PDF Downloads 187
17319 Sol-Gel Derived ZnO Nanostructures: Optical Properties
Authors: Sheo K. Mishra, Rajneesh K. Srivastava, R. K. Shukla
Abstract:
In the present work, we report on the optical properties, including UV-vis absorption and photoluminescence (PL), of ZnO nanostructures synthesized by the sol-gel method. Structural and morphological investigations have been performed by the X-ray diffraction (XRD) method and scanning electron microscopy (SEM). The XRD result confirms the formation of the hexagonal wurtzite phase of the ZnO nanostructures. The presence of various diffraction peaks suggests a polycrystalline nature. The XRD pattern exhibits no additional peak due to by-products such as Zn(OH)2. The average crystallite size of the prepared ZnO sample, corresponding to the maximum intensity peaks, is found to be ~38.22 nm. The SEM micrograph shows different nanostructures of pure ZnO. The photoluminescence (PL) spectrum shows several emission peaks around 353 nm, 382 nm, 419 nm, 441 nm, 483 nm and 522 nm. The obtained results suggest that the prepared phosphors are quite suitable for optoelectronic applications. Keywords: ZnO, sol-gel, XRD, PL
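The quoted average crystallite size is typically obtained from XRD peak broadening via the Scherrer equation (a standard relation, assumed here rather than stated in the abstract):

\[
D = \frac{K\lambda}{\beta\cos\theta},
\]

where D is the crystallite size, K ≈ 0.9 is the shape factor, λ is the X-ray wavelength, β is the full width at half maximum of the diffraction peak in radians, and θ is the Bragg angle.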
Procedia PDF Downloads 406
17318 Prediction and Analysis of Human Transmembrane Transporter Proteins Based on SCM
Authors: Hui-Ling Huang, Tamara Vasylenko, Phasit Charoenkwan, Shih-Hsiang Chiu, Shinn-Ying Ho
Abstract:
The knowledge of human transporters is still limited due to the technically demanding procedure of crystallization required for the structural characterization of transporters by spectroscopic methods. It is desirable to develop bioinformatics tools for the effective analysis of available sequences in order to identify human transmembrane transporter proteins (HMTPs). This study proposes a scoring card method (SCM) based approach for predicting HMTPs. We estimated a set of propensity scores of dipeptides to be HMTPs using SCM from the training dataset (HTS732) consisting of 366 HMTPs and 366 non-HMTPs. SCM, using the estimated propensity scores of 20 amino acids and 400 dipeptides to be HMTPs, has a training accuracy of 87.63% and a test accuracy of 66.46%. The five top-ranked dipeptides include LD, NV, LI, KY, and MN with scores 996, 992, 989, 987, and 985, respectively. The five amino acids with the highest propensity scores are Ile, Phe, Met, Gly, and Leu, showing that hydrophobic residues are mostly highly scored. Furthermore, the obtained propensity scores were used to analyze the physicochemical properties of human transporters. Keywords: dipeptide composition, physicochemical property, human transmembrane transporter proteins, human transmembrane transporters binding propensity, scoring card method
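A minimal sketch of how an SCM-style classifier scores a sequence from dipeptide propensity scores is shown below; the default scores and decision threshold are placeholders, not the card estimated from HTS732 (only the five published top-ranked dipeptide scores are taken from the abstract).

```python
# Illustrative sketch of scoring-card-method (SCM) classification: a protein is
# scored as the weighted mean of the propensity scores of its dipeptides and
# compared with a decision threshold. Scores and threshold are placeholders.
from itertools import product
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Placeholder propensity card: default 500 for every dipeptide, with the five
# top-ranked dipeptides reported in the abstract set to their published scores.
card = {a + b: 500 for a, b in product(AMINO_ACIDS, repeat=2)}
card.update({"LD": 996, "NV": 992, "LI": 989, "KY": 987, "MN": 985})

def scm_score(sequence: str) -> float:
    """Weighted mean of dipeptide propensity scores over the sequence."""
    dipeptides = [sequence[i:i + 2] for i in range(len(sequence) - 1)]
    counts = Counter(dp for dp in dipeptides if dp in card)
    total = sum(counts.values())
    return sum(card[dp] * n for dp, n in counts.items()) / total

THRESHOLD = 520.0   # placeholder decision threshold
seq = "MLDLINVKYAMNLLDV"
score = scm_score(seq)
print(f"SCM score = {score:.1f} -> {'HMTP' if score >= THRESHOLD else 'non-HMTP'}")
```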
Procedia PDF Downloads 371
17317 Frequency Identification of Wiener-Hammerstein Systems
Authors: Brouri Adil, Giri Fouad
Abstract:
The problem of identifying Wiener-Hammerstein systems is addressed in the presence of two linear subsystems of totally unknown structure. Presently, the nonlinear element is allowed to be noninvertible. The system identification problem is dealt with by developing a two-stage frequency identification method such that a set of points of the nonlinearity is estimated first. Then, the frequency gains of the two linear subsystems are determined at a number of frequencies. The method involves Fourier series decomposition and only requires periodic excitation signals. All involved estimators are shown to be consistent. Keywords: Wiener-Hammerstein systems, Fourier series expansions, frequency identification, automation science
Procedia PDF Downloads 538
17316 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method. The simulations confirmed that the proposed method provides precise estimates for the regression parameters. It is important to note that this approach can be applied to datasets if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher information algorithm as an iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters. Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions
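The Fisher-scoring iteration mentioned above can be sketched on a deliberately simple model; here the response is exponential with a log link, whereas the paper's composite (e.g. Weibull-Pareto) likelihood would change the score and information terms but not the structure of the iteration. Data and model are illustrative assumptions.

```python
# Illustrative sketch of maximum likelihood estimation of regression parameters by
# Fisher scoring. The response is modelled as Exponential with mean mu_i = exp(x_i . beta);
# score = X'(y/mu - 1) and expected information = X'X for this simple model.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one predictor
beta_true = np.array([1.0, 0.5])
y = rng.exponential(scale=np.exp(X @ beta_true))         # simulated losses

beta = np.zeros(2)
for _ in range(25):                                       # Fisher scoring iterations
    mu = np.exp(X @ beta)
    score = X.T @ (y / mu - 1.0)                          # gradient of the log-likelihood
    fisher = X.T @ X                                      # expected information matrix
    step = np.linalg.solve(fisher, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-8:
        break

print("MLE of regression parameters:", np.round(beta, 3))
```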
Procedia PDF Downloads 40
17315 Research of Concentratibility of Low Quality Bauxite Raw Materials
Authors: Nadezhda Nikolaeva, Tatyana Alexandrova, Alexandr Alexandrov
Abstract:
Processing of high-silicon bauxite based on the traditional clinkering method involves high power consumption and capital investment, which makes the production of alumina from those ores non-competitive in terms of basic economic indicators. For these reasons, the development of technological solutions that enable the efficient processing of bauxites with various chemical and mineralogical structures at a low level of thermal power consumption is important. The flow sheet of the studies on the washability of ores from the Timanskoe and Severo-Onezhskoe deposits is based on the flotation method. Keywords: low-quality bauxite, resource-saving technology, optimization, aluminum, conditioning of composition, separation characteristics
Procedia PDF Downloads 292
17314 An Improved K-Means Algorithm for Gene Expression Data Clustering
Authors: Billel Kenidra, Mohamed Benmohammed
Abstract:
Data mining techniques used in the field of clustering are a subject of active research and assist in biological pattern recognition and the extraction of new knowledge from raw data. Clustering means the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar among themselves and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to directly decompose the dataset into a set of disjoint clusters, leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice of centers may lead to a local optimum that is quite inferior to the global optimum, we propose a strategy to initialize the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show that the efficiency has been significantly improved. Keywords: microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization
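The abstract does not detail the proposed seeding strategy; as one widely used example of improved initialization, the sketch below implements k-means++-style seeding, which spreads the initial centers by sampling each new center with probability proportional to its squared distance from the nearest center already chosen.

```python
# Illustrative sketch: k-means++-style seeding, one widely used way to initialize
# K-Means centers (not necessarily the authors' specific strategy).
import numpy as np

def kmeanspp_init(data: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Pick k initial centers, each drawn proportionally to squared distance
    from the nearest center already chosen."""
    centers = [data[rng.integers(len(data))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((data - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(data[rng.choice(len(data), p=probs)])
    return np.array(centers)

rng = np.random.default_rng(42)
# Toy "expression" matrix: 300 samples x 2 features drawn from three latent groups
data = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in ([0, 0], [4, 0], [2, 4])])
print(kmeanspp_init(data, k=3, rng=rng))
```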
Procedia PDF Downloads 193
17313 Finite Element Analysis of Oil-Lubricated Elliptical Journal Bearings
Authors: Marco Tulio C. Faria
Abstract:
Fixed-geometry hydrodynamic journal bearings are one of the best supporting systems for several applications of rotating machinery. Cylindrical journal bearings present excellent load-carrying capacity and low manufacturing costs, but they are subject to oil-film instability at high speeds. An attempt at overcoming this instability problem has been the development of non-circular journal bearings. This work deals with an analysis of oil-lubricated elliptical journal bearings using the finite element method. Steady-state and dynamic performance characteristics of elliptical bearings are rendered by zeroth- and first-order lubrication equations obtained through a linearized perturbation method applied to the classical Reynolds equation. Four-node isoparametric rectangular finite elements are employed to model the bearing thin film flow. Curves of elliptical bearing load capacity and dynamic force coefficients are rendered at several operating conditions. The results presented in this work demonstrate the influence of the bearing ellipticity on its performance at different loading conditions. Keywords: elliptical journal bearings, non-circular journal bearings, hydrodynamic bearings, finite element method
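For reference, the classical (incompressible, isoviscous) Reynolds equation from which the zeroth- and first-order lubrication equations are derived can be written as (notation assumed here, not taken from the paper):

\[
\frac{\partial}{\partial x}\!\left(h^{3}\frac{\partial p}{\partial x}\right)
+\frac{\partial}{\partial z}\!\left(h^{3}\frac{\partial p}{\partial z}\right)
= 6\mu U\frac{\partial h}{\partial x} + 12\mu\frac{\partial h}{\partial t},
\]

where p is the film pressure, h the film thickness, μ the lubricant viscosity and U the journal surface speed; the first-order equations follow from perturbing p and h about the static equilibrium position.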
Procedia PDF Downloads 451
17312 Wave Transmitting Boundary in Dynamic Analysis for an Elastoplastic Medium Using the Material Point Method
Authors: Chinh Phuong Do
Abstract:
Dynamic analysis of slopes under seismic conditions requires the elimination of spurious reflections at the bounded domain. This paper studies the performance of wave transmitting boundaries, including the standard viscous boundary and the viscoelastic boundary, within the material point method (MPM) framework. First, analytical derivations of these non-reflecting conditions, particularly for the implicit MPM, are presented. Then, a number of benchmark and geotechnical examples are shown. Overall, the results agree well with the analytical solutions, indicating the ability to accurately simulate the radiation at the bounded domain. Keywords: dynamic analysis, implicit, MPM, non-reflecting boundary
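The standard viscous boundary referred to above is usually written as dashpot tractions proportional to the boundary particle velocities (Lysmer-Kuhlemeyer form, with the notation assumed here):

\[
\sigma_n = \rho\,V_p\,\dot{u}_n, \qquad \tau = \rho\,V_s\,\dot{u}_t,
\]

where ρ is the mass density, V_p and V_s are the P- and S-wave velocities, and \(\dot{u}_n\), \(\dot{u}_t\) are the normal and tangential velocity components at the boundary; the tractions are applied so as to oppose the outgoing motion.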
Procedia PDF Downloads 207
17311 Spectral Quasi Linearization Techniques for the Solution of Time Fractional Diffusion Wave Equations in Boundary Value Problems
Authors: Kizito Ugochukwu Nwajeria
Abstract:
This paper presents a spectral quasi-linearization technique (SQLT) for solving time fractional diffusion wave equations in boundary value problems. The proposed method integrates spectral approximations for spatial derivatives with a quasi-linearization approach to address the nonlinearity introduced by fractional time derivatives. Time fractional differential equations, typically formulated using Caputo or Riemann-Liouville derivatives, model complex phenomena such as anomalous diffusion and wave propagation, which are not captured by classical integer-order models. The SQLT method iteratively linearizes the nonlinear terms at each time step, transforming the original problem into a series of linear subproblems, which can be efficiently solved. Using high-order spectral methods such as Chebyshev or Legendre polynomials for spatial discretization, the technique achieves high accuracy in approximating the solution. A convergence analysis is provided, demonstrating the method's efficiency and establishing error bounds. Numerical experiments on a range of test problems confirm the effectiveness of SQLT in solving fractional diffusion wave equations with various boundary conditions. The method offers a robust framework for addressing time fractional differential equations in diverse fields, including materials science, bioengineering, and anomalous transport phenomena. Keywords: spectral methods, quasilinearization, time-fractional diffusion-wave equations, boundary value problems, fractional calculus
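For reference, the Caputo time-fractional derivative of order α ∈ (1, 2), which appears in diffusion-wave equations, is defined by (standard definition, not restated in the abstract):

\[
{}^{C}D_t^{\alpha}u(t) = \frac{1}{\Gamma(2-\alpha)}\int_0^t (t-s)^{1-\alpha}\,\frac{\partial^2 u(s)}{\partial s^2}\,\mathrm{d}s,
\qquad 1<\alpha<2 .
\]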
Procedia PDF Downloads 17
17310 Design of Compact UWB Multilayered Microstrip Filter with Wide Stopband
Authors: N. Azadi-Tinat, H. Oraizi
Abstract:
The design of a compact UWB multilayered microstrip filter with E-shape resonators is presented, which provides a wide stopband up to 20 GHz and arbitrary impedance matching. The design procedure is developed based on the method of least squares and the theory of N-coupled transmission lines. The dimensions of the designed filter are about 11 mm × 11 mm, and the three E-shape resonators are placed among four dielectric layers. The average insertion loss in the passband is less than 1 dB and in the stopband is about 30 dB up to 20 GHz. Its group delay in the UWB region is about 0.5 ns. The performance of the optimized filter design agrees very well with the results of microwave simulation software. Keywords: method of least square, multilayer microstrip filter, n-coupled transmission lines, ultra-wideband
Procedia PDF Downloads 394
17309 Elastic and Plastic Collision Comparison Using Finite Element Method
Authors: Gustavo Rodrigues, Hans Weber, Larissa Driemeier
Abstract:
The prediction of post-impact conditions and the behavior of the bodies during impact have been the object of several collision models. The formulation from Hertz's theory, dating from the 19th century, is generally used. These models consider the repulsive force as proportional to the deformation of the bodies in contact and may consider it proportional to the rate of deformation. The objective of the present work is to analyze the behavior of the bodies during impact using the Finite Element Method (FEM) with elastic and plastic material models. The main parameters to evaluate are the contact force, the time of contact and the deformation of the bodies. An advantage of using the FEM approach is the possibility of applying a plastic deformation to the model according to the material definition: the Johnson–Cook plasticity model is used, whose parameters are obtained through empirical tests on real materials. This model allows analyzing the permanent deformation caused by impact, a phenomenon observed in the real world depending on the forces applied to the body. These results are compared with each other and with the Hertz-theory-based model. Keywords: collision, impact models, finite element method, Hertz Theory
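For two elastic spheres, Hertz theory gives the repulsive contact force in terms of the indentation δ (a standard result, quoted here for reference):

\[
F = \frac{4}{3}\,E^{*}\sqrt{R^{*}}\,\delta^{3/2},\qquad
\frac{1}{E^{*}} = \frac{1-\nu_1^{2}}{E_1}+\frac{1-\nu_2^{2}}{E_2},\qquad
\frac{1}{R^{*}} = \frac{1}{R_1}+\frac{1}{R_2},
\]

where E_i, ν_i and R_i are the Young's moduli, Poisson's ratios and radii of the two bodies; rate-dependent variants add a dissipative term proportional to the indentation rate.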
Procedia PDF Downloads 177
17308 Report of Happiness in the Iranian Educational System: A Qualitative Research
Authors: Babak Shamshiri, Najme Dastouri
Abstract:
The purpose of this study is to understand the current situation of happiness in the Iranian educational system from the perspective of students, teachers and educational administrators. This research is done within the qualitative paradigm. Data collection is done by the in-depth interview method. Research participants were selected purposively according to sampling rules, with maximum variation and reaching the saturation point. According to most participants in this study, schools in Iran are not usually happy. This lack of happiness is associated with and related to the educational system, curriculum, teaching methods, the physical environment of schools and their facilities. Keywords: happiness, Iran, educational system, qualitative study
Procedia PDF Downloads 234
17307 Hydroxyapatite from Biowaste for the Reinforcement of Polymer
Authors: John O. Akindoyo, M. D. H. Beg, Suriati Binti Ghazali, Nitthiyah Jeyaratnam
Abstract:
Regeneration of bone, due to the many health challenges arising from the traumatic effects of bone loss, bone tumours and other bone infections, is fast becoming indispensable. Over time, several approaches have been undertaken to mitigate this challenge. These include, but are not limited to, xenografts, allografts and autografts, as well as artificial substitutions like bioceramics, synthetic cements and metals. However, most of these techniques often come along with peculiar limitations and problems such as morbidity, availability, disease transmission, collateral site damage or outright rejection by the body, as the case may be. Hydroxyapatite (HA) is very compatible and suitable for this application. However, most of the common methods for HA synthesis are expensive and environmentally unfriendly. Extraction of HA from bio-wastes has been perceived not only to be cost effective, but also environment-friendly. In this research, HA was produced from bio-waste, namely bovine bones, through a combination of hydrothermal chemical processes and ordinary calcination techniques. Characterization of the structure and properties of the HA was carried out through different techniques (such as TGA, FTIR, DSC, XRD and BET). The synthesized HA was found to possess properties similar to stoichiometric HA, with highly desirable thermal, degradation, structural and porous properties. This material is unique for its potentially minimal cost, environmental friendliness and property controllability. It is also perceived to be suitable for tissue and bone engineering applications. Keywords: biomaterial, biopolymer, bone, hydroxyapatite
Procedia PDF Downloads 326
17306 Aquatic Sediment and Honey of Apis mellifera as Bioindicators of Pesticide Residues
Authors: Luana Guerra, Silvio C. Sampaio, Vladimir Pavan Margarido, Ralpho R. Reis
Abstract:
Brazil is the world's largest consumer of pesticides. The excessive use of these compounds has negative impacts on animal and human life, the environment, and food security. Bees, crucial for pollination, are exposed to pesticides during the collection of nectar and pollen, posing risks to their health and the food chain, including honey contamination. Aquatic sediments are also affected, impacting water quality and the microbiota. Therefore, the analysis of aquatic sediments and bee honey is essential to identify environmental contamination and monitor ecosystems. The aim of this study was to use samples of honey from honeybees (Apis mellifera) and aquatic sediment as bioindicators of environmental contamination by pesticides and their relationship with agricultural use in the surrounding areas. The sample collections of sediment and honey were carried out in two stages. The first stage was conducted in the Bituruna municipality region in the second half of the year 2022, and the second stage took place in the regions of Laranjeiras do Sul, Quedas do Iguaçu, and Nova Laranjeiras in the first half of the year 2023. In total, 10 collection points were selected, with 5 points in the first stage and 5 points in the second stage, where one sediment sample and one honey sample were collected for each point, totaling 20 samples. The honey and sediment samples were analyzed at the Laboratory of the Paraná Institute of Technology, with ten samples of honey and ten samples of sediment. The selected extraction method was QuEChERS, and the analysis of the components present in the sample was performed using liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). The pesticides Azoxystrobin, Epoxiconazole, Boscalid, Carbendazim, Haloxifope, Fomesafen, Fipronil, Chlorantraniliprole, Imidacloprid, and Bifenthrin were detected in the sediment samples from the study area in Laranjeiras do Sul, Paraná, with Carbendazim being the compound with the highest concentration (0.47 mg/kg). The honey samples obtained from the apiaries showed satisfactory results, as they did not show any detection or quantification of the analyzed pesticides, except for Point 9, which had the fungicide tebuconazole but with a concentration