Search results for: dental age estimation
747 Increasing Business Competitiveness in Georgia in Terms of Globalization
Authors: Badri Gechbaia, Levan Gvarishvili
Abstract:
Despite the fact that many Georgian scientists have worked on the issue of business competitiveness, we think it is necessary to deepen the work in this sphere: to refine the methodology for estimating business competitiveness, to identify the main factors that define competitive advantages in the business sphere, to establish the interconnections between the level of business competitiveness and the quality of the state's involvement in international economic processes, and to define ways to raise business competitiveness and its role in upgrading the country's economic development. The introduction justifies the relevance of the studied topic and the thesis; it defines the survey subject, object, and goals with relevant objectives; the theoretical-methodological and informational-statistical base for the survey; and what is new in the survey and what the value of its theoretical and practical application is. The aforementioned study is an effort to raise public awareness of this issue. It analyzes the fundamental conditions for the efficient functioning of business in Georgia and identifies reserves for increasing its efficiency based on an assessment of the strengths and weaknesses of the business sector. Methods of system analysis, abstract logic, induction and deduction, synthesis and generalization, and positive, normative, and comparative analysis are used in the research process. Specific regularities of the impact of the globalization process on the determinants of business competitiveness are established. The reasons for business competitiveness in Georgia have been identified.
Keywords: competitiveness, methodology, Georgian, economic
Procedia PDF Downloads 112
746 Different Sampling Schemes for Semi-Parametric Frailty Model
Authors: Nursel Koyuncu, Nihal Ata Tutkun
Abstract:
The frailty model is a survival model that takes into account unobserved heterogeneity when exploring the relationship between the survival of an individual and several covariates. In recent years, proposed survival models have become more complex, and this feature causes convergence problems, especially in large data sets. Therefore, the selection of a sample from these big data sets is very important for parameter estimation. In the sampling literature, some authors have defined new sampling schemes to predict parameters correctly. To this end, we examine the effect of sampling design in the semi-parametric frailty model. We conducted a simulation study in the R programme to estimate the parameters of the semi-parametric frailty model for different sample sizes and censoring rates under classical simple random sampling and ranked set sampling schemes. In the simulation study, we used as the population a data set recording 17,260 male civil servants aged 40-64 years with complete 10-year follow-up. Time to death from coronary heart disease is treated as the survival time, and age and systolic blood pressure are used as covariates. We selected 1000 samples from the population using the different sampling schemes and estimated the parameters. From the simulation study, we concluded that the ranked set sampling design performs better than simple random sampling in each scenario.
Keywords: frailty model, ranked set sampling, efficiency, simple random sampling
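As a rough sketch of the ranked set sampling scheme compared in this study, the snippet below draws an RSS and an SRS of equal total size from a synthetic survival-time population. The set size, cycle count, and Weibull population are illustrative assumptions, not the study's actual data or R code.

```python
import numpy as np

rng = np.random.default_rng(42)

def ranked_set_sample(population, k, cycles, rng):
    """Draw a ranked set sample of size k*cycles.

    For each of the `cycles` rounds, k sets of k units are drawn;
    the i-th order statistic of the i-th set enters the sample.
    Ranking here uses the variable itself (perfect ranking).
    """
    sample = []
    for _ in range(cycles):
        for i in range(k):
            s = rng.choice(population, size=k, replace=False)
            sample.append(np.sort(s)[i])
    return np.array(sample)

# Hypothetical stand-in for the survival-time population
population = rng.weibull(1.5, size=17260) * 10.0

k, cycles = 5, 200          # total sample size 1000, as in the study
rss = ranked_set_sample(population, k, cycles, rng)
srs = rng.choice(population, size=k * cycles, replace=False)

# RSS typically estimates the population mean more efficiently than SRS
print("population mean:", population.mean())
print("RSS estimate   :", rss.mean(), " SRS estimate:", srs.mean())
```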
Procedia PDF Downloads 209
745 Development and Validation of an HPLC Method for 6-Gingerol and 6-Shogaol in Joint Pain Relief Gel Containing Ginger (Zingiber officinale)
Authors: Tanwarat Kajsongkram, Saowalux Rotamporn, Sirinat Limbunruang, Sirinan Thubthimthed.
Abstract:
A High-Performance Liquid Chromatography (HPLC) method was developed and validated for the simultaneous estimation of 6-gingerol (6G) and 6-shogaol (6S) in a joint pain relief gel containing ginger extract. The chromatographic separation was achieved using a C18 column (150 x 4.6 mm i.d., 5 μ, Luna) and a mobile phase containing acetonitrile and water (gradient elution). The flow rate was 1.0 ml/min, and the absorbance was monitored at 282 nm. The proposed method was validated in terms of analytical parameters such as specificity, accuracy, precision, linearity, range, limit of detection (LOD), and limit of quantification (LOQ), determined based on the International Conference on Harmonization (ICH) guidelines. The linearity ranges of 6G and 6S were obtained over 20-60 and 6-18 µg/ml, respectively. Good linearity was observed over the above-mentioned ranges, with linear regression equations Y = 11016x - 23778 for 6G and Y = 19276x - 19604 for 6S (x is the concentration of the analytes in μg/ml and Y is the peak area). The value of the correlation coefficient was found to be 0.9994 for both markers. The LOD and LOQ for 6G were 0.8567 and 2.8555 µg/ml, and for 6S were 0.3672 and 1.2238 µg/ml, respectively. The recovery ranges for 6G and 6S were found to be 91.57 to 102.36% and 84.73 to 92.85% for all three spiked levels. The RSD values from repeated extractions for 6G and 6S were 3.43 and 3.09%, respectively. The validation of the developed method in terms of precision, accuracy, specificity, linearity, and range was also performed, with well-accepted results.
Keywords: ginger, 6-gingerol, HPLC, 6-shogaol
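The reported calibration lines can be used directly to back-calculate concentrations from peak areas; the short sketch below does this and also illustrates the standard ICH 3.3σ/S and 10σ/S rules for LOD and LOQ. The peak area and the residual standard deviation are assumed values for illustration only (the paper reports its LOD/LOQ directly).

```python
# Back-calculating analyte concentration from peak area using the
# reported calibration lines, plus the ICH 3.3*sigma/S and 10*sigma/S rules.
def concentration(peak_area, slope, intercept):
    """Invert Y = slope*x + intercept to get x in µg/ml."""
    return (peak_area - intercept) / slope

# Calibration parameters taken from the abstract
cal = {"6G": (11016.0, -23778.0), "6S": (19276.0, -19604.0)}

area_6g = 400000.0  # hypothetical measured peak area
slope, intercept = cal["6G"]
print("6G conc (µg/ml):", concentration(area_6g, slope, intercept))

sigma = 2860.0      # assumed residual SD of the response, for illustration
print("LOD (µg/ml):", 3.3 * sigma / slope)
print("LOQ (µg/ml):", 10.0 * sigma / slope)
```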
Procedia PDF Downloads 440
744 Studying the Effects of Economic and Financial Development as Well as Institutional Quality on Environmental Destruction in the Upper-Middle Income Countries
Authors: Morteza Raei Dehaghi, Seyed Mohammad Mirhashemi
Abstract:
The current study explored the effect of economic development, financial development, and institutional quality on environmental destruction in upper-middle income countries during the period 1999-2011. The dependent variable is the logarithm of carbon dioxide emissions, which can be considered an index of destruction or quality of the environment given its effects on the environment. Financial development and institutional development variables, as well as some control variables, were considered. In order to study cross-sectional correlation among the countries under study, the Pesaran and Friz tests were used. Since the results of both tests show cross-sectional correlation in the countries under study, the seemingly unrelated regression method was utilized for model estimation. The results disclosed that the Kuznets environmental curve hypothesis is confirmed in upper-middle income countries and also that financial development and institutional quality have a significant effect on environmental quality. The results of this study can be considered by policy makers in countries in different income groups seeking growth accompanied by improved environmental quality.
Keywords: economic development, environmental destruction, financial development, institutional development, seemingly unrelated regression
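A minimal sketch of how the Kuznets-curve test works in practice, assuming a simple quadratic specification on synthetic data (the study itself estimates a seemingly unrelated regression on the actual panel): the inverted-U hypothesis corresponds to a negative coefficient on the squared income term.

```python
import numpy as np

# Regress log CO2 emissions on income and income squared; a negative
# quadratic coefficient supports the inverted-U (EKC) hypothesis.
# Synthetic data stand in for the 1999-2011 upper-middle-income panel.
rng = np.random.default_rng(0)
income = rng.uniform(3_000, 15_000, size=300)            # GDP per capita
x = income / 1e4                                         # rescaled income
log_co2 = 0.9 * x - 0.45 * x**2 + rng.normal(0, 0.05, 300)

b2, b1, b0 = np.polyfit(x, log_co2, deg=2)               # highest degree first
print(f"quadratic term: {b2:.3f} (negative supports the EKC hypothesis)")
print(f"turning point : {-b1 / (2 * b2) * 1e4:.0f} USD per capita")
```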
Procedia PDF Downloads 345
743 Estimation of Probabilistic Fatigue Crack Propagation Models of AZ31 Magnesium Alloys under Various Load Ratio Conditions by Using the Interpolation of a Random Variable
Authors: Seon Soon Choi
Abstract:
The essential purpose is to present a good fatigue crack propagation model describing the stochastic fatigue crack growth behavior of a rolled magnesium alloy, AZ31, under various load ratio conditions. Fatigue crack propagation experiments were carried out in laboratory air under four load ratio (R) conditions using AZ31 to investigate the crack growth behavior. The stochastic fatigue crack growth behavior was analyzed using an interpolation of a random variable, Z, introduced into an empirical fatigue crack propagation model. The empirical fatigue models used in this study are the Paris-Erdogan model, the Walker model, the Forman model, and the modified Forman model. It was found that the random variable is useful in describing the stochastic fatigue crack growth behavior under various load ratio conditions. A good probabilistic model describing the stochastic fatigue crack growth behavior under various load ratio conditions is also proposed.
Keywords: magnesium alloys, fatigue crack propagation model, load ratio, interpolation of random variable
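As a hedged illustration of introducing a random variable into an empirical crack growth law, the sketch below perturbs the Paris-Erdogan model with a lognormal factor Z. The material constants, stress range, geometry factor, and the distribution of Z are assumptions for illustration, not the fitted AZ31 values.

```python
import numpy as np

# Stochastic crack growth with the Paris-Erdogan law,
# da/dN = Z * C * (dK)^m, where dK = Y * dS * sqrt(pi * a).
rng = np.random.default_rng(1)

C, m, Y = 1e-11, 3.0, 1.12       # assumed Paris constants and geometry factor
dS = 60.0                        # assumed stress range (MPa) for one load ratio R
a = 1e-3                         # initial crack length: 1 mm

Z = rng.lognormal(mean=0.0, sigma=0.2)   # one realization of the random variable
for n in range(200_000):
    dK = Y * dS * np.sqrt(np.pi * a)     # stress intensity range, MPa*sqrt(m)
    a += Z * C * dK**m                   # one cycle of growth
print("crack length after 2e5 cycles (mm):", 1e3 * a)
```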
Procedia PDF Downloads 409
742 Use of Gaussian-Euclidean Hybrid Function Based Artificial Immune System for Breast Cancer Diagnosis
Authors: Cuneyt Yucelbas, Seral Ozsen, Sule Yucelbas, Gulay Tezel
Abstract:
Because only a small number of complex systems in the artificial immune system (AIS) literature work out nonlinear problems, nonlinear AIS approaches, among the well-known solution techniques, need to be developed. The Gaussian function is usually used for similarity estimation in classification problems and pattern recognition. In this study, diagnosis of breast cancer, the second most widespread cancer in women, was performed with different distance calculation functions, namely Euclidean, Gaussian, and a Gaussian-Euclidean hybrid function, in the clonal selection model of classical AIS on the Wisconsin Breast Cancer Dataset (WBCD), which was taken from the University of California, Irvine Machine-Learning Repository. We used the 3-fold cross-validation method to train and test on the dataset. According to the results, the maximum test classification accuracy was reported as 97.35% using the Gaussian-Euclidean hybrid function for fold-3. The mean test classification accuracies across the folds were 94.78%, 94.45%, and 95.31% for the Euclidean, Gaussian, and Gaussian-Euclidean functions, respectively. With these results, the Gaussian-Euclidean hybrid function seems to be a potential distance calculation method, and it may be considered an alternative distance calculation method for hard nonlinear classification problems.
Keywords: artificial immune system, breast cancer diagnosis, Euclidean function, Gaussian function
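The abstract does not give the exact form of the hybrid function, so the sketch below assumes one plausible blend of a Euclidean distance with a Gaussian similarity kernel, purely to illustrate the idea of a hybrid affinity measure in a clonal selection model.

```python
import numpy as np

# An assumed hybrid affinity for illustration: average the raw Euclidean
# distance with (1 - Gaussian similarity); both terms are "small when
# similar", so the hybrid remains a distance-like score.
def euclidean(x, y):
    return np.linalg.norm(x - y)

def gaussian(x, y, sigma=1.0):
    return np.exp(-euclidean(x, y) ** 2 / (2 * sigma ** 2))

def gaussian_euclidean_hybrid(x, y, sigma=1.0, alpha=0.5):
    """alpha weights the Euclidean term against the Gaussian term."""
    return alpha * euclidean(x, y) + (1 - alpha) * (1 - gaussian(x, y, sigma))

antibody = np.array([0.2, 0.7, 0.1])
antigen = np.array([0.3, 0.6, 0.2])    # one WBCD-like feature vector
print(gaussian_euclidean_hybrid(antibody, antigen))
```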
Procedia PDF Downloads 433
741 Tool Wear Monitoring of High Speed Milling Based on Vibratory Signal Processing
Authors: Hadjadj Abdechafik, Kious Mecheri, Ameur Aissa
Abstract:
The objective of this study is to develop a processing scheme for the vibratory signals generated during horizontal high-speed milling without any coolant, in order to establish a monitoring system able to improve machining performance. Many tests were carried out on a horizontal high-speed machining centre (PCI Météor 10) under given cutting conditions, using a milling cutter with only one insert; its frontal wear was measured from its new state, taken as the reference state, to a worn state considered unsuitable for further use of the tool. The results obtained show that the first harmonic follows the evolution of frontal wear well. In addition, a wavelet transform is used for signal processing and is found to be useful for observing the evolution of the wavelet approximations through the cutting tool's life. The power and Root Mean Square (RMS) values of the wavelet-transformed signal gave the best results and can be used for tool wear estimation. All these features can constitute suitable indicators for effective detection of tool wear and can then be used as input parameters for an online monitoring system. We also noted the remarkable influence of the machining cycle on the quality of the measurements through the introduction of a bias in the signal; this phenomenon appears in particular in horizontal milling and is ignored in the majority of studies.
Keywords: flank wear, vibration, milling, signal processing, monitoring
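A minimal sketch of the wavelet-based feature extraction described above, using PyWavelets on a synthetic vibration record; the wavelet family and decomposition level are assumptions, and the RMS and power of the approximation coefficients are the wear indicators the abstract points to.

```python
import numpy as np
import pywt  # PyWavelets

# Decompose a vibration record and track power and RMS of the
# approximation coefficients as tool wear indicators.
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)

coeffs = pywt.wavedec(signal, "db4", level=4)  # [cA4, cD4, cD3, cD2, cD1]
approx = coeffs[0]                             # A4 approximation coefficients

rms = np.sqrt(np.mean(approx ** 2))
power = np.sum(approx ** 2) / approx.size
print(f"RMS: {rms:.3f}  power: {power:.3f}")   # compare across tool life
```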
Procedia PDF Downloads 596
740 IPO Valuation and Profitability Expectations: Evidence from the Italian Exchange
Authors: Matteo Bonaventura, Giancarlo Giudici
Abstract:
This paper analyses the valuation process of companies listed on the Italian Exchange in the period 2000-2009 at their Initial Public Offering (IPO). One of the most common valuation techniques declared in IPO prospectuses to determine the offer price is the Discounted Cash Flow (DCF) method. We develop a 'reverse engineering' model to discover the short-term profitability implied in the offer prices. We show that there is a significant optimistic bias in the estimation of future profitability compared to ex-post actual realization, and the mean forecast error is substantially large. Yet we show that such error also characterizes the estimations carried out by analysts evaluating non-IPO companies. The forecast error is larger the faster the recent growth of the company, the higher the leverage of the IPO firm, and the more companies issued equity on the market. IPO companies generally exhibit better operating performance before the listing than comparable listed companies, while after the flotation they do not perform significantly differently in terms of return on invested capital. Pre-IPO book building activity plays a significant role in partially reducing the forecast error and revising expectations, while the market price on the first day of trading does not contain information for further reducing forecast errors.
Keywords: initial public offerings, DCF, book building, post-IPO profitability drop
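A minimal sketch of the 'reverse engineering' idea, assuming a stylized constant-cash-flow DCF: solve for the cash flow whose present value matches the offer valuation. The horizon, discount rate, and terminal growth rate are illustrative assumptions, not the model actually estimated in the paper.

```python
from scipy.optimize import brentq

# Find the constant annual cash flow whose discounted value matches the
# offer price, i.e. the profitability implied by the IPO valuation.
def dcf_value(cash_flow, r=0.10, g=0.02, years=5):
    pv = sum(cash_flow / (1 + r) ** t for t in range(1, years + 1))
    terminal = cash_flow * (1 + g) / (r - g) / (1 + r) ** years
    return pv + terminal

offer_price_equity = 500e6  # hypothetical offer valuation (EUR)
implied_cf = brentq(lambda cf: dcf_value(cf) - offer_price_equity, 1e3, 1e9)
print(f"implied annual cash flow: {implied_cf / 1e6:.1f}M")
```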
Procedia PDF Downloads 351
739 Jamun Juice Extraction Using Commercial Enzymes and Optimization of the Treatment with the Help of Physicochemical, Nutritional and Sensory Properties
Authors: Payel Ghosh, Rama Chandra Pradhan, Sabyasachi Mishra
Abstract:
Jamun (Syzygium cuminii L.) is an important indigenous minor fruit with high medicinal value. Jamun cultivation is unorganized, and there is a huge loss of this fruit every year. The perishable nature of the fruit makes its postharvest management further difficult. Due to the strong cell wall structure of pectin-protein bonds and hard seeds, extraction of the juice is difficult. Enzymatic treatment has been used commercially to improve juice quality and yield. The objective of the study was to optimize the best treatment method for juice extraction. Enzymes (pectinase and tannase) from different strains were used, and for each enzyme the best result was obtained using response surface methodology. Optimization was carried out on the basis of physicochemical properties, nutritional properties, sensory quality, and cost estimation. According to the quality aspects, cost analysis, and sensory evaluation, the optimal enzymatic treatment was obtained with pectinase from an Aspergillus aculeatus strain. The optimum conditions for the treatment were 44 °C for 80 minutes at a concentration of 0.05% (w/w). Under these conditions, a yield of 75% was obtained, with turbidity of 32.21 NTU, clarity of 74.39%T, polyphenol content of 115.31 mg GAE/g, and protein content of 102.43 mg/g, with a significant difference in overall acceptability.
Keywords: enzymatic treatment, Jamun, optimization, physicochemical property, sensory analysis
Procedia PDF Downloads 295
738 Modelling Home Appliances for Energy Management System: Comparison of Simulation Results with Measurements
Authors: Aulon Shabani, Denis Panxhi, Orion Zavalani
Abstract:
This paper presents the modelling and development of a simulator for residential electrical appliances. The simulator is developed in MATLAB, providing the possibility to analyze and simulate the energy consumption of frequently used home appliances in Albania. The modelling of the devices considers the impact of different factors, notably occupant behavior and climatic conditions. Most devices are modeled as an electric circuit, and the electric energy consumption is estimated from the solutions of the governing differential equations. The provided models refer to devices such as a dishwasher, oven, water heater, air conditioner, light bulbs, television, refrigerator, and water pump. The proposed model allows us to simulate beforehand the energetic behavior of the home devices with the largest consumption, in order to estimate peak consumption and improve its reduction. The simulated home prototype results are compared to real measurements from a typical home. The results obtained from the simulator framework, compared against a typical household monitored using EmonTxV3, show the effectiveness of the proposed simulation. This conclusion will help with the future simulation of a large group of typical households for a better understanding of peak consumption.
Keywords: electrical appliances, energy management, modelling, peak estimation, simulation, smart home
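As a hedged example of the circuit-style device models described above, the sketch below integrates a first-order thermal model of a storage water heater with thermostat control. All parameter values are illustrative assumptions, not the calibrated Albanian household models.

```python
# A storage water heater as a first-order thermal RC system,
# C*dT/dt = P*u(t) - (T - T_amb)/R, integrated with forward Euler.
C = 1.5e6      # thermal capacitance of the tank (J/K), assumed
R = 0.05       # thermal resistance to ambient (K/W), assumed
P = 2000.0     # heater element power (W), assumed
T_amb, T_set = 20.0, 60.0

dt, T, energy = 10.0, 20.0, 0.0           # 10 s time step
for step in range(int(24 * 3600 / dt)):   # simulate one day
    u = 1.0 if T < T_set else 0.0         # thermostat on/off control
    T += dt * (P * u - (T - T_amb) / R) / C
    energy += P * u * dt
print(f"daily consumption: {energy / 3.6e6:.2f} kWh")
```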
Procedia PDF Downloads 161
737 Whole Coding Genome Inter-Clade Comparisons to Predict Global Cancer-Protecting Variants
Authors: Lamis Naddaf, Yuval Tabach
Abstract:
We identified missense genetic variants with the potential to enhance resistance against cancer. This field has not been widely explored, as researchers tend to investigate the mutations that cause diseases, in response to the suffering of patients, rather than the mutations that protect from them. In conjunction with the genomic revolution and advances in genetic engineering and synthetic biology, identifying protective variants will increase the power of genotype-phenotype predictions and have significant implications for improved risk estimation, diagnostics, prognosis, and even personalized therapy and drug discovery. To approach our goal, we systematically investigated the sites of the coding genomes and selected the alleles that showed a correlation with the species' cancer resistance. Interestingly, we found several amino acids that are more generally preferred (like proline) or avoided (like cysteine) by the resistant species. Furthermore, cancer resistance in mammals and reptiles is significantly predicted by the number of predicted protecting variants (PVs) a species has. Moreover, PV-enriched genes are enriched in pathways relevant to tumor suppression. For example, they are enriched in the Hedgehog signaling and silencing pathways, whose improper activation is associated with the most common form of cancer malignancy. We also showed that the PVs are mostly more abundant in healthy people than in cancer patients within different human races.
Keywords: cancer resistance, protecting variant, naked mole rat, comparative genomics
Procedia PDF Downloads 109
736 Estimation of Lungs Physiological Motion for Patient Undergoing External Lung Irradiation
Authors: Yousif Mohamed Y. Abdallah
Abstract:
This experimental study deals with the detection, measurement, and analysis of periodic physiological organ motion during external beam radiotherapy, in order to improve the accuracy of radiation field placement and to reduce the exposure of healthy tissue during radiation treatments. The importance of this study lies in detecting the maximum path of mobile structures during radiotherapy delivery, defining the planning target volume (PTV) and irradiated volume during both the inspiration and expiration periods, and verifying the target volume, in addition to its role in highlighting the importance of applying image-guided radiotherapy (IGRT) methods in the field of radiotherapy. The results showed that the displacement was 3.17 ± 0.23 mm for the body contour, 2.56 ± 0.99 mm for the left lung, and 2.42 ± 0.77 mm for the right lung, allowing the radiation oncologist to take suitable countermeasures in case of significant errors. In addition, using the image registration technique for automatic position control, the potential motion was predicted; the motion ranged between 2.13 mm and 12.2 mm (low and high). In conclusion, individualized assessment of tumor mobility can improve the accuracy of target area definition in patients undergoing stereotactic RT for stage I, II, and III lung cancer (NSCLC). Definition of the target volume based on a single CT scan with a margin of 10 mm is clearly inappropriate.
Keywords: respiratory motion, external beam radiotherapy, image processing, lung
Procedia PDF Downloads 533
735 Factors Affecting the Profitability of Commercial Banks: An Empirical Study of Indian Banking Sector
Authors: Neeraj Gupta, Jitendra Mahakud
Abstract:
The banking system plays a major role in the Indian economy and is the payment gateway for most financial transactions. Banking has undergone a major transition that is still in progress. Recent banking reforms after liberalization in 1991 led to the establishment of foreign banks in the country. The foreign banks are not listed on the Indian stock markets and have increased competition, capturing a significant share of revenue from the public sector banks, which are still the major players in the Indian banking sector. The performance of the banking sector depends on internal (bank-specific) as well as external (market-specific and macroeconomic) factors. Profitability in the banking sector is affected by numerous factors, which can be internal or external. The present study examines the internal and external factors that are likely to affect the profitability of Indian banks. The sample consists of a panel dataset of 64 commercial banks in India, comprising 1088 observations over the years 1998 to 2016. The dynamic panel GMM estimation given by Arellano and Bond has been used. The study revealed that the variables capital adequacy ratio, deposits, age, labour productivity, non-performing assets, inflation, and concentration have a significant effect on the performance measures.
Keywords: banks in India, bank performance, bank productivity, banking management
Procedia PDF Downloads 271
734 Production and Leftovers Usage Policies to Minimize Food Waste under Uncertain and Correlated Demand
Authors: Esma Birisci, Ronald McGarvey
Abstract:
One of the common problems in the food service industry is demand uncertainty. This research presents a multi-criteria optimization approach to identify the efficient frontier of points lying between the minimum-waste and minimum-shortfall solutions in an uncertain demand environment. It also addresses correlation across demands for items (e.g., hamburgers are often demanded with french fries). Reducing overproduction food waste (and its corresponding environmental impacts) and an aversion to shortfalls (leaving some customers hungry) need to be considered as two conflicting objectives in an all-you-care-to-eat food service operation. We identify optimal production adjustments relative to demand forecasts, demand thresholds for the utilization of leftovers, and percentages of demand to be satisfied by leftovers, considering two alternative metrics for overproduction waste: mass and greenhouse gas emissions. Demand uncertainty and demand correlations are addressed using a kernel density estimation approach. A statistical analysis of the changes in decision variable values across each of the efficient frontiers can then be performed to identify the key variables that could be modified to reduce the amount of wasted food with a minimal increase in shortfalls. We illustrate our approach with an application to empirical data from Campus Dining Services operations at the University of Missouri.
Keywords: environmental studies, food waste, production planning, uncertain and correlated demand
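A minimal sketch of the kernel density estimation step, assuming synthetic records for two correlated items in place of the dining-hall data: fit a KDE to historical demand pairs, resample demand scenarios, and score a candidate production plan on expected waste and shortfall.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Fit a KDE to correlated item demands and resample scenarios to drive
# the waste/shortfall trade-off. Data are synthetic stand-ins
# (e.g., burgers vs. fries).
rng = np.random.default_rng(7)
mean, cov = [400, 380], [[900, 700], [700, 800]]          # strongly correlated
history = rng.multivariate_normal(mean, cov, size=250).T  # shape (2, n)

kde = gaussian_kde(history)
scenarios = kde.resample(10_000, seed=7)                  # simulated demand draws

production = np.array([[430], [410]])                     # candidate plan
waste = np.maximum(production - scenarios, 0).mean(axis=1)
shortfall = np.maximum(scenarios - production, 0).mean(axis=1)
print("expected waste/item:", waste, " expected shortfall/item:", shortfall)
```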
Procedia PDF Downloads 372
733 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini
Abstract:
A considerable part of the rainfall data used in hydrological practice is available in aggregated form within constant time intervals. This can produce undesirable effects, like the underestimate of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d = ta, the estimation of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these equations should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
Keywords: central Italy, extreme events, rainfall data, underestimation errors
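The aggregation effect is easy to reproduce numerically. The sketch below compares the sliding-window maximum (the 'true' Hd) with the maximum taken over fixed ta-intervals for the worst case ta = d, on a synthetic one-minute rainfall record; the synthetic record is an illustrative assumption.

```python
import numpy as np

# The annual maximum depth for duration d taken from fixed ta-intervals
# can only miss rain relative to a sliding window on continuous data.
rng = np.random.default_rng(3)
rain_1min = rng.exponential(0.02, size=365 * 24 * 60)  # synthetic record (mm)

d = 60  # duration of interest, minutes
# "true" Hd: maximum over a sliding 60-min window
sliding = np.convolve(rain_1min, np.ones(d), mode="valid")
Hd_true = sliding.max()

# Hd from data aggregated into fixed 60-min blocks (ta = d, worst case)
blocks = rain_1min[: rain_1min.size // d * d].reshape(-1, d).sum(axis=1)
Hd_coarse = blocks.max()

print(f"underestimate: {100 * (1 - Hd_coarse / Hd_true):.1f}%")
```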
Procedia PDF Downloads 189
732 Qualitative and Quantitative Traits of Processed Farmed Fish in N. W. Greece
Authors: Cosmas Nathanailides, Fotini Kakali, Kostas Karipoglou
Abstract:
The filleting yield and the chemical composition of farmed sea bass (Dicentrarchus labrax), rainbow trout (Oncorhynchus mykiss), and meagre (Argyrosomus regius) were investigated in farmed fish in NW Greece. The results provide an estimate of the quantity of fish required to produce one kilogram of fillet weight, an estimate required for the operational management of fish processing companies. Furthermore, in this work the ratio of feed input required to produce one kilogram of fish fillet (FFCR) is presented for the first time as a useful indicator of the ecological footprint of consuming farmed fish. The lowest lipid content appeared in meagre (1.7%) and the highest in trout (4.91%). The lowest fillet yield and fillet-yield feed conversion ratio were found in meagre (FY = 42.17%, FFCR = 2.48), while the best fillet yield (FY = 53.8%) and FFCR (2.10) were exhibited by farmed rainbow trout. This research has been co-financed by the European Union (European Social Fund, ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) Research Funding Program ARCHIMEDES III: Investing in knowledge society through the European Social Fund.
Keywords: farmed fish, flesh quality, filleting yield, lipid
Procedia PDF Downloads 308
731 An Approximate Formula for Calculating the Fundamental Mode Period of Vibration of Practical Building
Authors: Abdul Hakim Chikho
Abstract:
Most international codes allow the use of an equivalent lateral load method for designing practical buildings to withstand earthquake actions. This method requires calculating an approximation to the fundamental mode period of vibration of these buildings. Several empirical equations have been suggested for calculating approximations to the fundamental periods of different types of structures. Most of these equations are known to provide only a crude approximation to the required fundamental period, and repeating the calculation with a more accurate formula is usually required. In this paper, a new formula to calculate a satisfactory approximation of the fundamental period of a practical building is proposed. This formula takes into account both the mass and the stiffness of the building and is therefore more logical than the conventional empirical equations. In order to verify the accuracy of the proposed formula, several examples have been solved, in which the fundamental mode periods of several framed buildings were calculated using the proposed formula and the conventional empirical equations. Comparing the obtained results with those obtained from dynamic computer analysis has shown that the proposed formula provides a more accurate estimation of the fundamental periods of practical buildings. Since the proposed method is still simple to use and requires only minimal computing effort, it is believed to be ideally suited for design purposes.
Keywords: earthquake, fundamental mode period, design, building
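The abstract does not reproduce the proposed formula itself; as a reference point, the sketch below implements the classical Rayleigh method, which likewise combines mass and stiffness through assumed lateral forces and elastic deflections. All numerical values are illustrative, and this is not the paper's new formula.

```python
import numpy as np

# Rayleigh method: T = 2*pi*sqrt( sum(m_i * d_i^2) / sum(f_i * d_i) ),
# with d_i the elastic deflections of the storeys under lateral forces f_i
# (equivalent to the weight-based code form after cancelling g).
g = 9.81
m = np.array([40e3, 40e3, 35e3])           # storey masses (kg), assumed
f = m * g * np.array([1, 2, 3]) / 3        # inverted-triangular loads (N)
d = np.array([0.010, 0.022, 0.031])        # deflections under f (m), assumed

T = 2 * np.pi * np.sqrt(np.sum(m * d**2) / np.sum(f * d))
print(f"approximate fundamental period: {T:.2f} s")
```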
Procedia PDF Downloads 281
730 Verification of Simulated Accumulated Precipitation
Authors: Nato Kutaladze, George Mikuchadze, Giorgi Sokhadze
Abstract:
Precipitation forecasts are one of the most demanding applications in numerical weather prediction (NWP). Georgia, like the whole Caucasus region, is characterized by very complex topography. The country's territory is prone to flash floods and mudflows, so quantitative precipitation estimation (QPE) and quantitative precipitation forecasting (QPF) at any lead time are very important for Georgia. In this study, the skill of the Advanced Research WRF model in QPF is investigated over Georgia's territory. We have analyzed several convection parameterization and microphysical scheme combinations for different rainy episodes and heavy rain events. We estimate errors and biases in accumulated 6 h precipitation at different spatial resolutions during model performance verification for 12-hour and 24-hour lead times against corresponding rain gauge observations and satellite data. Various statistical parameters have been calculated for the 8-month comparison period, and the skill of the model simulations has been evaluated. Our focus is on the formation and organization of convective precipitation systems in a low-mountain region. Several problems in connection with QPF have been identified for mountain regions, including the overestimation and underestimation of precipitation on the windward and lee sides of the mountains, respectively, and a phase error in the diurnal cycle of precipitation leading to the onset of convective precipitation in model forecasts several hours too early.
Keywords: extremal dependence index, false alarm, numerical weather prediction, quantitative precipitation forecasting
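A minimal sketch of the categorical verification behind scores such as the false alarm ratio mentioned in the keywords, assuming synthetic forecast-observation pairs rather than the study's WRF output and gauge data.

```python
import numpy as np

# Threshold forecasts and observations, build the contingency table,
# and compute POD, false alarm ratio, and the equitable threat score.
def categorical_scores(fcst, obs, thr):
    hits = np.sum((fcst >= thr) & (obs >= thr))
    misses = np.sum((fcst < thr) & (obs >= thr))
    false_alarms = np.sum((fcst >= thr) & (obs < thr))
    corr_neg = np.sum((fcst < thr) & (obs < thr))
    n = hits + misses + false_alarms + corr_neg
    hits_rand = (hits + misses) * (hits + false_alarms) / n  # chance hits
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    ets = (hits - hits_rand) / (hits + misses + false_alarms - hits_rand)
    return pod, far, ets

rng = np.random.default_rng(5)
obs = rng.gamma(0.6, 4.0, size=2000)             # gauge accumulations (mm)
fcst = obs * rng.lognormal(0.0, 0.5, size=2000)  # imperfect synthetic model
print(categorical_scores(fcst, obs, thr=10.0))
```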
Procedia PDF Downloads 146
729 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution
Authors: Saleem Z. Ramadan
Abstract:
This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing, using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results have shown that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results have shown that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. Moreover, the results have shown that using direct or indirect priors affects the precision of the test.
Keywords: reliability, accelerated life testing, cumulative exposure model, Bayesian estimation, progressive type-I censoring, Weibull distribution
Procedia PDF Downloads 503
728 Effect of Bonded and Removable Retainers on Occlusal Settling after Orthodontic Treatment: A Systematic Review and Meta-Analysis
Authors: Umair Shoukat Ali, Kamil Zafar, Rashna Hoshang Sukhia, Mubassar Fida, Aqeel Ahmed
Abstract:
Objective: This systematic review and meta-analysis aimed to summarize the effectiveness of bonded and removable retainers (Hawley and Essix retainers) in terms of improvement in occlusal settling (occlusal contact points/areas) after orthodontic treatment. Search method: We searched the Cochrane Library, CINAHL Plus, PubMed, Web of Science, orthodontic journals, and Google Scholar for eligible studies. We included randomized controlled trials (RCTs) along with cohort studies. Studies that reported occlusal contacts/areas during retention with fixed bonded and removable retainers were included. The Cochrane risk of bias tool was utilized to assess the quality of the RCTs, whereas the Newcastle-Ottawa Scale was used for assessing the quality of the cohort studies. Data analysis: The data analysis was limited to reporting mean values of occlusal contact points/areas with different retention methods. Using RevMan software V5.3, a meta-analysis was performed for all studies with quantitative data. For the computation of the summary effect, a random effects model was utilized in the case of high heterogeneity. I² statistics were utilized to assess the heterogeneity among the selected studies. Results: We included 6 articles in our systematic review after scrutinizing 219 articles and eliminating them based on duplication, titles, and objectives. We found significant differences between fixed and removable retainers in terms of occlusal settling within the included studies. The bonded retainer (BR) allowed faster and better posterior tooth settling compared to the Hawley retainer (HR); however, the HR showed good occlusal settling in the anterior dental arch. The Essix retainer showed a decrease in occlusal contact during the retention phase. The meta-analysis showed no statistically significant difference between bonded and removable retainers. Conclusions: The HR allowed better overall occlusal settling than the other retainers in comparison; however, the BR allowed faster settling in the posterior teeth region. Overall, there are insufficient high-quality RCTs to provide additional evidence, and further high-quality RCT research is needed.
Keywords: orthodontic retainers, occlusal contact, Hawley, fixed, vacuum-formed
Procedia PDF Downloads 121
727 Compression Index Estimation by Water Content and Liquid Limit and Void Ratio Using Statistics Method
Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang
Abstract:
The compression index is essential in foundation settlement calculations. The traditional method for determining the compression index is the consolidation test, which is expensive and time-consuming. Many researchers have used regression methods to develop empirical equations for predicting the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of some popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated by some equations and underestimated by others. Sensitivity analyses of soil parameters, including water content, liquid limit, and void ratio, were performed. The results indicate that the compression index obtained from the void ratio is the most accurate. The ANOVA (analysis of variance) demonstrates that equations with multiple soil parameters cannot provide better predictions than equations with a single soil parameter; in other words, it is not necessary to develop relationships between the compression index and multiple soil parameters. Meanwhile, it was noted that the secondary compression index is approximately 0.7-5.0% of the primary compression index, with an average of 2.0%. Finally, prediction equations using the power regression technique are provided that give more accurate predictions than the existing equations.
Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter
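A minimal sketch of the power regression idea, assuming synthetic void-ratio data in place of the consolidation-test database: fit Cc = a * e0^b by linear least squares in log space. The coefficients here are illustrative, not the paper's fitted values.

```python
import numpy as np

# Fit a power law Cc = a * e0^b via log-log linear regression.
rng = np.random.default_rng(11)
e0 = rng.uniform(0.6, 2.0, size=120)                  # in-situ void ratio
Cc = 0.29 * e0**1.3 * rng.lognormal(0.0, 0.08, 120)   # assumed "true" law + noise

b, log_a = np.polyfit(np.log(e0), np.log(Cc), deg=1)
a = np.exp(log_a)
print(f"fitted: Cc = {a:.3f} * e0^{b:.2f}")

# Prediction for a new sample, plus the ~2% secondary index noted above
e0_new = 1.2
Cc_hat = a * e0_new**b
print(f"Cc ~ {Cc_hat:.3f}; secondary index ~ {0.02 * Cc_hat:.4f}")
```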
Procedia PDF Downloads 158
726 Alveolar Ridge Preservation in Post-extraction Sockets Using Concentrated Growth Factors: A Split-Mouth, Randomized, Controlled Clinical Trial
Authors: Sadam Elayah
Abstract:
Background: One of the most critical competencies in advanced dentistry is alveolar ridge preservation after exodontia. The aim of this clinical trial was to assess the impact of autologous concentrated growth factor (CGF) as a socket-filling material and its ridge preservation properties following lower third molar extraction. Materials and methods: A total of 60 sides of 30 participants who had completely symmetrical bilateral impacted lower third molars were enrolled. The short-term outcome variables were wound healing, swelling, and pain, clinically assessed at different time intervals (1st, 3rd, and 7th days). The long-term outcome variables were bone height and width, bone density, and socket surface area in the coronal section. Cone beam computed tomography images were obtained immediately after surgery and three months after surgery as a temporal measure. Randomization was achieved by opaque, sealed envelopes. Follow-up data were compared to baseline using paired and unpaired t-tests. Results: The wound healing index was significantly better on the test sides (P = 0.001). Regarding facial swelling, the test sides had significantly lower values than the control sides, particularly on the 1st (1.01 ± 0.57 vs 1.55 ± 0.56) and 3rd days (1.42 ± 0.8 vs 2.63 ± 1.2) postoperatively; nonetheless, the swelling disappeared by the 7th day on both sides. The pain scores on the visual analog scale showed no statistically significant difference between the sides on the 1st day; meanwhile, the pain scores were significantly lower on the test sides than on the control sides, especially on the 3rd (P = 0.001) and 7th days (P ˂ 0.001) postoperatively. Regarding long-term outcomes, CGF sites had higher values in height and width compared to control sites (buccal wall 32.9 ± 3.5 vs 29.4 ± 4.3 mm, lingual wall 25.4 ± 3.5 vs 23.1 ± 4 mm, and alveolar bone width 21.07 ± 1.55 vs 19.53 ± 1.90 mm, respectively). Bone density showed significantly higher values at CGF sites than at control sites (coronal half 200 ± 127.3 vs -84.1 ± 121.3, apical half 406.5 ± 103 vs 64.2 ± 158.6, respectively). There was a significant difference between the sites in reducing periodontal pockets. Conclusion: CGF application following surgical extraction provides an easy, low-cost, and efficient option for alveolar ridge preservation. Thus, dentists may encourage the use of CGF during dental extractions, particularly when alveolar ridge preservation is required.
Keywords: platelet, extraction, impacted teeth, alveolar ridge, regeneration, CGF
Procedia PDF Downloads 65
725 The Cartometric-Geographical Analysis of Ivane Javakhishvili 1922: The Map of the Republic of Georgia
Authors: Manana Kvetenadze, Dali Nikolaishvili
Abstract:
The study revealed the territorial changes of Georgia during the Soviet and post-Soviet periods, including the estimation of the country's borders, changes in its administrative-territorial arrangement, and the establishment of territorial losses. Georgia's old and new borders marked on the map are of great interest. The new boundary shows the situation in 1922, following the beginning of the Soviet period. Neither on this map nor in other works does Ivane Javakhishvili say what he implies by the old borders, though it is evident that this is the pre-Soviet boundary up to 1921, i.e., from the period when historical Tao, Zaqatala, Lore, and Karaia were still parts of Georgia. In cartometric-geographical terms, the work presents a detailed analysis of Georgia's borders, together with a comparison of research results: 1) for the boundary line on Soviet topographic maps, maps at 1:100,000, 1:50,000, and 1:25,000 scales are used; 2) the borders are compared according to Ivane Javakhishvili's work 'The Borders of Georgia in Terms of Historical and Contemporary Issues'. During the research, we used a multidisciplinary methodology and software: we used ArcGIS for georeferencing the maps, after which we compared all post-Soviet maps in order to determine how the borders have changed. During this work, we also used many historical sources. The features of the spatial distribution of the administrative-territorial units of Georgia, as well as the distribution of the objects depicted on the map, have been established. The results obtained are presented in the form of thematic maps and diagrams.
Keywords: border, GIS, Georgia, historical cartography, old maps
Procedia PDF Downloads 241
724 Identification Strategies for Unknown Victims from Mass Disasters and Unknown Perpetrators from Violent Crime or Terrorist Attacks
Authors: Michael Josef Schwerer
Abstract:
Background: The identification of unknown victims of mass disasters, violent crimes, or terrorist attacks is frequently facilitated through information from missing persons lists, portrait photos, old or recent pictures showing unique characteristics of a person such as scars or tattoos, or simply reference samples from blood relatives for DNA analysis. In contrast, the identification, or at least the characterization, of an unknown perpetrator of criminal or terrorist actions remains challenging, particularly in the absence of material or data for comparison, such as fingerprints previously stored in criminal records. In scenarios that result in high levels of destruction of the perpetrator's corpse, for instance, blast or fire events, the chance of a positive identification using standard techniques is further impaired. Objectives: This study describes the forensic genetic procedures used in the Legal Medicine Service of the German Air Force for the identification of unknown individuals, including cases in which reference samples are not available. Scenarios requiring such efforts predominantly involve aircraft crash investigations, which are routinely carried out by the German Air Force Centre of Aerospace Medicine as one of the institution's essential missions. Further, casework by military police or military intelligence is supported on the basis of administrative cooperation. In the talk, data from study projects, as well as examples from real casework, will be demonstrated and discussed with the audience. Methods: Forensic genetic identification in our laboratories involves the analysis of short tandem repeats and single nucleotide polymorphisms in nuclear DNA, along with mitochondrial DNA haplotyping. Extended DNA analysis involves phenotypic markers for skin, hair, and eye color, together with the investigation of a person's biogeographic ancestry. Assessment of the biological age of an individual employs CpG-island methylation analysis using bisulfite-converted DNA. Forensic investigative genealogy allows the detection of an unknown person's blood relatives in reference databases. Technically, end-point PCR, real-time PCR, capillary electrophoresis, pyrosequencing, and next-generation sequencing using flow-cell-based and chip-based systems are used. Results and discussion: Optimization of DNA extraction from various sources, including difficult matrices like formalin-fixed, paraffin-embedded tissues and degraded specimens from decomposed bodies or from decedents exposed to blast or fire events, provides the basis for successful PCR amplification and subsequent genetic profiling. For cases with extremely low yields of extracted DNA, whole genome preamplification protocols are used successfully, particularly for genetic phenotyping. Improved primer design for CpG methylation analysis, together with validated sampling strategies for the analyzed substrates from, e.g., lymphocyte-rich organs, allows successful biological age estimation even in bodies with highly degraded tissue material. Conclusions: Successful identification of unknown individuals, or at least their phenotypic characterization using pigmentation markers together with age-informative methylation profiles, possibly supplemented by family tree searches employing forensic investigative genealogy, can be provided in specialized laboratories. However, standard laboratory procedures must be adapted to work with difficult and highly degraded sample materials.
Keywords: identification, forensic genetics, phenotypic markers, CpG methylation, biological age estimation, forensic investigative genealogy
Procedia PDF Downloads 47
723 Enhanced Calibration Map for a Four-Hole Probe for Measuring High Flow Angles
Authors: Jafar Mortadha, Imran Qureshi
Abstract:
This research explains and compares modern techniques for measuring the flow angles of a flowing fluid with the traditional technique of using multi-hole pressure probes. In particular, the focus of the study is on four-hole probes, which offer great reliability and benefits in several applications where the use of modern measurement techniques is either inconvenient or impractical. Due to modern advancements in manufacturing, small multi-hole pressure probes can be made with high precision, which eliminates the need to calibrate every manufactured probe. This study aims to improve the range of calibration maps for a four-hole probe to allow high flow angles to be measured accurately. The research methodology comprises a literature review of the successful calibration definitions that have been implemented on five-hole probes. These definitions are then adapted and applied to a four-hole probe using a set of raw pressure data. A comparison of the different definitions is carried out in MATLAB, and the results are analyzed to determine the best calibration definition. Taking simplicity of implementation into account, as well as the reliability of flow angle estimation, an adapted technique from a research paper written in 2002 offered the most promising outcome. Consequently, the method is seen as a good enhancement for four-hole probes, and it can substitute for existing calibration definitions that offer less accuracy.
Keywords: calibration definitions, calibration maps, flow measurement techniques, four-hole probes, multi-hole pressure probes
Procedia PDF Downloads 294
722 Well-Being Inequality Using Superimposing Satisfaction Waves: Heisenberg Uncertainty in Behavioral Economics and Econometrics
Authors: Okay Gunes
Abstract:
In this article, for the first time in the literature on this subject, we propose a new method for measuring well-being inequality through a model composed of superimposed satisfaction waves. The displacement of a household's satisfactory state (i.e., satisfaction) is defined in a satisfaction string. The duration of the satisfactory state over a given period of time is measured in order to determine the relationship between utility and total satisfactory time, itself dependent on the density and tension of each satisfaction string. Thus, individual cardinal total satisfaction values are computed by way of a one-dimensional scalar sinusoidal (harmonic) moving wave function, using satisfaction waves with varying amplitudes and frequencies, which allows us to measure well-being inequality. One advantage of using satisfaction waves is the ability to show that individual utility and consumption amounts would probably not commute; hence, it is impossible to measure or know simultaneously the values of these observables from the dataset. Thus, we crystallize the problem by using a Heisenberg-type uncertainty resolution for self-adjoint economic operators. We propose to eliminate any estimation bias by correlating the standard deviations of selected economic operators; this is achieved by replacing the aforementioned observed uncertainties with households' perceived uncertainties (i.e., corrected standard deviations) obtained through the logarithmic psychophysical law proposed by Weber and Fechner.
Keywords: Heisenberg uncertainty principle, superimposing satisfaction waves, Weber-Fechner law, well-being inequality
Procedia PDF Downloads 438
721 Simulation of Improving the Efficiency of a Fire-Tube Steam Boiler
Authors: Roudane Mohamed
Abstract:
In this study, we are interested in improving the efficiency of a 4.5 t/h steam boiler and minimizing the flue gas discharge temperature by adding a counter-current heat exchanger at the boiler outlet. The mathematical approach to the problem is based on the use of heat transfer by convection and conduction equations; these equations have been chosen because of their extensive use in a wide range of applications. A software tool was developed for solving the equations governing these phenomena and for estimating the thermal characteristics of the boiler through the study of the thermal characteristics of the heat exchanger by both the LMTD and NTU methods. Subsequently, an analysis of the thermal performance of the steam boiler was carried out by studying the influence of different operating parameters on heat flux densities, temperatures, exchanged power, and performance. The study showed that the behavior of the boiler is largely influenced: in the first regime (P = 3.5 bar), the boiler efficiency improved significantly, from 93.03% to 99.43%, i.e., by 6.47% and 4.5%; for the maximum rate, the change is smaller, on the order of 1.06%. The results obtained in this study are of great interest to industrial facilities equipped with fire-tube boilers, where air preheating makes it possible to calculate the actual temperature of the gas, so that the heat exchanged is increased and the flue gas discharge temperature is minimized. On the other hand, this work could be used as a computational model in the design process.
Keywords: numerical simulation, efficiency, fire tube, heat exchanger, convection and conduction
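As a hedged illustration of the NTU (effectiveness) method mentioned above, the sketch below evaluates a counterflow flue-gas/air exchanger; the conductance, capacity rates, and inlet temperatures are assumptions for illustration, not the boiler's measured data.

```python
import numpy as np

# Effectiveness-NTU relation for a counterflow exchanger, then the
# recovered heat and the reduced flue-gas outlet temperature.
def effectiveness_counterflow(ntu, cr):
    if np.isclose(cr, 1.0):
        return ntu / (1.0 + ntu)
    return (1 - np.exp(-ntu * (1 - cr))) / (1 - cr * np.exp(-ntu * (1 - cr)))

UA = 900.0                     # W/K, assumed exchanger conductance
C_gas, C_air = 1200.0, 1500.0  # W/K, capacity rates (m_dot * cp), assumed
C_min, C_max = min(C_gas, C_air), max(C_gas, C_air)

ntu, cr = UA / C_min, C_min / C_max
eps = effectiveness_counterflow(ntu, cr)

T_gas_in, T_air_in = 220.0, 25.0      # assumed inlet temperatures (°C)
Q = eps * C_min * (T_gas_in - T_air_in)
print(f"effectiveness: {eps:.2f}, recovered heat: {Q / 1e3:.1f} kW")
print(f"flue gas outlet: {T_gas_in - Q / C_gas:.1f} °C")
```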
Procedia PDF Downloads 216
720 Sidelobe Free Inverse Synthetic Aperture Radar Imaging of Non Cooperative Moving Targets Using WiFi
Authors: Jiamin Huang, Shuliang Gui, Zengshan Tian, Fei Yan, Xiaodong Wu
Abstract:
In recent years, with the rapid development of radio frequency technology, the differences between radar sensing and wireless communication in terms of receiving and transmitting channels, signal processing, data management, and control have been gradually shrinking, and there has been a trend towards integrated communication and radar sensing. However, most existing radar imaging technologies based on communication signals are combined with synthetic aperture radar (SAR) imaging, which does not conform to the practical application case of integrated communication and radar. Therefore, this paper proposes a high-precision imaging method using communication signals based on the imaging mechanism of inverse synthetic aperture radar (ISAR). This method makes full use of the structural characteristics of the orthogonal frequency division multiplexing (OFDM) signal, so that the sidelobe effect in range compression is removed, and combines the Radon transform with Fractional Fourier Transform (FrFT) parameter estimation methods to achieve ISAR imaging of non-cooperative targets. Simulation experiments and measured results verify the feasibility and effectiveness of the method and prove its broad application prospects in the field of intelligent transportation.
Keywords: integration of communication and radar, OFDM, radon, FrFT, ISAR
Procedia PDF Downloads 123
719 The Characteristics of Quantity Operation for 2nd and 3rd Grade Mathematics Slow Learners
Authors: Pi-Hsia Hung
Abstract:
The development of mathematical competency has individual benefits as well as benefits to the wider society. Children who begin school behind their peers in their understanding of number, counting, and simple arithmetic are at high risk of staying behind throughout their schooling. The development of effective strategies for improving the educational trajectory of these individuals will be contingent on identifying areas of early quantitative knowledge that influence later mathematics achievement. A computer-based quantity assessment was developed in this study to investigate the characteristics of 2nd and 3rd grade slow learners in quantity. The concept of quantification involves understanding measurements, counts, magnitudes, units, indicators, relative size, and numerical trends and patterns. Fifty-five tasks of quantitative reasoning, such as number sense, mental calculation, estimation, and assessment of the reasonableness of results, are included as quantity problem solving. Quantity is thus defined in this study as applying knowledge of number and number operations in a wide variety of authentic settings. Around 1000 students were tested and categorized into 4 performance levels. Students' quantity ability correlated more highly with their school mathematics grades than with other subjects, and around 20% of students were below the basic level. The intervention design implications of the preliminary item map constructed are discussed.
Keywords: mathematics assessment, mathematical cognition, quantity, number sense, validity
Procedia PDF Downloads 245
718 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network
Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy
Abstract:
The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast computing speed and high precision of modern computing systems. We construct the PINN based on a strong universal approximation theorem, apply initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving partial differential equations appearing in nonlinear optics will be useful in studying various optical phenomena.
Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation
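A minimal sketch of a weighted PINN loss of the kind described above, assuming a placeholder network and residual; a real implementation would encode the FLE residual through automatic differentiation of the network outputs with respect to space and time.

```python
import torch

# Weighted composite loss: adjustable coefficients on the data term
# (initial/boundary values) and the PDE residual term.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),            # (Re u, Im u)
)

def pde_residual(xt):
    """Placeholder residual; the actual FLE operator would combine
    autograd derivatives of net(xt) with respect to x and t."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    return du.sum(dim=1, keepdim=True)  # stand-in for the FLE operator

w_data, w_pde = 10.0, 1.0              # adjustable loss weights
xt_data, u_data = torch.rand(128, 2), torch.rand(128, 2)
xt_coll = torch.rand(1024, 2)          # residual collocation points

loss = (w_data * torch.mean((net(xt_data) - u_data) ** 2)
        + w_pde * torch.mean(pde_residual(xt_coll) ** 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
opt.zero_grad(); loss.backward(); opt.step()   # one ADAM step
```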
Procedia PDF Downloads 69