Search results for: point estimation
5980 Three-Dimensional CFD Modeling of Flow Field and Scouring around Bridge Piers
Authors: P. Deepak Kumar, P. R. Maiti
Abstract:
In recent years, sediment scour near bridge piers and abutments has become a serious problem of nationwide concern because it has caused more bridge failures than any other cause. Scour is the formation of a scour hole around a structure mounted on or embedded in an erodible channel bed, due to the erosion of soil by flowing water. The formation of the scour hole around a structure depends upon the shape and size of the pier, the depth of flow, the angle of attack of the flow, and the sediment characteristics. The flow characteristics around such structures change because the man-made obstruction in the natural flow path alters the kinetic energy of the flow around them. Excessive scour affects the stability of the foundation of the structure through the removal of bed material, and accurate estimation of the scour depth around a bridge pier is very difficult. The foundations of bridge piers therefore have to be taken deeper to provide the anchorage length required for the stability of the foundation. In this study, simulations using a three-dimensional Computational Fluid Dynamics (CFD) model were conducted to examine the mechanism of scour around a cylindrical pier. The flow characteristics around the pier are presented for different flow conditions, the mechanism of the scouring phenomenon and the formation of the vortex and its consequent effect are discussed for a straight channel, and an effort is made towards estimating the scour depth around bridge piers under different flow conditions.
Keywords: bridge pier, computational fluid dynamics, multigrid, pier shape, scour
Procedia PDF Downloads 296
5979 Application of Response Surface Methodology to Optimize the Thermal Conductivity Enhancement of a Hybrid Nanofluid
Authors: Aminreza Noghrehabadi, Mohammad Behbahani, Ali Pourabbasi
Abstract:
In this experimental work, unlike conventional methods that mix two kinds of nanoparticles together, silver nanoparticles were synthesized directly on the surface of graphene, and the effect of adding the resulting graphene–silver nanocomposite to the base fluid (distilled water) was studied. Transmission electron microscopy (TEM) and field emission scanning electron microscopy (FESEM) were used to examine the surfaces and atomic structure of the nanoparticles, and an ultrasonic device was used to disperse the nanocomposite in the distilled water. The thermal conductivity was measured by the transient hot-wire method using a KD2 Pro device, over a temperature range of 30°C to 50°C, concentrations of 10 ppm to 1000 ppm, and ultrasonication times of 2 minutes to 15 minutes. The results showed that as each of the three parameters (temperature, concentration, and ultrasonication time) increases, the thermal conductivity enhancement rises until an optimal point is reached, after which it trends downward. An accurate experimental correlation for the thermal conductivity of this nanofluid was obtained using Design Expert software.
Keywords: thermal conductivity, nanofluids, enhancement, silver nanoparticle, optimal point
Procedia PDF Downloads 88
5978 REDUCER: An Architectural Design Pattern for Reducing Large and Noisy Data Sets
Authors: Apkar Salatian
Abstract:
To relieve the burden of reasoning on a point-to-point basis, in many domains there is a need to reduce large and noisy data sets into trends for qualitative reasoning. In this paper we propose and describe a new architectural design pattern called REDUCER for reducing large and noisy data sets, which can be tailored to particular situations. REDUCER consists of two consecutive processes: Filter, which takes the original data and removes outliers, inconsistencies, or noise; and Compression, which takes the filtered data and derives trends in it. In this seminal article, we also show how REDUCER has been successfully applied to three different case studies.
Keywords: design pattern, filtering, compression, architectural design
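The Filter-then-Compression pipeline the pattern describes can be sketched as follows; the z-score outlier rule and the (start, end, direction) trend triples are illustrative assumptions, not the pattern's prescribed implementation:

```python
def reducer(data, threshold=3.0):
    """REDUCER-style two-stage reduction: Filter, then Compression."""
    # Filter stage: drop points farther than `threshold` standard
    # deviations from the mean (a simple outlier/noise criterion).
    mean = sum(data) / len(data)
    std = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
    filtered = [x for x in data if std == 0 or abs(x - mean) <= threshold * std]

    # Compression stage: collapse runs of equal direction into
    # (start_value, end_value, trend) triples for qualitative reasoning.
    def sign(d):
        return "increasing" if d > 0 else "decreasing" if d < 0 else "steady"

    trends = []
    if len(filtered) < 2:
        return filtered, trends
    start, run = 0, sign(filtered[1] - filtered[0])
    for i in range(2, len(filtered)):
        cur = sign(filtered[i] - filtered[i - 1])
        if cur != run:
            trends.append((filtered[start], filtered[i - 1], run))
            start, run = i - 1, cur
    trends.append((filtered[start], filtered[-1], run))
    return filtered, trends
```

For example, a spike of 100 inside a ramp-up/ramp-down series is removed by the Filter stage, and the remaining points compress to two qualitative trends.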
Procedia PDF Downloads 212
5977 Comparative Analysis of Various Waste Oils for Biodiesel Production
Authors: Olusegun Ayodeji Olagunju, Christine Tyreesa Pillay
Abstract:
Biodiesel from waste sources is regarded as an economical and highly viable alternative to depleting fossil fuels. In this work, biodiesel was produced by transesterification from three different sources of waste cooking oil, all vegetable-based and collected from cafeterias. The free fatty acid content (% FFA) of the feedstocks was determined by titration; the results for sources 1, 2, and 3 were 0.86 %, 0.54 %, and 0.20 %, respectively. The three process variables considered were temperature, reaction time, and catalyst concentration, within the ranges 50 °C – 70 °C, 30 min – 90 min, and 0.5 % – 1.5 % catalyst. The produced biodiesel was characterized using ASTM standard test methods to determine fuel properties including kinematic viscosity, specific gravity, flash point, pour point, cloud point, and acid number. The results indicate that the biodiesel yield from source 3 was greater than from the other sources, and all fuel properties of the produced biodiesel are within the ASTM D6751 biodiesel specifications. The optimum yields of 98.76 %, 96.4 %, and 94.53 % were obtained from sources 3, 2, and 1, respectively, at the optimum operating conditions of 65 °C, 90 minutes reaction time, and 0.5 wt% potassium hydroxide.
Keywords: waste cooking oil, biodiesel, free fatty acid content, potassium hydroxide catalyst, optimization analysis
Procedia PDF Downloads 77
5976 Experimental Investigation of Cutting Forces and Temperature in Bone Drilling
Authors: Vishwanath Mali, Hemant Warhatkar, Raju Pawade
Abstract:
Drilling of bone has always been challenging for surgeons due to the adverse effects it may impart to bone tissues. Force has to be applied manually by the surgeon while performing conventional bone drilling, which may lead to permanent death of bone tissues and nerves. During bone drilling, the temperature of the bone tissues can rise above 47 °C, causing thermal osteonecrosis, which results in screw loosening and subsequent implant failure. An attempt has been made here to study the input drilling parameters and surgical drill bit geometry affecting bone health during drilling. A One Factor At a Time (OFAT) method was used to plan the experiments. The input drilling parameters studied were spindle speed and feed rate; the drill bit geometry parameters were point angle and helix angle. The output variables were drilling thrust force and bone temperature. The experiments were conducted on goat femur bone at a room temperature of 30 °C. A KISTLER cutting force dynamometer (Type 9257BA) was used to measure thrust forces, and NI LabVIEW software was used for continuous temperature data acquisition. A fixture was made on an RPT machine for holding the bone specimen during drilling, and the bone specimens were preserved in a deep freezer (LABTOP make) at -40 °C. Regarding the drilling parameters, it was observed that at constant feed rate, thrust force and temperature decrease as spindle speed increases, while at constant spindle speed, both increase with feed rate. Regarding the drill bit geometry, at constant helix angle, thrust force and temperature increase with point angle, while at constant point angle, both decrease as helix angle increases. Hence it is concluded that temperature rises with thrust force. For the drilling parameters, the lowest thrust force and temperature, 35.55 N and 36.04 °C respectively, were recorded at a spindle speed of 2000 rpm and a feed rate of 0.04 mm/rev. For the drill bit geometry, the lowest thrust force and temperature, 40.81 N and 34 °C respectively, were recorded at a point angle of 70° and a helix angle of 25°. Hence, to avoid thermal necrosis of bone, it is recommended to use higher spindle speed, lower feed rate, low point angle, and high helix angle. The hard nature of cortical bone contributes to a greater temperature rise, whereas a considerable drop in temperature is observed during cancellous bone drilling.
Keywords: bone drilling, helix angle, point angle, thrust force, temperature, thermal necrosis
Procedia PDF Downloads 309
5975 Three-Dimensional Positioning Method of Indoor Personnel Based on Millimeter Wave Radar Sensor
Authors: Chao Wang, Zuxue Xia, Wenhai Xia, Rui Wang, Jiayuan Hu, Rui Cheng
Abstract:
Aiming at indoor personnel positioning under smog conditions, this paper proposes a 3D positioning method based on the IWR1443 millimeter wave radar sensor. The problem that millimeter-wave radar cannot effectively form contours in 3D point cloud imaging is solved. The results show that the method can effectively achieve indoor positioning and scene construction, with a maximum positioning error of 0.130 m.
Keywords: indoor positioning, millimeter wave radar, IWR1443 sensor, point cloud imaging
Procedia PDF Downloads 112
5974 Role of Water Supply in the Functioning of the MLDB Systems
Authors: Ramanpreet Kaur, Upasana Sharma
Abstract:
The purpose of this paper is to address the challenges faced by the MLDB system at a piston foundry plant due to interruptions in the supply of water. For the MLDB system to work in this model, two sub-units must be connected to the robotic main unit; the system cannot function without the robotics and the water supply by fan (WSF), and insufficient water supply is a cause of system failure. The system operates at top performance using both sub-units; if one sub-unit fails, the system capacity is reduced. Priority of repair is given to the main units, i.e., the robotics and the WSF. To solve the problem, the semi-Markov process and the regenerative point technique are used, and relevant graphs are included for a particular case.
Keywords: MLDB system, robotic, semi-Markov process, regenerative point technique
Procedia PDF Downloads 77
5973 Poverty Dynamics in Thailand: Evidence from Household Panel Data
Authors: Nattabhorn Leamcharaskul
Abstract:
This study examines the determinants of poverty dynamics in Thailand using panel data on 3,567 households over 2007-2017. Four estimation techniques are employed to analyze the poverty situation across households and time periods: the multinomial logit model, the sequential logit model, the quantile regression model, and the difference-in-differences model. Households are categorized by their experience into five groups: chronically poor, falling into poverty, re-entering poverty, exiting poverty, and never poor. The estimation results emphasize the effects of demographic and socioeconomic factors as well as unexpected events on a household's economic status. Remittances are found to have a positive impact on a household's economic status: they tend to lower the probability of falling into or remaining trapped in poverty while increasing the probability of exiting poverty. In addition, receiving a secondary source of household income not only raises the probability of being a never poor household but also significantly increases the household income per capita of the chronically poor and falling-into-poverty households. Public work programs are recommended as an important tool to relieve household financial burden and uncertainty and thus increase households' chances of escaping poverty.
Keywords: difference in difference, dynamic, multinomial logit model, panel data, poverty, quantile regression, remittance, sequential logit model, Thailand, transfer
Procedia PDF Downloads 112
5972 Algorithms for Computing of Optimization Problems with a Common Minimum-Norm Fixed Point with Applications
Authors: Apirak Sombat, Teerapol Saleewong, Poom Kumam, Parin Chaipunya, Wiyada Kumam, Anantachai Padcharoen, Yeol Je Cho, Thana Sutthibutpong
Abstract:
This research studies a two-step iteration process defined over a finite family of σ-asymptotically quasi-nonexpansive nonself-mappings. Strong convergence is guaranteed in the framework of Banach spaces with some additional structural properties, including strict and uniform convexity, reflexivity, and smoothness assumptions. In analogy with the projection technique for nonself-mappings in Hilbert spaces, we use the generalized projection to construct a point within the corresponding domain. Moreover, we introduce the duality mapping and its inverse to overcome the unavailability of the duality representation that Hilbert space theorists exploit. We then apply our results for σ-asymptotically quasi-nonexpansive nonself-mappings to solve for the ideal efficiency of vector optimization problems composed of finitely many objective functions, and we show that the solution obtained from our process is the closest to the origin. An illustrative numerical example is given to support the results.
Keywords: asymptotically quasi-nonexpansive nonself-mapping, strong convergence, fixed point, uniformly convex and uniformly smooth Banach space
Procedia PDF Downloads 260
5971 Efficacy of Computer Mediated Power Point Presentations on Students' Learning Outcomes in Basic Science in Oyo State, Nigeria
Authors: Sunmaila Oyetunji Raimi, Olufemi Akinloye Bolaji, Abiodun Ezekiel Adesina
Abstract:
The lingering poor performance of students in basic science spells doom for the vibrant scientific and technological development that pivots the economic, social, and physical upliftment of any nation. This calls for identifying appropriate strategies for imparting basic science knowledge and attitudes to the teeming youths in secondary schools. This study therefore determined the impact of computer-mediated PowerPoint presentations on students' achievement in basic science in Oyo State, Nigeria. A pre-test, post-test, control group quasi-experimental design was adopted for the study. Two hundred and five junior secondary two students, selected using a stratified random sampling technique, participated in the study. Three research questions and three hypotheses guided the study. Two evaluative instruments, the Students' Basic Science Attitudes Scale (SBSAS, r = 0.91) and the Students' Knowledge of Basic Science Test (SKBST, r = 0.82), were used for data collection. Descriptive statistics (mean, standard deviation) and inferential statistics (ANCOVA, Scheffe post-hoc test) were used to analyse the data. The results indicated a significant main effect of treatment on students' cognitive (F(1,200) = 171.680; p < 0.05) and attitudinal (F(1,200) = 34.466; p < 0.05) achievement in basic science, with the experimental group having a higher mean gain than the control group. Gender had a significant main effect (F(1,200) = 23.382; p < 0.05) on students' cognitive outcomes but not on attitudinal achievement. The study therefore recommended, among others, that computer-mediated PowerPoint presentations be incorporated into the curriculum methodology of basic science in secondary schools.
Keywords: basic science, computer mediated power point presentations, gender, students' achievement
Procedia PDF Downloads 429
5970 Motion Planning and Posture Control of the General 3-Trailer System
Authors: K. Raghuwaiya, B. Sharma, J. Vanualailai
Abstract:
This paper presents a set of artificial potential field functions that improves upon the motion planning and posture control of the general 3-trailer system in an a priori known environment, with theoretically guaranteed point and posture stabilities, convergence, and collision avoidance properties. We design and inject two new concepts, ghost walls and the distance optimization technique (DOT), to strengthen the point and posture stabilities, in the sense of Lyapunov, of our dynamical model. This new combination of techniques emerges as a convenient mechanism for obtaining feasible orientations at the target positions with an overall reduction in the complexity of the navigation laws. Simulations are provided to demonstrate the effectiveness of the control laws.
Keywords: artificial potential fields, 3-trailer systems, motion planning, posture
Procedia PDF Downloads 426
5969 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
A practical, efficient approach is suggested for estimating the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. In the case of super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag: reliable estimation of a monitored object's coordinates requires time for the C-OTDR system to collect observation data, and only once the required sample volume has been collected can the final decision be issued. This is contrary to the requirements of many real applications; for example, rail traffic management systems need data on the localization of dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of the C-OTDR signals to obtain the most reliable solution in real time. We call parameters of this type 'signaling parameters' (SPs). Several SPs carry information about the instantaneous localization of dynamic objects in each C-OTDR channel. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are unstable, while, as a rule, a very stable SP is insensitive. This report describes a method for co-processing the SPs, designed to obtain the most effective estimates of dynamic object localization within the C-OTDR monitoring system framework.
Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems
Procedia PDF Downloads 470
5968 Developing Allometric Equations for More Accurate Aboveground Biomass and Carbon Estimation in Secondary Evergreen Forests, Thailand
Authors: Titinan Pothong, Prasit Wangpakapattanawong, Stephen Elliott
Abstract:
Shifting cultivation is an indigenous agricultural practice among upland people and has long been one of the major land-use systems in Southeast Asia. As a result, fallows and secondary forests have come to cover a large part of the region. However, they are increasingly being replaced by monocultures, such as corn cultivation. This is believed to be a main driver of deforestation and forest degradation, and one of the reasons behind the recurring winter smog crisis in Thailand and around Southeast Asia. Accurate biomass estimation of trees is important for quantifying valuable carbon stocks and the changes to these stocks in case of land-use change. However, Thailand presently lacks proper tools and optimal equations to quantify its carbon stocks, especially for secondary evergreen forests, including fallow areas after shifting cultivation and smaller trees with a diameter at breast height (DBH) of less than 5 cm. Developing new allometric equations to estimate biomass is therefore urgently needed to accurately estimate and manage carbon storage in tropical secondary forests. This study established new equations using a destructive method at three study sites: an approximately 50-year-old secondary forest, a 4-year-old fallow, and a 7-year-old fallow. Tree biomass was collected by harvesting 136 individual trees (including coppiced trees) from 23 species, with DBH ranging from 1 to 31 cm. Oven-dried samples were sent for carbon analysis. Wood density was calculated from disk samples and from samples collected with an increment borer from 79 species, including 35 species currently missing from the Global Wood Density database. Several models were developed, showing that aboveground biomass (AGB) was strongly related to DBH, height (H), and wood density (WD); including WD in the model improved the accuracy of the AGB estimation. This study provides insights for reforestation management and can be used to prepare baseline data on Thailand's carbon stocks for REDD+ and other carbon trading schemes, which may provide monetary incentives to stop illegal logging and deforestation for monoculture.
Keywords: aboveground biomass, allometric equation, carbon stock, secondary forest
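Allometric models of this kind are typically fitted on a log scale; a widely used form that combines DBH, H, and WD into a single predictor is ln(AGB) = a + b·ln(WD·DBH²·H). The sketch below fits that form by ordinary least squares on synthetic data; the coefficients, the height-diameter relation, and the functional form itself are illustrative assumptions, not the equations developed in this study:

```python
import numpy as np

# Synthetic "harvest" data standing in for destructively sampled trees:
# DBH (cm), height H (m), and wood density WD (g/cm^3).
rng = np.random.default_rng(42)
dbh = rng.uniform(1.0, 31.0, 136)   # DBH range reported in the study
h = 1.5 * dbh ** 0.7                # assumed height-diameter relation
wd = rng.uniform(0.4, 0.8, 136)

# Generate AGB (kg) from assumed "true" coefficients a = -2.0, b = 0.95.
agb = np.exp(-2.0 + 0.95 * np.log(wd * dbh ** 2 * h))

# Fit ln(AGB) = a + b * ln(WD * DBH^2 * H) by ordinary least squares.
x = np.log(wd * dbh ** 2 * h)
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, np.log(agb), rcond=None)
a_hat, b_hat = coef
```

With noise-free synthetic data, the least-squares fit recovers the generating coefficients, which is a quick sanity check before fitting real harvest data.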
Procedia PDF Downloads 284
5967 Schrödinger Equation with Position-Dependent Mass: Staggered Mass Distributions
Authors: J. J. Peña, J. Morales, J. García-Ravelo, L. Arcos-Díaz
Abstract:
The point canonical transformation method is applied to solve the Schrödinger equation with position-dependent mass. This class of problem has previously been solved for continuous mass distributions. In this work, a staggered mass distribution is proposed for the case of a free particle in an infinite square well potential. The continuity conditions as well as the normalization of the wave function are also considered. The proposal can be used for dealing with other kinds of staggered mass distributions in the Schrödinger equation with different quantum potentials.
Keywords: free particle, point canonical transformation method, position-dependent mass, staggered mass distribution
Procedia PDF Downloads 403
5966 Dynamic Analysis and Clutch Adaptive Prefill in Dual Clutch Transmission
Authors: Bin Zhou, Tongli Lu, Jianwu Zhang, Hongtao Hao
Abstract:
Dual clutch transmissions (DCTs) offer high gearshift comfort. Hydraulic multi-disk clutches are the key components of a DCT, and their engagement determines the shifting comfort. The prefill of the clutches requires an initial engagement at which the clutches just contact each other but do not yet transmit substantial torque from the engine; this initial clutch engagement point is called the touch point. Open-loop control is typically implemented for the clutch prefill, and many uncertainties, such as oil temperature and clutch wear, significantly affect the prefill, possibly resulting in an inappropriate touch point. Underfill causes engine flare during the gearshift while overfill causes clutch tie-up, both of which deteriorate the shifting comfort of a DCT. It is therefore important to give the clutch prefill an adaptive capacity with respect to these uncertainties. In this paper, a dynamic model of the hydraulic actuator system, including the variable force solenoid and the clutch piston, is presented and validated by a test. Subsequently, the open-loop clutch prefill is simulated based on the proposed model. Two control parameters of the prefill, the fast fill time and the stable fill pressure, are analyzed with regard to their impact on the prefill: the former has a great effect on the pressure transients, while the latter directly influences the touch point. Finally, an adaptive method is proposed for the clutch prefill during gear shifting, in which the clutch fill control parameters are adjusted adaptively and continually. The adaptive strategy changes the stable fill pressure according to the current clutch slip during a gearshift, improving the next prefill process: the stable fill pressure is increased as a function of the clutch slip in case of underfill and decreased by a constant value in case of overfill. The entire strategy was designed in Simulink/Stateflow and implemented in the transmission control unit with optimization. Road vehicle test results have shown that the strategy realizes its adaptive capability and improves the shifting comfort.
Keywords: clutch prefill, clutch slip, dual clutch transmission, touch point, variable force solenoid
Procedia PDF Downloads 308
5965 A Contribution to Blockchain Privacy
Authors: Malika Yaici, Feriel Lalaoui, Lydia Belhoul
Abstract:
As a new distributed peer-to-peer (P2P) technology, blockchain has become a very broad field of research, addressing various challenges including privacy preservation, as is the case in all other technologies. In this work, a study of the existing solutions to privacy problems in general, and in blockchains in particular, is performed. User anonymity and transaction confidentiality are the two main challenges for the protection of privacy in blockchains. Mixing mechanisms and cryptographic solutions respond to this problem but remain subject to attacks and suffer from shortcomings. Taking these imperfections and the synthesis of our study into account, we present a mixing model without trusted third parties, based on group signatures, which reinforces the anonymity of users and the confidentiality of transactions, with minimal turnaround time and no mixing fees.
Keywords: anonymity, blockchain, mixing coins, privacy
Procedia PDF Downloads 10
5964 Strong Convergence of an Iterative Sequence in Real Banach Spaces with Kadec Klee Property
Authors: Umar Yusuf Batsari
Abstract:
Let E be a uniformly smooth and uniformly convex real Banach space and C a nonempty, closed and convex subset of E. Let $V = \{S_i : C \to C,~ i = 1, 2, \ldots, N\}$ be a convex set of relatively nonexpansive mappings containing the identity. In this paper, an iterative sequence obtained from a CQ algorithm is shown to converge strongly to a point $\hat{x}$ which is a common fixed point of the relatively nonexpansive mappings in V and also solves a system of equilibrium problems in E. The result improves some existing results in the literature.
Keywords: relatively nonexpansive mappings, strong convergence, equilibrium problems, uniformly smooth space, uniformly convex space, convex set, Kadec-Klee property
Procedia PDF Downloads 422
5963 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes
Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani
Abstract:
The development of methods to annotate unknown gene functions is an important task in bioinformatics. One approach to annotation is the identification of the metabolic pathway that a gene is involved in. Gene expression data have been utilized for this identification, since they reflect various intracellular phenomena. However, it has been difficult to estimate gene function with high accuracy; the low accuracy of the estimation is considered to be caused by the difficulty of accurately measuring gene expression, which usually varies even between measurements under the same condition. In this study, we propose a feature extraction method focusing on the variability of gene expression to estimate genes' metabolic pathways accurately. First, we estimate the distribution of each gene's expression from replicate data. Next, we calculate the similarity between all gene pairs by the Kullback-Leibler (KL) divergence, a measure of the difference between distributions. Finally, we use the similarity vectors as feature vectors and train a multiclass SVM to identify the genes' metabolic pathways. To evaluate the developed method, we applied it to budding yeast and trained a multiclass SVM to identify seven metabolic pathways. As a result, the accuracy achieved by the developed method was higher than that obtained from the raw gene expression data. Thus, the developed method combined with the KL divergence is useful for identifying genes' metabolic pathways.
Keywords: metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning
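If each gene's expression is modeled as a Gaussian estimated from its replicates, the KL divergence between two genes' distributions has a closed form, and a symmetrised version can serve as one pairwise entry of the feature vectors described above. A minimal sketch, assuming univariate Gaussian fits (the abstract does not specify the distributional family):

```python
import math

def gaussian_kl(mu_p, sd_p, mu_q, sd_q):
    """Closed-form KL(P || Q) for two univariate normal distributions."""
    return (math.log(sd_q / sd_p)
            + (sd_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sd_q ** 2)
            - 0.5)

def expression_similarity(reps_a, reps_b):
    """Symmetrised KL between per-gene expression distributions
    estimated from replicate measurements (0 = identical)."""
    def fit(reps):
        mu = sum(reps) / len(reps)
        sd = (sum((r - mu) ** 2 for r in reps) / len(reps)) ** 0.5
        return mu, max(sd, 1e-9)  # guard against zero variance
    mu_a, sd_a = fit(reps_a)
    mu_b, sd_b = fit(reps_b)
    return (gaussian_kl(mu_a, sd_a, mu_b, sd_b)
            + gaussian_kl(mu_b, sd_b, mu_a, sd_a))
```

Computing this value for every gene pair yields, per gene, a vector of divergences to all other genes, which is the kind of feature vector a multiclass SVM can then be trained on.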
Procedia PDF Downloads 403
5962 Searching k-Nearest Neighbors to be Appropriate under Gaming Environments
Authors: Jae Moon Lee
Abstract:
In general, algorithms for finding continuous k-nearest neighbors have been researched for location-based services that periodically monitor moving objects such as vehicles and mobile phones. That research assumes an environment in which the number of query points is much smaller than the number of moving objects and the query points are fixed rather than moving. In gaming environments, this problem arises when computing an object's next movement in consideration of its neighbors, as in flocking, crowd, and robot simulations. In this case, every moving object becomes a query point, so the number of query points equals the number of moving objects, and the query points themselves are moving. In this paper, we analyze how the existing algorithms, designed for location-based services, perform under gaming environments.
Keywords: flocking behavior, heterogeneous agents, similarity, simulation
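The gaming setting described above, in which every agent is simultaneously a moving object and a query point, can be captured by a brute-force baseline that recomputes all neighbour lists each simulation tick. This O(n²) sketch is a naive reference point for such comparisons, not one of the algorithms analyzed in the paper:

```python
def knn_all_agents(positions, k):
    """Brute-force k-nearest neighbours where every agent is both a
    moving object and a query point, as in flocking simulations.
    Returns, for each agent index, the indices of its k nearest peers."""
    result = []
    for i, (xi, yi) in enumerate(positions):
        dists = []
        for j, (xj, yj) in enumerate(positions):
            if i != j:
                # Squared Euclidean distance is enough for ranking.
                dists.append(((xi - xj) ** 2 + (yi - yj) ** 2, j))
        dists.sort()
        result.append([j for _, j in dists[:k]])
    return result
```

Because all query points move every frame, this whole computation must be repeated each tick, which is exactly why the paper's performance analysis of cheaper continuous-kNN algorithms matters in games.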
Procedia PDF Downloads 302
5961 Bed Scenes Allurement as Entertainment and Selling Point in Nigeria's Nollywood Movie Industry
Authors: Ojinime E. Ojiakor, Allen N. Adum
Abstract:
We report on bed scene allurement as entertainment and selling point in Nigeria's Nollywood movie industry. In recent times, there has been an increase in the portrayal of bed scenes in Nollywood movies. Before now, Nigerian film producers had been very conservative when it came to showing sex and nudity; this appears to have changed in line with global trends. Movie industries all over the world appear to be a haven for delectable women who glamorize our screens, not only with their beauty but also with their acting skills. In Hollywood, Bollywood, Ghollywood, and the like, pretty actresses with sensuous endowments engage in bed scenes which allure the minds of viewers. The idea that a ravishing beauty in the cast is as good as a box office hit apparently drives Nigerian film producers to incorporate bed scenes in their movies. In this era of sex crusade, where what sells is sex and perhaps a little violence, there is the suggestion that producers believe that if an actress's talent doesn't do the trick, the sexiness she exudes is bound to get attention. Against this backdrop, our study examined the depiction of bed scenes in Nollywood films in an attempt to establish whether their allurement influences the choice of movie and the purchase decisions of target markets. We assessed Nollywood films and viewer preference using the mixed methods approach. Our findings reveal that bed scenes, as portrayed in Nigerian movies, are a significant determinant of which films to watch and which films to purchase among the respondents studied.
Keywords: allurement, bed scenes, nollywood, selling point
Procedia PDF Downloads 273
5960 Estimating the Receiver Operating Characteristic Curve from Clustered Data and Case-Control Studies
Authors: Yalda Zarnegarnia, Shari Messinger
Abstract:
Receiver operating characteristic (ROC) curves have been widely used in medical research to illustrate the performance of a biomarker in correctly distinguishing between diseased and non-diseased groups. Correlated biomarker data arise in study designs that include subjects who share genetic or environmental factors. Information about this correlation might help to identify family members at increased risk of disease development and may lead to initiating treatment to slow or stop the progression to disease. Approaches appropriate to a case-control design matched by family identification must be able to accommodate the correlation inherent in the design when estimating the biomarker's ability to differentiate between cases and controls, as well as handle estimation from a matched case-control design. This talk reviews some developed methods for ROC curve estimation in settings with correlated data from case-control designs and discusses the limitations of current methods for analyzing correlated familial paired data. An alternative approach using conditional ROC curves is demonstrated, providing appropriate ROC curves for correlated paired data. The proposed approach uses the information about the correlation among biomarker values, producing conditional ROC curves that evaluate the ability of a biomarker to discriminate between diseased and non-diseased subjects in a familial paired design.
Keywords: biomarker, correlation, familial paired design, ROC curve
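As background for the standard (unconditional) ROC analysis that the conditional approach extends, the empirical area under the ROC curve for independent samples equals the probability that a randomly chosen diseased subject's biomarker value exceeds a randomly chosen healthy subject's. A minimal sketch of that classical estimator (not the conditional ROC method proposed in the talk):

```python
def empirical_roc_auc(diseased, healthy):
    """Empirical AUC: the fraction of (diseased, healthy) pairs in which
    the diseased subject's biomarker value is higher; ties count half.
    Equivalent to the Mann-Whitney U statistic scaled to [0, 1]."""
    wins = 0.0
    for d in diseased:
        for h in healthy:
            if d > h:
                wins += 1.0
            elif d == h:
                wins += 0.5
    return wins / (len(diseased) * len(healthy))
```

An AUC of 1.0 indicates perfect separation and 0.5 indicates a useless biomarker; the paper's contribution is that this pairwise estimator is no longer appropriate when the pairs are correlated within families.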
Procedia PDF Downloads 239
5959 Bayesian Inference for High Dimensional Dynamic Spatio-Temporal Models
Authors: Sofia M. Karadimitriou, Kostas Triantafyllopoulos, Timothy Heaton
Abstract:
Reduced-dimension Dynamic Spatio-Temporal Models (DSTMs) jointly describe the spatial and temporal evolution of a function observed subject to noise. A basic state space model is adopted for the discrete temporal variation, while a continuous autoregressive structure describes the continuous spatial evolution. Application of such a DSTM relies upon the pre-selection of a suitable reduced set of basis functions, and this can present a challenge in practice. In this talk, we propose an online estimation method for high dimensional spatio-temporal data based upon DSTMs, and we attempt to resolve this issue by allowing the basis to adapt to the observed data. Specifically, we present a wavelet decomposition in order to obtain a parsimonious approximation of the continuous spatial process. This parsimony can be achieved by placing a Laplace prior distribution on the wavelet coefficients. The aim of using the Laplace prior is to filter out wavelet coefficients with low contribution, and thus achieve the dimension reduction with significant computational savings. We then propose a hierarchical Bayesian state space model, for the estimation of which we offer an appropriate particle filter algorithm. The proposed methodology is illustrated using real environmental data.
Keywords: multidimensional Laplace prior, particle filtering, spatio-temporal modelling, wavelets
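The dimension-reduction idea can be illustrated with a toy one-level Haar wavelet transform: the MAP estimate under a Laplace prior corresponds to soft thresholding, which zeroes out low-contribution detail coefficients. This sketch uses an illustrative signal and threshold, not the authors' hierarchical particle-filter machinery:

```python
# One-level Haar wavelet transform with soft thresholding.
# Under a Laplace prior, MAP estimation of the coefficients is
# equivalent to soft thresholding, which discards small coefficients
# (dimension reduction). The threshold lam is illustrative.

def haar_forward(signal):
    """One level of the orthonormal Haar transform (even-length input)."""
    s = 2 ** 0.5
    approx = [(a + b) / s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def soft_threshold(coeffs, lam):
    """MAP shrinkage under a Laplace prior: shrink toward zero by lam."""
    return [max(abs(c) - lam, 0.0) * (1 if c > 0 else -1) for c in coeffs]

signal = [4.0, 4.1, 4.0, 3.9, 8.0, 8.1, 4.0, 4.1]
approx, detail = haar_forward(signal)
detail = soft_threshold(detail, lam=0.2)   # small details vanish entirely
smoothed = haar_inverse(approx, detail)
```

Here every detail coefficient is small enough to be zeroed, so the signal is represented by half as many coefficients, which is the computational saving the Laplace prior targets.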
Procedia PDF Downloads 427
5958 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network
Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan
Abstract:
Data aggregation is a helpful technique for reducing the data communication overhead in wireless sensor networks. One of the important tasks in data aggregation is the positioning of the aggregator points. Although much work has been done on data aggregation itself, the efficient positioning of aggregation points has received comparatively little attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network. The authors propose an algorithm to select the aggregator positions for a scenario in which aggregator nodes are more powerful than sensor nodes.
Keywords: aggregation point, data communication, data aggregation, wireless sensor network
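The abstract does not give the algorithm's details; a common baseline for this kind of placement problem (assumed here for illustration, not the authors' method) is a k-means-style iteration that moves each aggregator to the centroid of the sensors it serves, reducing total sensor-to-aggregator distance:

```python
# k-means-style placement of aggregation points in a sensor field.
# Generic baseline sketch, not the paper's algorithm: each aggregator
# moves to the centroid of the sensors nearest to it.
import math
import random

def place_aggregators(sensors, k, iterations=50, seed=0):
    """Return k aggregator positions for a list of (x, y) sensor coords."""
    random.seed(seed)
    centers = random.sample(sensors, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for x, y in sensors:
            # assign each sensor to its nearest aggregator
            i = min(range(k), key=lambda j: math.dist((x, y), centers[j]))
            clusters[i].append((x, y))
        # move each aggregator to the centroid of its cluster
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers
```

A real deployment would add the paper's constraint that aggregator nodes are more powerful than sensor nodes, e.g. by restricting candidate positions to the powerful nodes.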
Procedia PDF Downloads 157
5957 Mechanical Properties of Kenaf Reinforced Composite with Different Fiber Orientation
Authors: Y. C. Ching, K. H. Chong
Abstract:
Increasing environmental awareness has led to growing interest in the development of materials with eco-friendly attributes. In this study, a three-ply sandwich laminate of kenaf fiber reinforced unsaturated polyester with various fiber orientations was developed. The effect of the fiber orientation on the mechanical properties and thermal stability of the polyester was studied. Unsaturated polyester as the face sheets and kenaf fibers as the core were fabricated with a combination of the hand lay-up process and the cold compression method. Measured parameters such as tensile strength, flexural strength, impact strength, melting point, and crystallization point were compared and recorded for the different fiber orientations. The failure mechanism and the property changes associated with the change in fiber direction in the polyester composite are discussed.
Keywords: kenaf fiber, polyester, tensile, thermal stability
Procedia PDF Downloads 358
5956 Assessment of DNA Degradation Using Comet Assay: A Versatile Technique for Forensic Application
Authors: Ritesh K. Shukla
Abstract:
Degradation of biological samples at the level of macromolecules (DNA, RNA, and protein) is a major challenge in forensic investigation, as it can mislead result interpretation. Currently, there are no precise methods available to circumvent this problem; therefore, at the preliminary level, some methods are urgently needed to address this issue. In this regard, the Comet assay is one of the most versatile, rapid, and sensitive molecular biology techniques for assessing DNA degradation. This technique helps to assess DNA degradation even with a very small amount of sample. Moreover, a convenient feature of this method is that it does not require any additional DNA extraction and isolation steps during the assessment of DNA degradation. Samples are embedded directly on an agarose pre-coated microscope slide, and electrophoresis is performed on the same slide after the lysis step. After electrophoresis, the slide is stained with a DNA-binding dye and observed under a fluorescence microscope equipped with Komet software. With the help of this technique, the extent of DNA degradation can be assessed, making it possible to screen samples before DNA fingerprinting to determine whether they are appropriate for DNA analysis. This technique not only helps to assess the degradation of DNA; many other challenges in forensic investigation, such as estimating the time since deposition of biological fluids, repairing genetic material from degraded biological samples, and early estimation of the time since death, could also be addressed. With this study, an attempt was made to explore the application of a well-known molecular biology technique, the Comet assay, in the field of forensic science. This assay will open avenues in the field of forensic research and development.
Keywords: comet assay, DNA degradation, forensic, molecular biology
Procedia PDF Downloads 155
5955 A Lightweight Authentication and Key Exchange Protocol Design for Smart Homes
Authors: Zhifu Li, Lei Li, Wanting Zhou, Yuanhang He
Abstract:
This paper proposes a lightweight certificateless authentication and key exchange protocol (Light-CL-PKC) based on elliptic curve cryptography and the Chinese Remainder Theorem for smart home scenarios. Light-CL-PKC can efficiently reduce the computational cost on both sides of authentication by forgoing time-consuming bilinear pairing operations and making full use of point-addition and point-multiplication operations on elliptic curves. The authentication and key exchange processes in this system are also completed in a single round of communication between the two parties. The analysis demonstrates that the protocol reduces communication overhead by more than 32.14% compared with the referenced protocols, while the runtime for both authentication and key exchange has also been significantly reduced.
Keywords: authentication, key exchange, certificateless public key cryptography, elliptic curve cryptography
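The point-addition and point-multiplication operations that Light-CL-PKC relies on can be illustrated with a minimal double-and-add sketch over a toy short-Weierstrass curve; the parameters P = 97, A = 2, B = 3 and the base point are illustrative only and offer no security (real deployments use standardized curves such as P-256):

```python
# Scalar multiplication on y^2 = x^3 + A*x + B over F_P via double-and-add.
# Toy parameters for illustration only; NOT a secure curve.
P, A, B = 97, 2, 3
O = None  # point at infinity (group identity)

def inv(x):
    return pow(x, P - 2, P)  # Fermat inverse, valid since P is prime

def add(p1, p2):
    """Elliptic-curve point addition in affine coordinates."""
    if p1 is O:
        return p2
    if p2 is O:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                   # p2 == -p1
    if p1 == p2:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % P    # tangent slope
    else:
        m = (y2 - y1) * inv(x2 - x1) % P           # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, point):
    """Double-and-add scalar multiplication k * point."""
    result, addend = O, point
    while k:
        if k & 1:
            result = add(result, addend)
        addend = add(addend, addend)
        k >>= 1
    return result
```

Double-and-add runs in time logarithmic in the scalar, which is why point multiplication is so much cheaper than the bilinear pairings the protocol avoids.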
Procedia PDF Downloads 98
5954 Estimation of Normalized Glandular Doses Using a Three-Layer Mammographic Phantom
Authors: Kuan-Jen Lai, Fang-Yi Lin, Shang-Rong Huang, Yun-Zheng Zeng, Po-Chieh Hsu, Jay Wu
Abstract:
The normalized glandular dose (DgN) is used to estimate the energy deposition of mammography in clinical practice. Monte Carlo simulations frequently use a uniformly mixed phantom to calculate the conversion factor. However, breast tissues are not uniformly distributed, leading to errors in the conversion factor estimation. This study constructed a three-layer phantom to estimate the normalized glandular dose more accurately. The MCNP code (Monte Carlo N-Particle code) was used to create the geometric structure. We simulated three target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh), six voltages (25 ~ 35 kVp), six HVL parameters, and nine breast phantom thicknesses (2 ~ 10 cm) for the three-layer mammographic phantom. The conversion factor for 25%, 50%, and 75% glandularity was calculated. The error of the conversion factors compared with the results of the American College of Radiology (ACR) was within 6%; for Rh/Rh, the difference was within 9%. The difference between 50% average glandularity and the uniform phantom was 7.1% ~ -6.7% for the Mo/Mo combination at a voltage of 27 kVp, a half value layer of 0.34 mmAl, and a breast thickness of 4 cm. According to the simulation results, regression analysis found that the three-layer mammographic phantom at 0% ~ 100% glandularity can be used to accurately calculate the conversion factors. Differences in glandular tissue distribution lead to errors in the conversion factor calculation; the three-layer mammographic phantom can provide accurate estimates of glandular dose in clinical practice.
Keywords: Monte Carlo simulation, mammography, normalized glandular dose, glandularity
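The layered-phantom idea can be illustrated, far more crudely than the authors' MCNP transport, with a one-dimensional Monte Carlo sketch: photons traverse an adipose/glandular/adipose stack and are absorbed in a layer with probability set by exponential attenuation. The thicknesses and attenuation coefficients below are hypothetical illustration values, not mammographic data:

```python
# Toy 1-D Monte Carlo: photons attenuating through a three-layer phantom
# (adipose / glandular / adipose). Thicknesses (cm) and attenuation
# coefficients mu (per cm) are hypothetical; real DgN work uses full
# MCNP photon transport with measured spectra.
import math
import random

LAYERS = [(1.5, 0.50), (1.0, 0.80), (1.5, 0.50)]  # (thickness, mu) per layer

def simulate(n_photons, seed=1):
    """Count photons absorbed in each layer; the rest are transmitted."""
    random.seed(seed)
    absorbed = [0] * len(LAYERS)
    transmitted = 0
    for _ in range(n_photons):
        for i, (t, mu) in enumerate(LAYERS):
            # photon survives this layer with probability exp(-mu * t)
            if random.random() > math.exp(-mu * t):
                absorbed[i] += 1
                break
        else:
            transmitted += 1
    return absorbed, transmitted

absorbed, transmitted = simulate(20000)
glandular_fraction = absorbed[1] / 20000
# analytic check: survive layer 1, then interact in layer 2
expected = math.exp(-0.75) * (1 - math.exp(-0.8))
```

The fraction absorbed in the middle (glandular) layer is the kind of quantity a DgN conversion factor normalizes; the three-layer geometry changes it relative to a uniformly mixed phantom.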
Procedia PDF Downloads 189
5953 Simulation Modelling of the Transmission of Concentrated Solar Radiation through Optical Fibres to Thermal Application
Authors: M. Rahou, A. J. Andrews, G. Rosengarten
Abstract:
One of the main challenges in high-temperature solar thermal applications is to transfer concentrated solar radiation to the load with minimum energy loss and maximum overall efficiency. The use of a solar concentrator in conjunction with bundled optical fibres has potential advantages in terms of transmission energy efficiency, technical feasibility, and cost-effectiveness compared to a conventional heat transfer system employing heat exchangers and a heat transfer fluid. In this paper, a theoretical and computer simulation method is described to estimate the net solar radiation transmission from a solar concentrator into and through optical fibres to a thermal application at the end of the fibres, over distances of up to 100 m. A key input to the simulation is the angular distribution of radiation intensity at each point across the aperture plane of the optical fibre. This distribution depends on the optical properties of the solar concentrator: in this case, a parabolic mirror with a small secondary mirror sharing a common focal point, and a point-focus Fresnel lens to give a collimated beam that passes into the optical fibre bundle. Since solar radiation comprises a broad band of wavelengths with very limited spatial coherence over the full spectrum, only ray tracing is employed, modelling absorption within the fibre and reflections at the interface between core and cladding, and assuming no interference between rays. The intensity of the radiation across the exit plane of the fibre is found by integrating over all directions and wavelengths. Results of applying the simulation model to a parabolic concentrator and point-focus Fresnel lens with a typical optical fibre bundle will be reported, showing how the energy transmission varies with the length of the fibre.
Keywords: concentrated radiation, fibre bundle, parabolic dish, fresnel lens, transmission
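The core of such a ray-tracing estimate can be sketched for a single meridional ray in a step-index fibre: rays outside the acceptance cone (set by the numerical aperture) are lost, and accepted rays suffer Beer-Lambert attenuation along their zig-zag path. The refractive indices and attenuation coefficient below are illustrative assumptions, not the paper's values:

```python
# Transmission of a meridional ray through a step-index optical fibre:
# acceptance-cone check via the numerical aperture, then Beer-Lambert
# attenuation along the zig-zag path. Indices and attenuation are
# illustrative values, not data from the paper.
import math

def transmission(theta_in_deg, length_m, n_core=1.46, n_clad=1.44,
                 alpha_per_m=0.02):
    na = math.sqrt(n_core ** 2 - n_clad ** 2)   # numerical aperture
    theta_in = math.radians(theta_in_deg)
    if math.sin(theta_in) > na:
        return 0.0                               # no total internal reflection
    # Snell refraction at the entrance face (air -> core)
    theta_core = math.asin(math.sin(theta_in) / n_core)
    path = length_m / math.cos(theta_core)       # zig-zag path length
    return math.exp(-alpha_per_m * path)         # Beer-Lambert attenuation
```

Integrating this per-ray transmission over the angular intensity distribution at the fibre aperture (and over wavelength, with wavelength-dependent attenuation) gives the net transmitted power the abstract describes.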
Procedia PDF Downloads 564
5952 Assessment of the Electrical, Mechanical, and Thermal Nociceptive Thresholds for Stimulation and Pain Measurements at the Bovine Hind Limb
Authors: Samaneh Yavari, Christiane Pferrer, Elisabeth Engelke, Alexander Starke, Juergen Rehage
Abstract:
Background: Thermal, electrical, and mechanical nociceptive thresholds are commonly used to evaluate local anesthesia in many species, for instance, cows, horses, cats, dogs, and rabbits. Given the lack of investigations evaluating and validating these nociceptive thresholds, our aim was to compare two methods of foot local anesthesia: Intravenous Regional Anesthesia (IVRA) and our modified four-point Nerve Block Anesthesia (NBA). Materials and Methods: Eight healthy nonpregnant nondairy Holstein Frisian cows were selected for this study in a cross-over design. All cows were divided into two groups to receive the two local anesthesia techniques of IVRA and our modified four-point NBA. Thermal, electrical, and mechanical (force and pinprick) stimuli were applied to evaluate the quality of the local anesthesia methods before and after their application. Results: The statistical evaluation demonstrated that our four-point NBA qualifies for selection as a standard foot local anesthesia. However, the recorded results revealed no significant difference between the two local anesthesia techniques of IVRA and modified four-point NBA with respect to the quality and duration of anesthesia under electrical, mechanical, and thermal nociceptive stimuli. Conclusion and discussion: All three nociceptive threshold stimuli (electrical, mechanical, and heat) can be applied to measure and evaluate the efficacy of foot local anesthesia in dairy cows. However, our study revealed no superiority of any of the three nociceptive methods for evaluating the duration and quality of bovine foot local anesthesia methods. Veterinarians can use any of the heat, mechanical, and electrical methods to investigate the duration and quality of their selected anesthesia method.
Keywords: mechanical, thermal, electrical threshold, IVRA, NBA, hind limb, dairy cow
Procedia PDF Downloads 245
5951 Earnings vs Cash Flows: The Valuation Perspective
Authors: Megha Agarwal
Abstract:
This research paper is an effort to compare earnings-based and cash-flow-based methods of valuing an enterprise. The theoretically equivalent methods, based either on earnings, such as the Residual Earnings Model (REM), the Abnormal Earnings Growth Model (AEGM), the Residual Operating Income Method (ReOIM), the Abnormal Operating Income Growth Model (AOIGM) and its extensions to multipliers such as the price/earnings ratio and the price/book value ratio, or on cash flows, such as the Dividend Valuation Method (DVM) and the Free Cash Flow Method (FCFM), all provide different valuation estimates for the Indian corporate giant Reliance India Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years, 2008-09 to 2011-12, has been conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the complex accounting-based model AOIGM provides the closest forecasts. The different estimates may arise from inconsistencies in the discount rate, the growth rates, and the other forecasted variables. Although inputs for earnings-based models may be available to investors and analysts through published statements, precise estimation of free cash flows may be better undertaken by internal management. Estimation of value from more stable parameters such as residual operating income and RNOA could be considered superior to valuations from the more volatile return on equity.
Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)
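The residual earnings logic can be sketched numerically: equity value equals book value plus the present value of forecast earnings in excess of a charge for the cost of equity, with book value updated by the clean-surplus relation. All input figures below are hypothetical illustrations, not RIL data:

```python
# Residual Earnings Model sketch: value = book value + PV of expected
# residual earnings (earnings minus a cost-of-equity charge on book
# value). Short finite horizon, no terminal value; inputs hypothetical.

def residual_earnings_value(book_value, forecast_earnings, cost_of_equity,
                            payout_ratio=0.0):
    """Equity value from a short forecast horizon (no terminal value)."""
    value = book_value
    bv = book_value
    for t, earnings in enumerate(forecast_earnings, start=1):
        re = earnings - cost_of_equity * bv          # residual earnings
        value += re / (1 + cost_of_equity) ** t      # discount to today
        bv += earnings * (1 - payout_ratio)          # clean-surplus update
    return value

v = residual_earnings_value(book_value=100.0,
                            forecast_earnings=[15.0, 16.0, 17.0],
                            cost_of_equity=0.10)
```

When forecast earnings exactly equal the cost-of-equity charge, residual earnings are zero and the model returns book value, which is the anchoring property that makes REM less sensitive to growth assumptions than free-cash-flow discounting.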
Procedia PDF Downloads 375