Search results for: sequential covering method
19182 Implicit Eulerian Fluid-Structure Interaction Method for the Modeling of Highly Deformable Elastic Membranes
Authors: Aymen Laadhari, Gábor Székely
Abstract:
This paper is concerned with the development of a fully implicit and purely Eulerian fluid-structure interaction method tailored for the modeling of the large deformations of elastic membranes in a surrounding Newtonian fluid. We consider a simplified model for the mechanical properties of the membrane, in which the surface strain energy depends on the membrane stretching. The fully Eulerian description is based on the advection of a modified surface tension tensor, and the deformations of the membrane are tracked using a level set strategy. The resulting nonlinear problem is solved by a Newton-Raphson method, featuring a quadratic convergence behavior. A monolithic solver is implemented, and we report several numerical experiments aimed at model validation and illustrating the accuracy of the presented method. We show that stability is maintained for significantly larger time steps.
Keywords: finite element method, implicit, level set, membrane, Newton method
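As a rough illustration of the quadratic convergence the abstract mentions, a minimal scalar Newton-Raphson iteration (a sketch only, not the authors' monolithic FSI solver) might look like:

```python
# Minimal sketch of Newton-Raphson on a scalar nonlinear equation,
# illustrating quadratic convergence (not the authors' FSI solver).
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Return the root and the step-size history of Newton's iteration."""
    x, errors = x0, []
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        errors.append(abs(step))
        if abs(step) < tol:
            break
    return x, errors

# Example: solve x^3 - 2 = 0 (root = 2 ** (1/3)).
root, errs = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.5)
# Quadratic convergence: each step is roughly the square of the previous one.
```

With a reasonable starting guess, the step sizes shrink quadratically, which is why only a handful of iterations are needed per time step.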
Procedia PDF Downloads 304
19181 General Purpose Graphic Processing Units Based Real Time Video Tracking System
Authors: Mallikarjuna Rao Gundavarapu, Ch. Mallikarjuna Rao, K. Anuradha Bai
Abstract:
Real Time Video Tracking is a challenging task for computing professionals. The performance of video tracking techniques is greatly affected by the background detection and elimination process. Local regions of the image frame contain vital information about background and foreground. However, pixel-level processing of local regions consumes a good amount of computational time and memory space in traditional approaches. In our approach, we have explored the concurrent computational ability of General Purpose Graphic Processing Units (GPGPU) to address this problem. The Gaussian Mixture Model (GMM) with adaptive weighted kernels is used for detecting the background. The weights of the kernel are influenced by local regions and are updated by inter-frame variations of these corresponding regions. The proposed system has been tested with GPU devices such as the GeForce GTX 280 and Quadro K2000. The results are encouraging, with a maximum speed-up of 10X compared to the sequential approach.
Keywords: connected components, embrace threads, local weighted kernel, structuring elements
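A greatly simplified CPU sketch of per-pixel background modeling (the paper uses a full adaptive weighted-kernel GMM on GPGPU; here a single running Gaussian per pixel, with illustrative parameter values):

```python
import numpy as np

# Simplified background model: one running Gaussian per pixel.
# lr (learning rate) and k (threshold in std deviations) are assumptions.
def update_background(mean, var, frame, lr=0.05, k=2.5):
    """Update per-pixel mean/variance and return a foreground mask."""
    diff = frame - mean
    foreground = diff**2 > (k**2) * var      # pixels far from the model
    mean = mean + lr * diff                  # running mean update
    var = var + lr * (diff**2 - var)         # running variance update
    return mean, np.maximum(var, 1e-6), foreground

rng = np.random.default_rng(0)
mean = np.full((4, 4), 100.0)
var = np.full((4, 4), 25.0)
for _ in range(20):                          # static background frames
    frame = 100 + rng.normal(0, 2, (4, 4))
    mean, var, fg = update_background(mean, var, frame)
frame = 100 + rng.normal(0, 2, (4, 4))
frame[0, 0] = 200.0                          # a moving object appears
mean, var, fg = update_background(mean, var, frame)
```

The GPGPU gain reported in the abstract comes from running exactly this kind of independent per-pixel update concurrently across threads.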
Procedia PDF Downloads 440
19180 An Efficient Algorithm of Time Step Control for Error Correction Method
Authors: Youngji Lee, Yonghyeon Jeon, Sunyoung Bu, Philsu Kim
Abstract:
The aim of this paper is to construct an algorithm of time step control for the error correction method most recently developed by one of the authors for solving stiff initial value problems. It is achieved with the generalized Chebyshev polynomial and the corresponding error correction method. The main idea of the proposed scheme is the usage of duplicated node points in the generalized Chebyshev polynomials of two different degrees, adding the necessary sample points instead of re-sampling all points. At each integration step, the proposed method comprises two equations, for the solution and the error, respectively. The constructed algorithm controls both the error and the time step size simultaneously and performs well in computational cost compared to the original method. Two stiff problems are numerically solved to assess the effectiveness of the proposed scheme.
Keywords: stiff initial value problem, error correction method, generalized Chebyshev polynomial, node points
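The general idea of error-based time step control can be sketched with a simple embedded pair (the paper's controller is built on generalized Chebyshev polynomials; this Euler/Heun pair and the control constants are illustrative assumptions):

```python
import math

# Generic accept/reject time-step controller: estimate the local error from
# two methods of different order, accept the step if it is below tolerance,
# then rescale dt from the error estimate.
def integrate(f, t0, y0, t_end, tol=1e-6, dt=0.1):
    t, y = t0, y0
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        y_low = y + dt * k1                 # explicit Euler (order 1)
        y_high = y + dt * (k1 + k2) / 2.0   # Heun (order 2)
        err = abs(y_high - y_low)           # local error estimate
        if err <= tol:                      # accept the step
            t, y = t + dt, y_high
        # control law: grow/shrink dt from the error estimate
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# y' = -y, y(0) = 1  =>  y(1) = e^{-1}
y1 = integrate(lambda t, y: -y, 0.0, 1.0, 1.0)
```

The abstract's scheme differs in that the error equation is solved alongside the solution at each step, so the error estimate is intrinsic rather than obtained from a second integration.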
Procedia PDF Downloads 573
19179 Backstepping Design and Fractional Differential Equation of Chaotic System
Authors: Ayub Khan, Net Ram Garg, Geeta Jain
Abstract:
In this paper, the backstepping method is proposed to synchronize two fractional-order systems. The simulation results show that this method can effectively synchronize two chaotic systems.
Keywords: backstepping method, fractional order, synchronization, chaotic system
Procedia PDF Downloads 458
19178 Security Risks Assessment: A Conceptualization and Extension of NFC Touch-And-Go Application
Authors: Ku Aina Afiqah Ku Adzman, Manmeet Mahinderjit Singh, Zarul Fitri Zaaba
Abstract:
NFC operates at the low-range 13.56 MHz frequency over distances from 4 cm to 10 cm, and its applications can be categorized as touch and go, touch and confirm, touch and connect, and touch and explore. NFC applications are vulnerable to various security and privacy attacks due to their physical nature, unprotected data stored in NFC tags, and insecure communication between applications. This paper aims to determine the likelihood of security risks happening in NFC technology and applications. We present an NFC technology taxonomy covering NFC standards, types of application, and various security and privacy attacks. Based on observations and a survey, the risk assessment of the touch and go application identifies two high-risk security attacks, namely data corruption and DoS attacks. After the risks are determined, risk countermeasures are prioritized using AHP. The guidelines and solutions for these two high-risk attacks are then applied to a secure NFC-enabled Smartphone Attendance System.
Keywords: Near Field Communication (NFC), risk assessment, multi-criteria decision making, Analytical Hierarchy Process (AHP)
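The AHP step the abstract mentions derives priority weights from a pairwise comparison matrix; a minimal sketch (the judgment values below are hypothetical, not the paper's survey data):

```python
import numpy as np

# AHP priority derivation: weights from the principal eigenvector of a
# pairwise comparison matrix, plus the consistency ratio CR = CI / RI.
def ahp_weights(A):
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                            # normalize to priority weights
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)       # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
    return w, ci / ri

# Hypothetical pairwise judgments over three countermeasures.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
```

A consistency ratio below 0.1 is the conventional threshold for accepting the judgments as coherent.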
Procedia PDF Downloads 302
19177 Obtain the Stress Intensity Factor (SIF) in a Medium Containing a Penny-Shaped Crack by the Ritz Method
Authors: A. Tavangari, N. Salehzadeh
Abstract:
In crack growth analysis, the Stress Intensity Factor (SIF) is a fundamental prerequisite. In the present study, the mode I stress intensity factor (SIF) of a three-dimensional penny-shaped crack is obtained in an isotropic elastic cylindrical medium with arbitrary dimensions under arbitrary loading at the top of the cylinder, by a semi-analytical method based on the Rayleigh-Ritz method. This method, which is based on minimizing the potential energy of the whole system, gives results very close to those of previous studies. Defining the displacements (elastic fields) by hypothetical functions in a defined coordinate system is the basis of this research, so to create the singularity conditions at the tip of the crack, the appropriate terms should be found.
Keywords: penny-shaped crack, stress intensity factor, fracture mechanics, Ritz method
Procedia PDF Downloads 366
19176 Teaching Pragmatic Coherence in Literary Text: Analysis of Chimamanda Adichie’s Americanah
Authors: Joy Aworo-Okoroh
Abstract:
Literary texts are mirrors of real-life situations. Thus, authors choose the linguistic items that would best encode their intended meanings and messages. However, words mean more than they seem. The meaning of words is not static; rather, it is dynamic, as they constantly enter into relationships within a context. Literary texts can only be meaningful if all pragmatic cues are identified and interpreted. Drawing upon Teun van Dijk's theory of local pragmatic coherence, it is established that words enter into relations in a text, and these relations account for sequential speech acts in the text. Comprehension of the text is dependent on the interpretation of these relations. To show the relevance of pragmatic coherence in literary text analysis, ten conversations were selected from Americanah in order to give a clear idea of the pragmatic relations used. The conversations were analysed, identifying the speech act and epistemic relations inherent in them. A subtle analysis of the structure of the conversations was also carried out. It was discovered that justification is the most commonly used relation, and the meaning of the text is dependent on the interpretation of these instances of pragmatic coherence. The study concludes that to teach literature in English effectively, pragmatic coherence should be incorporated, as words mean more than they say.
Keywords: pragmatic coherence, epistemic coherence, speech act, Americanah
Procedia PDF Downloads 136
19175 Degradation of Polycyclic Aromatic Hydrocarbons-Contaminated Soil by Proxy-Acid Method
Authors: Reza Samsami
Abstract:
The aim of the study was the degradation of polycyclic aromatic hydrocarbons (PAHs) by the proxy-acid method. The amounts of PAHs were determined in a silty-clay soil sample from an aged oil refinery field in Abadan, Iran. The proxy-acid treatment method was investigated. The results have shown that the proxy-acid system is an effective method for the degradation of PAHs. The results also demonstrated that the number of fused aromatic rings has no significant effect on PAH removal by the proxy-acid method.
Keywords: proxy-acid treatment, silty-clay soil, PAHs, degradation
Procedia PDF Downloads 267
19174 Groundwater Geophysical Studies in the Developed and Sub-Urban BBMP Area, Bangalore, Karnataka, South India
Authors: G. Venkatesha, Urs Samarth, H. K. Ramaraju, Arun Kumar Sharma
Abstract:
The projection for groundwater states that the total domestic water demand for greater Bangalore would increase from 1,170 MLD in 2010 to 1,336 MLD in 2016. Dependence on groundwater is ever increasing due to rapid industrialization and urbanization. It is estimated that almost 40% of the population of Bangalore is dependent on groundwater. Due to the unscientific disposal of the domestic and industrial waste generated, groundwater is getting highly polluted in the city. The scale of this impact depends mainly upon the water-service infrastructure, the superficial geology and the regional setting. The quality of groundwater is as important as its quantity. Jointed and fractured granites and gneisses constitute the major aquifer system of the BBMP area. Two new observatory borewells were drilled and a lithology report was prepared. Petrographic analysis (XRD/XRF) and water quality analysis were carried out as per the standard methods. Petrographic samples were analysed by collecting a chip of rock from the borewell at every 20 ft of depth; most of the samples were similar and were identified as biotite gneiss and schistose amphibolite. Water quality analysis was carried out for individual chemical parameters for the two borewells drilled. The first borewell struck water at 150 ft (total depth 200 ft) and the second at 740 ft (total depth 960 ft). Five water samples were collected down to the end of depth in each borewell. Chemical parameter values such as total hardness (360-348, 280-320) mg/l, nitrate (12.24-13.5, 45-48) mg/l, chloride (104-90, 70-70) mg/l and Fe (0.75-0.09, 1.288-0.312) mg/l were recorded, respectively. Water samples were analysed from various parts of BBMP covering 750 sq km, and thematic maps (IDW method) of water quality were generated for these samples for the post-monsoon season.
The study aims to explore the sub-surface lithological layers and the thickness of the weathered zone, which indirectly helps to identify groundwater pollution sources near surface water bodies, dug wells, etc. The above data are interpreted for future groundwater resources planning and management.
Keywords: lithology, petrographic, pollution, urbanization
Procedia PDF Downloads 293
19173 Research of Possibilities to Influence the Metal Cross-Section Deformation during Cold Rolling with the Help of Local Deformation Zone Creation
Authors: A. Pesin, D. Pustovoytov, A. Kolesnik, M. Sverdlik
Abstract:
Rolling disturbances often arise which might lead to defects such as non-flatness, warpage, corrugation, etc. Numerous methods of compensation for such disturbances are well known. However, most of them preserve the initial form of transverse flow of the strip, such as convex, concave or asymmetric (for example, sphenoid). Sometimes, the form inherited (especially asymmetric) is undesirable. Technical solutions have been developed which include providing conditions for transverse metal flow in the deformation zone. It should be noted that greater reduction is followed by an increase in transverse flow, while less reduction causes a corresponding decrease in metal flow, so that differently deformed metal lengths remain approximately the same and the defects mentioned above are avoided. One of the solutions suggests sequentially deforming the strip from a rectangular cross-section profile into one with periodic rectangular grooves and back into a rectangular profile again. The work was carried out in the DEFORM 3D program complex. Experimental rolling was performed on laboratory mill 150. Comparison of experimental and theoretical results demonstrated good correlation.
Keywords: FEM, cross-section deformation, mechanical engineering, applied mechanics
Procedia PDF Downloads 348
19172 Analysis of Scaling Effects on Analog/RF Performance of Nanowire Gate-All-Around MOSFET
Authors: Dheeraj Sharma, Santosh Kumar Vishvakarma
Abstract:
We present a detailed analysis of analog and radio frequency (RF) performance for different gate lengths of the nanowire cylindrical gate (CylG) gate-all-around (GAA) MOSFET. The CylG GAA MOSFET not only suppresses the short channel effects (SCEs), but is also a good candidate for analog/RF devices due to its high transconductance (gm) and high cutoff frequency (fT). The presented work would be beneficial for a new generation of RF circuits and systems in a broad range of applications and operating frequencies covering the RF spectrum. For this purpose, the analog/RF figures of merit of the CylG GAA MOSFET are analyzed in terms of gate to source capacitance (Cgs), gate to drain capacitance (Cgd), transconductance generation factor (gm/Id, where Id represents the drain current), intrinsic gain, output resistance, fT, maximum frequency of oscillation (fmax) and gain bandwidth (GBW) product.
Keywords: Gate-All-Around MOSFET, GAA, output resistance, transconductance generation factor, intrinsic gain, cutoff frequency, fT
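The transconductance generation factor gm/Id is extracted by differentiating the Id-Vg characteristic; a sketch on synthetic data (the exponential subthreshold curve below is illustrative, not simulated GAA MOSFET output):

```python
import numpy as np

# Extract gm/Id from a synthetic Id-Vg curve. For a pure exponential
# Id = I0 * exp(Vg / n*Vt) with n*Vt = 0.1 V, gm/Id is constant at 10 1/V.
vg = np.linspace(0.0, 1.0, 101)            # gate voltage sweep (V)
id_ = 1e-6 * np.exp(vg / 0.1)              # subthreshold-like current (A)
gm = np.gradient(id_, vg)                  # transconductance dId/dVg (S)
gm_over_id = gm / id_                      # generation factor (1/V)
```

In practice gm/Id is highest in weak inversion and drops toward strong inversion, which is why it serves as a figure of merit for low-power analog biasing.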
Procedia PDF Downloads 397
19171 Critical Activity Effect on Project Duration in Precedence Diagram Method
Authors: Salman Ali Nisar, Koshi Suzuki
Abstract:
The Precedence Diagram Method (PDM), with its additional relationships, i.e., start-to-start, finish-to-finish, and start-to-finish, between activities provides a more flexible schedule than the traditional Critical Path Method (CPM). However, changing the duration of critical activities in a PDM network can have an anomalous effect on the critical path. Researchers have proposed some classifications of critical activity effects. In this paper, we study the classifications of critical activity effects further and provide more detailed information. Furthermore, we determine the maximum amount of time for each class of critical activity effect, by which project managers can control the dynamic feature (shortening/lengthening) of critical activities and project duration more efficiently.
Keywords: construction project management, critical path method, project scheduling, precedence diagram method
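The CPM baseline that PDM extends can be sketched with a forward/backward pass over a toy network (finish-to-start links only; PDM would add SS/FF/SF relations and lags; the activity data are hypothetical):

```python
# CPM forward/backward pass: earliest/latest times, total float, and the
# critical path (zero-float activities). FS relations only.
def cpm(durations, preds):
    order = list(durations)                       # assume topological order
    es, ef = {}, {}
    for a in order:                               # forward pass
        es[a] = max((ef[p] for p in preds[a]), default=0)
        ef[a] = es[a] + durations[a]
    project = max(ef.values())
    succs = {a: [b for b in order if a in preds[b]] for a in order}
    lf, ls = {}, {}
    for a in reversed(order):                     # backward pass
        lf[a] = min((ls[s] for s in succs[a]), default=project)
        ls[a] = lf[a] - durations[a]
    floats = {a: ls[a] - es[a] for a in order}
    critical = [a for a in order if floats[a] == 0]
    return project, critical, floats

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
duration, critical, total_float = cpm(durations, preds)
```

Here lengthening B by up to its total float of 2 leaves the project duration unchanged; the paper's contribution is quantifying the analogous bounds when the richer PDM relations make such changes behave anomalously.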
Procedia PDF Downloads 511
19170 What Lies Beneath: Kanti Shah’s Children of Midnight
Authors: Vibhushan Subba
Abstract:
B-movies are almost always ‘glanced over’, ‘swept beneath’, ‘hidden from’ and ‘locked away’ to live a secret life; a life that exists but enjoys only a mummified existence behind layers of protective covering. They are more often than not discarded as ‘trash’, ‘sleaze’, ‘porn’ and put down for their ‘bad taste’, or at least that has been the case in India. With the art film entering the realm of high art, the popular and the mainstream have been increasingly equated with the A-grade Bollywood film. This leaves the B-movie to survive as a degraded cultural artifact on the fringes of the mainstream. Kanti Shah’s films are part of a secret, traversing the libidinal circuits of the B and C grade through history. His films still circulate like a corporeal reminder of the forbidden and the taboo, like a hidden fracture that threatens to split open bourgeois respectability. Seeking to find answers to an aesthetic that has been rejected and hidden, this paper looks at three films of Kanti Shah to see how the notions of taboo, censorship and the unseen coincide, how they operate in the domain of his cinema, and tries to understand a form that draws our attention to the subterranean forces at work.
Keywords: B-movies, trash, taboo, censorship
Procedia PDF Downloads 461
19169 Urban Impervious and its Impact on Storm Water Drainage Systems
Authors: Ratul Das, Udit Narayan Das
Abstract:
Surface imperviousness in urban areas brings significant changes to storm water drainage systems, and some recent studies reveal that the impervious surface that passes storm water runoff directly to drainage systems through storm water collection systems, called directly connected impervious area (DCIA), is a more effective parameter than total impervious area (TIA) for the computation of surface runoff. In the present study, the extents of DCIA and TIA were computed for a small sub-urban area of Agartala, the capital of the state of Tripura. The total impervious surfaces covering the study area were identified on the existing storm water drainage map from the land use map of the study area in association with field assessments. Also, DCIA assessed through field survey was compared to DCIA computed by empirical relationships provided by other investigators. Two methods were adopted for the assessment of DCIA in the study area. First, the study area was partitioned into four drainage sub-zones based on average basin slope and the layout of existing storm water drainage systems. In the second method, the entire study area was divided into small grids, each grid or parcel comprising a 20 m × 20 m area. Total impervious surfaces were delineated from the land use map in association with on-site assessments for efficient determination of DCIA within each sub-area and grid. There was a wide variation in percent connectivity of TIA across each sub-drainage zone and grid. In the present study, total impervious area comprises 36.23% of the study area, of which 21.85% of the total study area is connected to storm water collection systems. Total pervious area (TPA) and others comprise 53.20% and 10.56% of the total area, respectively. TIA recorded by field assessment (36.23%) was considerably higher than that calculated from the available land use map (22%). From the analysis of recorded data, it is observed that the average percentage of connectivity (DCIA with respect to TIA) is 60.31%.
The analysis also reveals that the observed DCIA lies below the line of optimal impervious surface connectivity for a sub-urban area provided by other investigators, which indicates the probable reason for waterlogging conditions in many parts of the study area during the monsoon period.
Keywords: drainage, imperviousness, runoff, storm water
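The connectivity figure quoted in the abstract can be verified directly from the reported area shares, since percent connectivity is simply DCIA expressed as a share of TIA:

```python
# Reproduce the reported average connectivity from the abstract's figures.
tia_pct = 36.23     # total impervious area, % of study area
dcia_pct = 21.85    # directly connected impervious area, % of study area
connectivity = 100 * dcia_pct / tia_pct
# connectivity evaluates to about 60.31 %, matching the reported average
```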
Procedia PDF Downloads 351
19168 Anaerobic Soil Disinfestation: Feasible Alternative to Soil Chemical Fumigants
Authors: P. Serrano-Pérez, M. C. Rodríguez-Molina, C. Palo, E. Palo, A. Lacasa
Abstract:
Phytophthora nicotianae is the principal causal agent of root and crown rot disease of red pepper plants in Extremadura (Western Spain). There is a need to develop a biologically-based method of soil disinfestation that facilitates profitable and sustainable production without the use of chemical fumigants. Anaerobic Soil Disinfestation (ASD), also known as biodisinfestation, has been shown to control a wide range of soil-borne pathogens and nematodes in numerous crop production systems. This method involves soil wetting, incorporation of an easily decomposable carbon-rich organic amendment, and covering with plastic film for several weeks. ASD with rapeseed cake (var. Tocatta, a glucosinolate-free variety) used as the C-source was assayed in spring 2014, before the pepper crop establishment. The field experiment was conducted at the Agricultural Research Centre Finca La Orden (Southwestern Spain), and the treatments were: rapeseed cake (RCP); rapeseed cake without plastic cover (RC); non-amended control (CP); and non-amended control without plastic cover (C). The experimental design was a randomized complete block design with four replicates and a plot size of 5 x 5 m. On 26 March, rapeseed cake (1 kg·m-2) was incorporated into the soil with a rotovator. Biological probes with the inoculum were buried at 15 and 30 cm depth (the biological probes were previously prepared with 100 g of disinfected soil inoculated with chlamydospores (chlam) of P. nicotianae isolate P13 [100 chlam·g-1 of soil] and wrapped in agryl cloth). Sprinkler irrigation was run until field capacity, and the corresponding plots were covered with transparent plastic (PE 0.05 mm). On 6 May, the plastics were removed, the biological probes were dug out, and a bioassay was established. One pepper seedling at the 2 to 4 true-leaves stage was transplanted into the soil from each biological probe. Plants were grown in a climatic chamber, and disease symptoms were recorded every week for 2 months.
Fragments of roots and crowns of symptomatic plants were analyzed on NARPH medium, and soil from rhizospheres was analyzed using carnation petals as baits. Results of “survival” were expressed as the percentage of soil samples where P. nicotianae was detected, and results of “infectivity” were expressed as the percentage of diseased plants. No differences were detected between burial depths. Infectivity of P. nicotianae chlamydospores was successfully reduced in the RCP treatment (4.2% infectivity) compared with the controls (41.7% infectivity). The pattern of survival was similar to the infectivity observed in the bioassay: 21% survival in RCP; 79% in CP; 83% in C; and 87% in RC. Although ASD may be an effective alternative to chemical fumigants for pest management, more research is necessary to show its impact on the microbial community and chemistry of the soil.
Keywords: biodisinfestation, BSD, soil fumigant alternatives, organic amendments
Procedia PDF Downloads 217
19167 Economic Analysis of a Carbon Abatement Technology
Authors: Hameed Rukayat Opeyemi, Pericles Pilidis Pagone Emmanuele, Agbadede Roupa, Allison Isaiah
Abstract:
Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959, and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero-emission power plant. The advanced zero-emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899 when Walther Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low carbon cycle known as the advanced zero-emission power plant (AZEP cycle). The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP drew a lot of attention because of its ability to capture ~100% CO2; it also boasts about 30-50% cost reduction compared to other carbon abatement technologies, its efficiency penalty is not as large as that of its counterparts, and it offers almost zero NOx emissions due to very low nitrogen concentrations in the working fluid. The advanced zero-emission power plant differs from a conventional gas turbine in the sense that its combustor is substituted with the mixed conductive membrane (MCM) reactor.
The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air preheater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to the inlet of the LTHX. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle was developed using Fortran software, and economic analysis was conducted using Excel and Matlab, followed by an optimization case study. Four layouts were considered: the simple bleed gas heat exchange layout (100% CO2 capture); the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture); the pre-expansion reheating (sequential burning) layout, AZEP 85% (85% CO2 capture); and the pre-expansion reheating (sequential burning) layout with flue gas turbine, AZEP 85% (85% CO2 capture). This paper discusses a Monte Carlo risk analysis of these four possible layouts of the AZEP cycle.
Keywords: gas turbine, global warming, green house gas, fossil fuel power plants
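The quoted compressor exit state (~723 K at 2 MPa) can be sanity-checked with ideal-gas compression from ambient; the inlet state and the 0.88 isentropic efficiency below are assumptions for illustration, not values from the paper:

```python
# Back-of-the-envelope check of the compressor exit temperature for
# compression from ambient to the quoted 2 MPa.
T1, P1 = 288.15, 101_325.0     # assumed ambient inlet (K, Pa)
P2 = 2.0e6                     # quoted exit pressure (Pa)
gamma = 1.4                    # ratio of specific heats for air
eta_c = 0.88                   # assumed compressor isentropic efficiency

T2s = T1 * (P2 / P1) ** ((gamma - 1) / gamma)   # isentropic exit temperature
T2 = T1 + (T2s - T1) / eta_c                     # actual exit temperature
```

With these assumptions T2 comes out in the low 700s K, consistent with the ~723 K stated in the abstract.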
Procedia PDF Downloads 397
19166 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton
Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani
Abstract:
Extreme data in an observation can occur due to unusual circumstances. Such data can provide important information that cannot be provided by other data, so their existence needs to be further investigated. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the distribution of extreme values. The distribution of extreme values is the Gumbel distribution with two parameters. The parameter estimates of the Gumbel distribution with the maximum likelihood (ML) method are difficult to determine exactly, so a numerical approach is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method used for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has the form of a double exponential function. The quasi-Newton BFGS method is a development of the Newton method. The Newton method uses the second derivative to calculate the parameter value changes in each iteration. Newton's method is then modified with the addition of a step length to provide a guarantee of convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating an approximation of the derivatives in each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function. For this method, we need the gradient vector and the Hessian matrix. This research is theoretical research with an application, based on a study of several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and estimates of the Gumbel distribution parameters.
The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The estimates indicate that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall that occurred decreased.
Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton
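Gumbel MLE via BFGS can be sketched with scipy's quasi-Newton implementation (the paper derives its own BFGS algorithm, and the Purworejo rainfall data are replaced here by synthetic samples with known parameters):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic Gumbel(max) samples via inverse-CDF sampling:
# F(x) = exp(-exp(-(x - mu) / beta))  =>  x = mu - beta * log(-log(u)).
rng = np.random.default_rng(42)
mu_true, beta_true = 50.0, 10.0
u = rng.uniform(size=5000)
x = mu_true - beta_true * np.log(-np.log(u))

def nll(params):
    """Negative log-likelihood: n*log(beta) + sum(z) + sum(exp(-z))."""
    mu, log_beta = params
    beta = np.exp(log_beta)                 # parameterize to keep beta > 0
    z = (x - mu) / beta
    return len(x) * log_beta + np.sum(z + np.exp(-z))

res = minimize(nll, x0=[x.mean(), np.log(x.std())], method="BFGS")
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
```

BFGS builds its Hessian approximation from successive gradients, which is exactly the appeal described in the abstract: Newton-quality steps without forming the exact second derivatives of the double exponential likelihood.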
Procedia PDF Downloads 324
19165 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely acceptable approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively reweighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; the log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta method SEs; and marginal standardisation with permutation test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships.
Treatment effects (ranging over 0, -0.5, and 1 on the log scale) consider null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
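The marginal standardisation candidate can be sketched on synthetic trial data: fit a logistic model, then average predicted risks with treatment forced to 1 and to 0 (the data-generating values below are illustrative, not the study's simulation scenarios):

```python
import numpy as np

# Logistic regression by Newton-Raphson (IRLS), coded directly for a
# self-contained sketch.
def fit_logistic(X, y, iters=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

# Synthetic randomised trial: treatment raises risk on the logit scale.
rng = np.random.default_rng(1)
n = 20000
treat = rng.integers(0, 2, n)
covar = rng.normal(size=n)
logit = -1.0 + 0.7 * treat + 0.5 * covar
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), treat, covar])
beta = fit_logistic(X, y)

def mean_risk(t):
    Xt = X.copy()
    Xt[:, 1] = t                      # force everyone to treatment level t
    return np.mean(1 / (1 + np.exp(-Xt @ beta)))

rr = mean_risk(1) / mean_risk(0)      # marginally standardised relative risk
```

This marginal estimand averages over the observed covariate distribution, which is what distinguishes it from the conditional relative risk targeted by the log-binomial GLM.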
Procedia PDF Downloads 114
19164 Geographic Information System and Ecotourism Sites Identification of Jamui District, Bihar, India
Authors: Anshu Anshu
Abstract:
In the red corridor famed for Left Wing Extremism lies the small district of Jamui in Bihar, India. The district lies at 24º20´ N latitude and 86º13´ E longitude, covering an area of 3,122.8 km². The undulating topography, with widespread forests, provides a pristine environment for an invigorating experience for tourists. The natural landscape in the form of forests, wildlife and rivers, and the cultural landscape dotted with historical and religious places, are highly purposive for tourism. The study is primarily related to the identification of potential ecotourism sites using a Geographic Information System. Data preparation, analysis and finally the identification of ecotourism sites are carried out. The secondary data used are Survey of India topographical sheets at R.F. 1:50,000 covering the area of Jamui district. The District Census Handbook, Census of India, 2011, ERDAS Imagine and ArcView are used for digitization and the creation of DEMs (Digital Elevation Models) of the district, depicting the relief and topography, and for generating thematic maps. The thematic maps have been refined using geo-processing tools. The buffer technique has been used for accessibility analysis. Finally, all the maps, including the buffer maps, were overlaid to find out the areas which have potential for the development of ecotourism sites in Jamui district. Spatial data (relief, slopes, settlements, transport network and forests of Jamui district) were marked and identified, followed by buffer analysis, which was used to find out the accessibility of features like roads and railway stations to the sites available for the development of ecotourism destinations. Buffer analysis was also carried out to get the spatial proximity of major river banks, lakes, and dam sites to be selected for promoting sustainable ecotourism. Overlay analysis was conducted using geo-processing tools.
A Digital Elevation Model (DEM) was generated, and relevant themes such as roads, forest areas and settlements were draped on the DEM to assess the topography and other land uses of the district and to delineate potential zones of ecotourism development. The development of ecotourism in Jamui faces several challenges. The district lies in the portion of Bihar that forms part of India's 'red corridor'; the hills and dense forests are prominent hideouts and training grounds for extremists. It is well known that political instability, war and acts of violence directly influence the propensity to travel and hinder all non-essential travel to such areas. The development of ecotourism in the district can bring change and overall growth, with communities becoming more involved in economically sustainable activities. Poverty and social exclusion are the main forces that push people towards violence, and all over the world tourism has been used as a tool to eradicate poverty and generate goodwill among people. Tourism, in a sustainable form, should therefore be promoted in the district to integrate local communities into the development process and to distribute the fruits of development with equity.
Keywords: buffer analysis, digital elevation model, ecotourism, red corridor
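The buffer analysis described above amounts to testing whether a candidate site lies within a given distance of a linear feature such as a road or railway. A minimal pure-Python sketch of that proximity test follows (hypothetical coordinates in kilometres; the actual study used ArcView's buffer tooling):

```python
import math

def point_segment_distance(p, a, b):
    # shortest distance from point p to the line segment a-b
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # project p onto the segment, clamping to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def within_buffer(site, road, radius):
    # road: polyline as a list of vertices; the buffer is the set of all
    # points within `radius` of any segment of the polyline
    return any(point_segment_distance(site, road[i], road[i + 1]) <= radius
               for i in range(len(road) - 1))

road = [(0, 0), (10, 0)]                  # hypothetical road alignment
print(within_buffer((5, 2), road, 3.0))   # True: site inside the 3 km buffer
print(within_buffer((5, 5), road, 3.0))   # False: site outside the buffer
```

Overlaying several such buffers (roads, railway stations, rivers) then reduces to requiring `within_buffer` to hold for each feature class at once.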
Procedia PDF Downloads 259
19163 A Mixed Methods Study Aimed at Exploring the Conceptualization of Orthorexia Nervosa on Instagram
Authors: Elena V. Syurina, Sophie Renckens, Martina Valente
Abstract:
Objective: The objective of this study was to investigate the nature of the conversation around orthorexia nervosa (ON) on Instagram. Methods: The study was conducted using mixed methods, combining a concurrent triangulation and a sequential explanatory design. First, 3027 pictures posted on Instagram under #Orthorexia were analyzed. Then, a questionnaire about Instagram use related to ON was completed in full by 185 respondents. These two quantitative data sources were statistically analyzed and then triangulated. Finally, 9 interviews were conducted to investigate more deeply what is being said about ON on Instagram and the motivations for posting about it. Results: Four main categories of pictures were represented in Instagram posts about ON: 'food', 'people', 'text', and 'other'. Savoury and unprocessed food was most highly represented within the food category, and pictures of people were mostly of the account holder. People who self-identify as having ON were more likely to post about ON, and significantly more likely to post 'food', 'people' and 'text' pictures. The goal of the posts was to raise awareness of ON and to provide support for people who believe they are suffering from it. Conclusion: Since the conversation around ON on Instagram is supportive, it could be beneficial to consider Instagram use in the treatment of ON. However, more research is needed on a larger scale.
Keywords: orthorexia nervosa, Instagram, social media, disordered eating
Procedia PDF Downloads 138
19162 Implementation of a Method of Crater Detection Using Principal Component Analysis in FPGA
Authors: Izuru Nomura, Tatsuya Takino, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata
Abstract:
We propose a method of crater detection from images of the lunar surface captured by a small space probe, using principal component analysis (PCA) to detect the craters. Nevertheless, considering the severe environment of space, it is impractical to use a generic computer, so we implement the method in an FPGA. This paper compares the FPGA implementation with a generic computer in terms of the processing time of the crater detection method.
Keywords: crater, PCA, eigenvector, strength value, FPGA, processing time
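The core of the PCA step is extracting the dominant eigenvector of the covariance matrix of the data. A minimal two-dimensional sketch in pure Python is shown below (illustrative only; the paper's FPGA pipeline operates on image vectors, and a 2×2 symmetric eigenproblem can be solved in closed form):

```python
import math

def principal_component(points):
    # 2-D PCA: mean-centre, build the covariance matrix, return its
    # largest eigenvalue and the corresponding unit eigenvector
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # eigenvalues of [[sxx, sxy], [sxy, syy]] from trace and determinant
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2.0 + math.sqrt(tr * tr / 4.0 - det)   # largest eigenvalue
    if abs(sxy) > 1e-12:
        vx, vy = sxy, lam - sxx                       # (A - lam*I) null vector
    else:                                             # covariance already diagonal
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return lam, (vx / norm, vy / norm)

pts = [(t, 0.5 * t) for t in range(10)]   # synthetic points along y = x/2
lam, v = principal_component(pts)
print(v)  # direction (2, 1)/sqrt(5), the axis of maximum variance
```

The strength value mentioned in the keywords would, in this sketch, correspond to the projection of a sample onto `v`.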
Procedia PDF Downloads 555
19161 MapReduce Logistic Regression Algorithms with RHadoop
Authors: Byung Ho Jung, Dong Hoon Lim
Abstract:
Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. It is used extensively in numerous disciplines, including the medical and social sciences. In this paper, we address the problem of estimating the parameters of a logistic regression within the MapReduce framework using RHadoop, which integrates R with the Hadoop environment and is applicable to large-scale data. There are three learning algorithms for logistic regression: the gradient descent method, the cost minimization method and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods need a manually chosen learning rate. The experimental results demonstrate that our learning algorithms using RHadoop scale well and efficiently process large data sets on commodity hardware. We also compared the performance of the Newton-Raphson method with the gradient descent and cost minimization methods; the Newton-Raphson method proved the most robust across all data tested.
Keywords: big data, logistic regression, MapReduce, RHadoop
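The learning-rate distinction the authors draw can be seen in a minimal gradient-descent fit of a one-feature logistic model (a pure-Python sketch with made-up data; the RHadoop/MapReduce distribution of the sums is omitted, and `lr` is the hand-picked learning rate that Newton-Raphson would not need):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic_gd(xs, ys, lr=0.1, epochs=2000):
    # gradient descent on the log-loss of p = sigmoid(w*x + b);
    # each gradient is an average over the data, which is exactly the
    # per-record sum a MapReduce job would compute in parallel
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        gb = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]   # toy feature values
ys = [0, 0, 0, 1, 1, 1]               # toy binary outcomes
w, b = fit_logistic_gd(xs, ys)
print(sigmoid(w * 1.0 + b) < 0.5, sigmoid(w * 3.5 + b) > 0.5)  # True True
```

Newton-Raphson would instead update `w, b` by the inverse Hessian times the gradient, converging in far fewer iterations with no `lr` to tune.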
Procedia PDF Downloads 284
19160 Red Meat Price Volatility and Its' Relationship with Crude Oil and Exchange Rate
Authors: Melek Akay
Abstract:
Turkey's agricultural commodity prices are prone to fluctuation but have risen gradually over time. A considerable amount of literature examines the changes in these prices in relation to other commodities, such as energy, and the links between agricultural and energy markets have therefore been extensively investigated. Since red meat prices are becoming increasingly volatile in Turkey, this paper analyses the price volatility of veal and lamb and the relationship between red meat prices, crude oil and exchange rates, applying a generalized all-period unconstrained volatility model, which generalizes the GARCH(p, q) model, to weekly data covering the period May 2006 to February 2017. Empirical results show that veal and lamb prices exhibited volatility during the last decade, particularly between 2009 and 2012. Moreover, oil prices, including their lagged values, have a significant effect on veal and lamb prices. Consequently, our research can help policy makers evaluate policy implementation appropriately and reduce the impact of oil prices by supporting producers.
Keywords: red meat price, volatility, crude oil, exchange rates, GARCH models, Turkey
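The volatility recursion underlying GARCH-type models can be sketched directly (plain GARCH(1,1) with assumed parameters for illustration, not the authors' generalized specification or their estimated coefficients):

```python
def garch11_variance(returns, omega=0.05, alpha=0.1, beta=0.85):
    # conditional variance recursion: h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}
    h = [omega / (1.0 - alpha - beta)]   # start at the unconditional variance
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

returns = [0.0, 0.0, 2.0, 0.0, 0.0]   # made-up weekly returns with one shock
h = garch11_variance(returns)
print(h[3] > h[2])   # True: conditional variance jumps after the large return
```

The persistence `alpha + beta` close to 1 is what produces the volatility clustering the abstract reports for 2009-2012: a shock raises the conditional variance and the elevated variance decays only slowly.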
Procedia PDF Downloads 122
19159 Impact of Financial Factors on Total Factor Productivity: Evidence from Indian Manufacturing Sector
Authors: Lopamudra D. Satpathy, Bani Chatterjee, Jitendra Mahakud
Abstract:
Rapid economic growth in terms of output and investment necessitates substantial growth in firms' Total Factor Productivity (TFP), an indicator of an economy's technological change. The strong empirical relationship between financial sector development and economic growth clearly indicates that firms' financing decisions affect their levels of output via their investment decisions, establishing a linkage between financial factors and firms' productivity growth. To achieve smooth and continuous economic growth over time, it is imperative to understand this financial channel, which serves as one of the vital channels. The theoretical argument behind the linkage is that when internal financial capital is insufficient for investment, firms rely upon external sources of finance. But due to frictions and asymmetric information, it is costlier for firms to raise external capital from the market, which in turn affects their investment sentiment and productivity. This financial position puts heavy pressure on their productive activities. Against this theoretical background, the present study analyzes the role of both external and internal financial factors (leverage, cash flow and liquidity) in determining the total factor productivity of firms in the manufacturing industry and its sub-industries, with a set of firm-specific control variables (size, age and disembodied technological intensity). An estimate of the total factor productivity of the Indian manufacturing industry and its sub-industries is computed using a semi-parametric approach, the Levinsohn-Petrin method. The relationship between financial factors and the productivity growth of 652 firms is established using a dynamic panel GMM method covering the period from 1997-98 to 2012-13.
From the econometric analyses, it has been found that internal cash flow has a positive and significant impact on the productivity of the overall manufacturing sector. The other financial factors, leverage and liquidity, also play a significant role in determining the total factor productivity of the Indian manufacturing sector. The significant role of internal cash flow in determining firm-level productivity suggests that external finance is not easily available to Indian companies. Further, the negative impact of leverage on productivity could be due to the less developed bond market in India. These findings imply that policy makers should undertake reforms to develop the external bond market, through which financially constrained companies would be able to raise capital cost-effectively and channel their investments into highly productive activities, helping to accelerate economic growth.
Keywords: dynamic panel, financial factors, manufacturing sector, total factor productivity
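As a simple illustration of what estimating TFP means, the textbook Cobb-Douglas residual can be computed directly (a toy sketch with an assumed capital share; the paper instead estimates the input elasticities semi-parametrically via Levinsohn-Petrin to deal with simultaneity):

```python
def tfp_solow_residual(output, capital, labor, alpha=0.3):
    # TFP as the Cobb-Douglas residual A = Y / (K^alpha * L^(1-alpha));
    # alpha (the capital share) is assumed here, not estimated
    return output / (capital ** alpha * labor ** (1.0 - alpha))

# hypothetical firm-year observation
print(tfp_solow_residual(output=100.0, capital=200.0, labor=50.0))
```

Holding inputs fixed, anything that raises output, for instance better access to finance enabling more productive investment, shows up one-for-one in this residual, which is the channel the paper tests.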
Procedia PDF Downloads 332
19158 An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries
Authors: Evangelos G. Karvelas, Christos Liosis, Andreas Theodorakakos, Theodoros E. Karakasidis
Abstract:
In the present work, a numerical method is presented for estimating the gradient magnetic fields that optimally drive particles into a desired area inside the human body. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM) and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that aims to eliminate the deviation of the nanoparticles from a desired path: the gradient magnetic field is constantly adjusted so that the particles follow the desired trajectory as closely as possible. The results show that the particle diameter is a crucial parameter for efficient navigation, with larger diameters reducing the deviation from the desired path. Moreover, the navigation method can steer nanoparticles into the desired areas with an efficiency of approximately 99%.
Keywords: computational fluid dynamics, CFD, covariance matrix adaptation evolution strategy, discrete element method, DEM, magnetic navigation, spherical particles
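The iterative correction loop can be caricatured with a much simpler evolution strategy than full CMA-ES (which also adapts a covariance matrix): a (1+1)-ES with step-size adaptation, minimizing the squared deviation of a hypothetical one-dimensional linear response model that stands in for the CFD-DEM simulation.

```python
import random

def deviation(field, desired=1.0):
    # hypothetical objective: squared gap between the particle position
    # predicted by a toy linear response model and the desired position
    position = 0.8 * field            # stand-in for the CFD-DEM simulation
    return (position - desired) ** 2

def one_plus_one_es(objective, x0=0.0, sigma=0.5, iters=300, seed=1):
    # simplified (1+1) evolution strategy with 1/5th-success-style
    # step-size control; CMA-ES generalizes this to full covariances
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    for _ in range(iters):
        cand = x + rng.gauss(0.0, sigma)
        fc = objective(cand)
        if fc <= fx:                  # keep the candidate if deviation shrinks
            x, fx = cand, fc
            sigma *= 1.1
        else:
            sigma *= 0.9
    return x, fx

field, err = one_plus_one_es(deviation)
print(field)   # should approach 1.25, where 0.8 * field == 1.0
```

In the paper's setting, `deviation` would be the particle-trajectory error returned by the coupled CFD-DEM solve, and the optimizer would search over the gradient-field parameters at each time step.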
Procedia PDF Downloads 142
19157 Meta-Instruction Theory in Mathematics Education and Critique of Bloom’s Theory
Authors: Abdollah Aliesmaeili
Abstract:
The purpose of this research is to present a different perspective on basic mathematics teaching called meta-instruction, which reverses the learning path: the teaching trajectory starts from brain education and moves into learning. The research focuses on the behavior of the mind during learning. In this method, students are not instructed in mathematics but educated; it is an indirect method of teaching. A further goal of the research is to criticize Bloom's classification in the cognitive domain and reverse it, because it cannot meet the educational and instructional needs of the new generation, and to substitute mathematics education for mathematics teaching. The research method is longitudinal, spanning four years, with statistical samples of students aged 6 to 11. The research focuses on improving children's mental abilities to explore mathematical rules and operations through play alone, with eight measurements (two examinations in each year). The results showed significant differences between groups in remembering, understanding and applying. Moreover, mathematics education proved more effective than instruction for overall learning ability.
Keywords: applying, Bloom's taxonomy, brain education, mathematics teaching method, meta-instruction, remembering, starmath method, understanding
Procedia PDF Downloads 23
19156 Effect of Type of Pile and Its Installation Method on Pile Bearing Capacity by Physical Modelling in Frustum Confining Vessel
Authors: Seyed Abolhasan Naeini, M. Mortezaee
Abstract:
Various factors, such as the installation method, the pile type, the pile material and the pile shape, can affect the final bearing capacity of a pile driven into soil; among them, the installation method is of special importance. Physical modeling is among the best options for the laboratory study of pile behavior. The current paper therefore first presents and reviews the frustum confining vessel (FCV) as a suitable tool for the physical modeling of deep foundations. Then, by describing loading tests of open-ended and closed-ended steel piles, each installed by two methods, "with displacement" and "without displacement", the effect of end conditions and installation method on the final bearing capacity of the pile is investigated. The soil used is Firoozkooh silty sand. The experimental results show that, in general, the without-displacement installation method yields a larger bearing capacity for both piles, and that for a given installation method the closed-ended pile shows a slightly higher bearing capacity.
Keywords: physical modeling, frustum confining vessel, pile, bearing capacity, installation method
Procedia PDF Downloads 153
19155 Seismic Fragility Functions of RC Moment Frames Using Incremental Dynamic Analyses
Authors: Seung-Won Lee, JongSoo Lee, Won-Jik Yang, Hyung-Joon Kim
Abstract:
The capacity spectrum method (CSM), one of the methodologies for evaluating the seismic fragility of building structures, has long been recognized as the most convenient, even though it has several limitations in predicting the seismic response of the structures of interest. This paper proposes a procedure for estimating seismic fragility curves using incremental dynamic analysis (IDA) rather than a CSM. To this end, the study compares the seismic fragility curves of a 5-story reinforced concrete (RC) moment frame obtained from both methods. The two sets of fragility curves are similar in the slight and moderate damage states, whereas the curves obtained from the IDA method present less variation (or uncertainty) in the extensive and complete damage states. This is because the IDA method captures the structural response beyond yielding more properly than the CSM and can directly account for higher-mode effects. From these observations, the CSM could overestimate the seismic vulnerability of the studied structure in the extensive and complete damage states.
Keywords: seismic fragility curve, incremental dynamic analysis, capacity spectrum method, reinforced concrete moment frame
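Fragility curves from IDA results are commonly obtained by assuming the intensity measures at which records first reach a damage state are lognormally distributed, then fitting the median and dispersion. A brief sketch with hypothetical capacities (not the paper's data or its damage-state definitions):

```python
import math

def fit_fragility(im_capacities):
    # lognormal fragility: P(damage | IM) = Phi((ln IM - ln median) / beta)
    logs = [math.log(c) for c in im_capacities]
    mu = sum(logs) / len(logs)                          # ln of the median capacity
    beta = (sum((x - mu) ** 2 for x in logs) / len(logs)) ** 0.5  # dispersion
    return math.exp(mu), beta

def fragility(im, median, beta):
    # standard normal CDF evaluated at the log-standardized intensity
    z = (math.log(im) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# hypothetical IDA outputs: spectral accelerations (g) at which each ground
# motion record first drives the frame into the damage state
caps = [0.42, 0.55, 0.61, 0.70, 0.83, 0.95]
median, beta = fit_fragility(caps)
print(round(fragility(median, median, beta), 2))   # 0.5 at the median, by construction
```

The dispersion `beta` is the quantity the abstract says the IDA method estimates with less uncertainty than the CSM in the severe damage states.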
Procedia PDF Downloads 422
19154 Approximations of Fractional Derivatives and Its Applications in Solving Non-Linear Fractional Variational Problems
Authors: Harendra Singh, Rajesh Pandey
Abstract:
The paper presents a numerical method based on an operational matrix of integration and the Rayleigh-Ritz method for the solution of a class of non-linear fractional variational problems (NLFVPs). Chebyshev polynomials of the first kind are used for the construction of the operational matrix. Using the operational matrix and the Rayleigh-Ritz method, the NLFVP is converted into a system of non-linear algebraic equations; solving these equations yields an approximate solution of the NLFVP. A convergence analysis of the proposed method is provided, and numerical experiments are performed to show its applicability. The numerical results obtained are compared with the exact solution and with the solution obtained using Chebyshev polynomials of the third kind, and the results are shown graphically for the different fractional orders involved in the problems.
Keywords: non-linear fractional variational problems, Rayleigh-Ritz method, convergence analysis, error analysis
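Chebyshev polynomials of the first kind, the basis used here to build the operational matrix, satisfy the three-term recurrence T_0(x) = 1, T_1(x) = x, T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x), which can be evaluated directly:

```python
def chebyshev_T(n, x):
    # first-kind Chebyshev polynomial T_n(x) via the three-term recurrence
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

print(chebyshev_T(3, 0.5))   # T_3(x) = 4x^3 - 3x, so 4*0.125 - 1.5 = -1.0
```

The operational-matrix approach expands the unknown function in such a basis, so that integration (and the fractional operators) act as fixed matrices on the coefficient vector; the construction of those matrices is beyond this sketch.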
Procedia PDF Downloads 298
19153 Polarity Classification of Social Media Comments in Turkish
Authors: Migena Ceyhan, Zeynep Orhan, Dimitrios Karras
Abstract:
People in modern societies continuously share their experiences, emotions and thoughts across different areas of life. The information reaches almost everyone in real time and can have an important impact in shaping people's way of living. This phenomenon is well recognized and used to advantage by market representatives trying to profit from it. Given the abundance of information, people and organizations are looking for efficient tools that filter the countless data into important information ready for analysis. This paper is a modest contribution to this field, describing the process of automatically classifying social media comments in the Turkish language as positive or negative. Once the data are gathered and preprocessed, feature sets of selected single words or groups of words are built according to the characteristics of the language used in the texts. These features are later used to train and test the system with different machine learning algorithms (Naïve Bayes, Sequential Minimal Optimization, J48, and Bayesian Linear Regression). The resulting high accuracies can provide important feedback for decision-makers to improve their business strategies accordingly.
Keywords: feature selection, machine learning, natural language processing, sentiment analysis, social media reviews
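Of the learners compared, multinomial Naive Bayes is simple enough to sketch in full. The following is a toy illustration with hypothetical Turkish-flavoured tokens (the paper's actual preprocessing and feature selection are omitted):

```python
import math
from collections import Counter

def train_nb(docs):
    # docs: list of (token-list, label) pairs; counts word frequencies
    # per class for a multinomial Naive Bayes model
    counts = {"pos": Counter(), "neg": Counter()}
    n_docs = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        n_docs[label] += 1
    vocab = set(w for c in counts.values() for w in c)
    return counts, n_docs, vocab

def classify(tokens, counts, n_docs, vocab):
    # pick the class with the highest log posterior, with Laplace smoothing
    total = sum(n_docs.values())
    best, best_lp = None, -math.inf
    for label, c in counts.items():
        lp = math.log(n_docs[label] / total)          # class prior
        denom = sum(c.values()) + len(vocab)
        for w in tokens:
            lp += math.log((c[w] + 1) / denom)        # smoothed likelihood
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# hypothetical training comments (tokens invented for illustration)
docs = [(["harika", "urun"], "pos"), (["cok", "guzel"], "pos"),
        (["berbat", "urun"], "neg"), (["cok", "kotu"], "neg")]
model = train_nb(docs)
print(classify(["guzel", "urun"], *model))   # pos
```

The "groups of words" features the abstract mentions would simply replace the token lists with n-gram lists; the classifier itself is unchanged.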
Procedia PDF Downloads 146