Search results for: estimation of inputs and outputs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2823

2043 Price Effect Estimation of Tobacco on Low-wage Male Smokers: A Causal Mediation Analysis

Authors: Kawsar Ahmed, Hong Wang

Abstract:

The study's goal was to estimate the causal mediation effect of tobacco taxation before and after a price hike among low-income male smokers, with particular emphasis on an effect-estimation pathways framework for continuous and dichotomous variables. From July to December 2021, cross-sectional observational data (n=739) were collected from low-wage smokers in Bangladesh. The quasi-Bayesian technique, a binomial probit model, and a simulation-based sensitivity analysis implemented in the R mediation package were used to estimate the effects. After the price rise for tobacco products, the average number of cigarette or bidi sticks consumed fell from 6.7 to 4.56. Rising tobacco prices had a direct effect on low-income people's decisions to quit or reduce their daily smoking: Average Causal Mediation Effect (ACME) [effect=2.31, 95% confidence interval (C.I.) = (4.71-0.00), p<0.01], Average Direct Effect (ADE) [effect=8.6, 95% C.I. = (6.8-0.11), p<0.001], and significant overall effects (p<0.001). The mediated proportion of the income effect on the smoking choice was 26.1% following the price rise. In the sensitivity analysis, the ACME and ADE curves, based on the observed coefficients of determination, support the hypothesized model as capturing a substantial consequence of the price rise. To reduce smoking behavior, price increases through taxation have a positive causal mediation with income that affects the decision to limit tobacco use and supports healthcare policy for low-income men.
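
The pathway decomposition described above (total effect split into ACME and ADE) can be illustrated with a toy product-of-coefficients mediation analysis. This is a simplified frequentist sketch on synthetic data, not the paper's quasi-Bayesian R mediation workflow; all variable names and coefficient values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 739  # sample size matching the study

# Synthetic stand-in data (illustrative only): price exposure T,
# income-related mediator M, daily cigarette count Y.
T = rng.integers(0, 2, n)                        # before/after price hike
M = 0.5 * T + rng.normal(0, 1, n)                # mediator model
Y = 6.7 - 1.2 * T - 0.8 * M + rng.normal(0, 1, n)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Mediator model M ~ T gives path a; outcome model Y ~ T + M gives
# the direct path (ADE) and the mediator path b.
a = ols(T.reshape(-1, 1), M)[1]
coefs = ols(np.column_stack([T, M]), Y)
ade, b = coefs[1], coefs[2]

acme = a * b                          # indirect (mediated) effect
prop_mediated = acme / (acme + ade)   # proportion of total effect mediated
print(f"ACME={acme:.3f}  ADE={ade:.3f}  proportion mediated={prop_mediated:.2%}")
```

In practice the quasi-Bayesian approach simulates the coefficient posteriors to get confidence intervals rather than reporting point estimates alone.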

Keywords: causal mediation analysis, directed acyclic graphs, tobacco price policy, sensitivity analysis, pathway estimation

Procedia PDF Downloads 102
2042 Promoting Class Cooperation-Competition (Coo-Petition) and Empowerment to Graduating Architecture Students through a Holistic Planning Approach in Their Thesis Proposals

Authors: Felicisimo Azagra Tejuco Jr.

Abstract:

Mentoring architecture thesis students is a critical and exhausting task for both the adviser and the advisee. It poses challenges of resource and time management for the candidate, while demanding the best professional guidance from the mentor. The University of Santo Tomas (Manila, Philippines) is Asia's oldest university, and among its notable programs is its Architecture curriculum. Presently, the five-year Architecture program requires ten semesters of academic coursework. The last three semesters are devoted to each graduating Architecture student's thesis proposal and defense. The thesis proposal is developed and submitted for approval in the subject Research Methods for Architecture (RMA). Data gathering and initial schemes are conducted in Architectural Design (AD) 9, and are finalized and defended in AD 10. In recent years, before the pandemic, the graduating class averaged around 300 candidates. Students are encouraged to explore any topic of interest or relevance. Since 2019-2020, one thesis class has used a community planning approach in mentoring the class. Compared to other sections, the first meeting of RMA has been allocated to a visioning exercise and an assessment of the class's strengths-weaknesses and opportunities-threats (SWOT). Here, the group's work activities have been fine-tuned to address identified concerns while remaining aligned with the academic calendar. Occasional peer critiques complement class lectures. The course ends with the approval of each student's proposal. The final year, or last two semesters, of the graduating class focuses on the approved proposal. The 18 weeks of the first semester consist of regular consultations, complemented by lectures from the adviser or guest speakers. Through remote peer consultations in groups of three to five, the mentor maximized each meeting, encouraging constructive criticism among the class.
At the end of the first semester, mock presentations to an external jury are conducted to check the design outputs for improvement. The final semester is spent largely on the finalization of the plans, with feedback from the previous semester expected to be integrated into the final outputs. Before the final deliberations, at least two technical rehearsals are conducted per group. Regardless of the outcome, an assessment of each student's performance is held as a class, and personal realizations and observations are encouraged. Through online surveys, interviews, and focused group discussions with former students, the effectiveness of the mentoring strategies was reviewed and evaluated. Initial feedback highlighted the relevance of setting a positive tone for the course, constructive criticism from peers and experts, and consciousness of deadlines as essential elements for a productive semester.

Keywords: cooperation, competition, student empowerment, class vision

Procedia PDF Downloads 71
2041 Material Saving Strategies, Technologies and Effects on Return on Sales

Authors: Jasna Prester, Najla Podrug, Davor Filipović

Abstract:

Manufacturing companies invest a significant share of sales in material resources for production. In our sample, 58% of sales is spent on manufacturing inputs, while only 24% of sales goes to salaries. This means that if a company is looking to reduce costs, the potential is greater in reducing material costs than in downsizing. This research shows that manufacturing companies in Croatia did realize material savings over the last three years, and identifies the technologies by which these materials cost savings were achieved. Through a literature review, we found a research gap as to which technologies reduce material consumption. Four regression analyses are used as the research methodology to support our findings.

Keywords: Croatia, materials savings strategies, technologies, return on sales

Procedia PDF Downloads 287
2040 Multi Objective Near-Optimal Trajectory Planning of Mobile Robot

Authors: Amar Khoukhi, Mohamed Shahab

Abstract:

This paper formulates the optimal control problem of mobile robot motion as a nonlinear programming problem (NLP) and solves it using a direct method of numerical optimal control. The NLP is initialized with a B-spline whose node locations are optimized using a genetic search. The system acceleration inputs and sampling periods are treated as optimization variables. Different scenarios with different objective weights are implemented and investigated. Interesting results are found in terms of complying with the expected behavior of a mobile robot system and of time-energy minimization.
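
The time-energy trade-off the abstract describes can be sketched as a small spline-based NLP. This illustration substitutes scipy's Nelder-Mead simplex for the paper's genetic node search, and the endpoints, knot count, and objective weights are made-up values, not those of the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
t = np.linspace(0.0, 1.0, 6)        # spline knot parameters
w_time, w_energy = 1.0, 0.1         # objective weights (hypothetical)

def cost(flat):
    """Weighted time-energy objective for a spline through movable waypoints."""
    pts = np.vstack([start, flat.reshape(-1, 2), goal])
    cs = CubicSpline(t, pts)
    s = np.linspace(0.0, 1.0, 200)
    vel, acc = cs(s, 1), cs(s, 2)
    length = np.linalg.norm(vel, axis=1).mean()    # path-length proxy for time
    energy = (acc ** 2).sum(axis=1).mean()         # squared-acceleration energy
    return w_time * length + w_energy * energy

x0 = np.linspace(start, goal, 6)[1:-1].ravel()     # straight-line initial guess
res = minimize(cost, x0, method="Nelder-Mead")
print("optimised cost:", res.fun)
```

Changing the weights shifts the optimum between shorter (faster) and gentler (lower-energy) trajectories, which is the behavior the paper investigates across scenarios.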

Keywords: multi-objective control, non-holonomic systems, mobile robots, nonlinear programming, motion planning, B-spline, genetic algorithm

Procedia PDF Downloads 356
2039 Knowledge Acquisition as Determinant of Outputs of Innovative Business in Regions of the Czech Republic

Authors: P. Hajek, J. Stejskal

Abstract:

The aim of this paper is to analyze the ability to identify and acquire knowledge from external sources at the regional level in the Czech Republic. The results show that the most important sources of knowledge for innovative activities are sources within the businesses themselves, followed by customers and suppliers. Furthermore, the analysis of relationships between the objective of the innovative activity and the ability to identify and acquire knowledge implies that knowledge obtained from a) customers aims at replacing outdated products and increasing product quality; b) suppliers aims at increasing capacity and flexibility of production; and c) competing businesses aims at growing market share and increasing the flexibility of production and services. Regions should therefore direct their support especially into development and strengthening of networks within the value chain.

Keywords: knowledge acquisition, innovative business, Czech Republic, region

Procedia PDF Downloads 357
2038 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance relating to how PF should be calculated for homogeneous and heterogeneous rock masses, or what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permits.
A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods, obtaining markedly different results. Ultimately, sound engineering judgment and logic are often required to decipher the true meaning and significance (if any) of some PF results.
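
The Monte-Carlo route to PF mentioned above can be illustrated with a minimal planar-failure sketch: sample the uncertain strength parameters, compute FS for each realisation, and count the fraction below one. The geometry and parameter distributions below are hypothetical, not the case-study values.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                     # Monte-Carlo realisations

# Hypothetical planar-failure sketch: treat cohesion and friction angle
# as random variables, slope geometry as deterministic (fixed).
c = rng.normal(15.0, 6.0, N)                    # cohesion, kPa
phi = np.radians(rng.normal(35.0, 3.0, N))      # friction angle, rad
W, alpha, A = 1200.0, np.radians(40.0), 25.0    # weight kN, plane dip, plane area m^2

driving = W * np.sin(alpha)                     # destabilising force
resisting = c * A + W * np.cos(alpha) * np.tan(phi)
FS = resisting / driving                        # factor of safety per realisation

PF = np.mean(FS < 1.0)                          # probability of failure
print(f"mean FS = {FS.mean():.2f}, PF = {PF:.4f}")
```

The point estimate method mentioned in the abstract would instead evaluate FS at a small set of weighted parameter combinations (e.g. mean ± one standard deviation), which is exactly why the two approaches can give markedly different PF values.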

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 202
2037 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data

Authors: Georgiana Onicescu, Yuqian Shen

Abstract:

Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we proposed a two-stage variable selection method by extending the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed the variable selection using Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed the model selection with only the selected variables from stage I and compared the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases when all candidate risk factors are independently normally distributed, or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, a binary indicator and the combination of a binary indicator and Lasso, were considered and compared as alternative methods. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and dependent cases considered. When compared with the one-stage approach and the other two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
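
The two-stage idea (shrinkage-based selection, then refitting on the survivors) can be sketched with an ordinary frequentist Lasso as a stand-in for the Bayesian spatial version described above. The data, dimensions, and true coefficients below are synthetic, chosen only to mimic the correlated-risk-factor scenario.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(1)
n, p = 200, 10

# Correlated candidate risk factors (multicollinearity, as in the
# dependent scenario); only the first three truly affect the outcome.
cov = 0.6 * np.ones((p, p)) + 0.4 * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 1.0 * X[:, 2] + rng.normal(0, 1, n)

# Stage I: Lasso shrinks irrelevant coefficients to exactly zero.
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)

# Stage II: refit an unpenalised model on the selected variables only,
# removing the shrinkage bias from the retained coefficients.
refit = LinearRegression().fit(X[:, selected], y)
print("selected:", selected, "refit coefficients:", np.round(refit.coef_, 2))
```

The Bayesian version replaces the penalty with a Laplace prior and the refit with posterior inference, but the select-then-refit structure is the same.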

Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection

Procedia PDF Downloads 126
2036 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and detection of perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location parameters, respectively, based on the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data would be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced, and the performance of Xbar charts is undesirably low; e.g. occasional outliers in the rational subgroups in the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, it is necessary to use estimators that are robust against contaminations, which may exist in Phase I. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn-estimator of scale, and the M-estimator of scale with logistic psi-function in the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber psi-function and logistic psi-function in the estimation of the process location parameter.
Phase I efficiency of the proposed estimators and Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics both under normality and against diffuse-localized and symmetric-asymmetric contaminations using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and Xbar charts constructed using robust estimators have higher power in detecting disturbances, compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
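
The effect the abstract describes, Phase I outliers inflating conventional control limits while robust estimators resist them, can be sketched with a small comparison. The MAD-based dispersion estimator below is a simple stand-in for the paper's Qn- and M-estimators; the data and outlier placement are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 50, 5                       # Phase I: 50 rational subgroups of size 5
data = rng.normal(10.0, 1.0, (m, n))
data[3, 0] += 10.0                 # plant occasional outliers in Phase I
data[17, 2] -= 10.0
data[31, 4] += 9.0

c4 = 0.9400                        # unbiasing constant for subgroup size 5

# Conventional chart: grand mean and mean subgroup standard deviation.
xbar = data.mean(axis=1)
center_c = xbar.mean()
sigma_c = data.std(axis=1, ddof=1).mean() / c4

# Robust stand-in: median of subgroup means for location, scaled median
# absolute deviation (MAD) for dispersion.
center_r = np.median(xbar)
sigma_r = 1.4826 * np.median(np.abs(data - np.median(data)))

for name, c, s in [("conventional", center_c, sigma_c),
                   ("robust", center_r, sigma_r)]:
    lcl, ucl = c - 3 * s / np.sqrt(n), c + 3 * s / np.sqrt(n)
    print(f"{name}: center={c:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")
```

The conventional limits widen because the outliers inflate the subgroup standard deviations, which is exactly the Phase II detection delay the abstract warns about.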

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 179
2035 Adaptive Motion Planning for 6-DOF Robots Based on Trigonometric Functions

Authors: Jincan Li, Mingyu Gao, Zhiwei He, Yuxiang Yang, Zhongfei Yu, Yuanyuan Liu

Abstract:

Building an appropriate motion model is crucial for trajectory planning of robots and directly determines the operational quality. An adaptive acceleration and deceleration motion planning method based on trigonometric functions for the end-effector of 6-DOF robots in the Cartesian coordinate system is proposed in this paper. This method not only achieves smooth translational and rotational motion by constructing a continuous jerk model, but also automatically adjusts the parameters of the trigonometric functions according to the variable inputs and the kinematic constraints. The results of computer simulation show that this method is correct and effective in achieving adaptive motion planning for linear trajectories.
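
A minimal example of a trigonometric acceleration profile with the continuity property the abstract relies on: a sine-shaped acceleration has continuous jerk at the segment boundaries, and integrating twice fixes its amplitude from the required displacement. The duration and displacement values are hypothetical.

```python
import numpy as np

T = 2.0            # segment duration, s (hypothetical)
L = 1.0            # required displacement, m (hypothetical)
t = np.linspace(0.0, T, 1001)

# a(t) = A*sin(2*pi*t/T) accelerates then decelerates smoothly, with
# zero acceleration (and continuous jerk) at both segment boundaries.
# Integrating twice and requiring s(T) = L fixes the amplitude A.
A = 2 * np.pi * L / T**2
a = A * np.sin(2 * np.pi * t / T)
v = A * T / (2 * np.pi) * (1 - np.cos(2 * np.pi * t / T))
s = A * T / (2 * np.pi) * (t - T / (2 * np.pi) * np.sin(2 * np.pi * t / T))

print(f"end position {s[-1]:.3f} m, end velocity {v[-1]:.3f} m/s")
```

The profile starts and ends at rest, so consecutive segments can be chained without jerk discontinuities; the adaptive planner in the paper additionally rescales such parameters against the kinematic limits.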

Keywords: kinematic constraints, motion planning, trigonometric function, 6-DOF robots

Procedia PDF Downloads 256
2034 Wind Resource Estimation and Economic Analysis for Rakiraki, Fiji

Authors: Kaushal Kishore

Abstract:

Immense amounts of imported fuel are used in Fiji for electricity generation, transportation, and miscellaneous household work. To alleviate this dependency on fossil fuel, paramount importance has been given to instigating the utilization of renewable energy sources for power generation and to reducing environmental degradation. Amongst the many renewable energy sources, wind has been identified as one of the best renewable sources comprehensively available in Fiji. In this study, the wind resource assessment for three locations in Rakiraki, Fiji has been carried out: Rokavukavu, Navolau, and Tuvavatu. The average wind speeds at 55 m above ground level (a.g.l.) at the Rokavukavu, Navolau, and Tuvavatu sites are 5.91 m/s, 8.94 m/s and 8.13 m/s, with turbulence intensities of 14.9%, 17.1%, and 11.7% respectively. The moment fitting method has been used to estimate the Weibull parameters and the power density at each site. A high-resolution wind resource map for the three locations has been developed using the Wind Atlas Analysis and Application Program (WAsP). The results obtained from WAsP exhibited good wind potential at the Navolau and Tuvavatu sites. A wind farm comprising six Vergnet 275 kW wind turbines has been proposed at each of the Navolau and Tuvavatu sites. The annual energy production (AEP) for each wind farm is estimated and an economic analysis is performed. The economic analysis for the proposed wind farms at the Navolau and Tuvavatu sites showed payback periods of 5 and 6 years respectively.
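
The moment fitting step for the Weibull parameters and the power density can be sketched as follows. The wind series below is synthetic (the Navolau mean speed is used only to scale the toy data), and the shape value of 2.2 is a made-up ground truth for the demonstration.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

rng = np.random.default_rng(3)
# Synthetic "one year of hourly wind speeds", scaled so the Weibull
# scale parameter is near the reported 8.94 m/s site mean.
v = 8.94 * rng.weibull(2.2, 8760)

mean, std = v.mean(), v.std(ddof=1)
cv = std / mean

# Method of moments: solve CV^2 = Gamma(1+2/k)/Gamma(1+1/k)^2 - 1 for the
# shape k, then recover the scale c from the mean.
f = lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - cv ** 2
k = brentq(f, 0.5, 10.0)
c = mean / gamma(1 + 1 / k)

rho = 1.225                                            # air density, kg/m^3
power_density = 0.5 * rho * c ** 3 * gamma(1 + 3 / k)  # mean of 0.5*rho*v^3, W/m^2
print(f"k={k:.2f}  c={c:.2f} m/s  power density={power_density:.0f} W/m^2")
```

WAsP performs a related fit per wind-direction sector and extrapolates it across terrain, but the per-site Weibull arithmetic is the same.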

Keywords: annual energy production, Rakiraki Fiji, turbulence intensity, Weibull parameter, wind speed, Wind Atlas Analysis and Application Program

Procedia PDF Downloads 175
2033 Artificial Neural Networks with Decision Trees for Diagnosis Issues

Authors: Y. Kourd, D. Lefebvre, N. Guersi

Abstract:

This paper presents a new idea for a fault detection and isolation (FDI) technique applied to industrial systems. The technique is based on fault-free and faulty behavior neural network models (NNFMs). The NNFMs are used for residual generation, while a decision tree architecture is used for residual evaluation. The decision tree is built from data collected from the NNFMs' outputs and is used to isolate detectable faults based on computed thresholds. Each branch of the tree corresponds to a specific residual. With the decision tree, it becomes possible to take the appropriate decision regarding the actual process behavior by evaluating a small number of residuals. Compared to the usual systematic evaluation of all residuals, the proposed technique requires less computational effort and can be used for online diagnosis. An application example is presented to illustrate and confirm the effectiveness and accuracy of the proposed approach.

Keywords: neural networks, decision trees, diagnosis, behaviors

Procedia PDF Downloads 485
2032 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. 
The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediment in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
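
A toy version of GLUE-style unmixing validated against a virtual mixture might look like the following: sample candidate source-proportion vectors on the simplex, retain the behavioural ones that reproduce the observed tracer signature, and summarise them. The tracer signatures, noise level, and behavioural threshold below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical three-source unmixing: rows = geochemical tracers,
# columns = western / central / eastern sub-basin source signatures.
S = np.array([[12.0, 30.0, 55.0],
              [ 4.0, 18.0,  9.0],
              [70.0, 40.0, 25.0]])
true_p = np.array([0.20, 0.12, 0.68])            # virtual sediment mixture
y = S @ true_p + rng.normal(0.0, 0.5, 3)         # "observed" tracer values

# GLUE: sample candidate proportion vectors on the simplex and keep the
# behavioural ones whose simulated mixture best fits the observation.
P = rng.dirichlet(np.ones(3), size=200_000)
err = np.linalg.norm(P @ S.T - y, axis=1)
behavioural = P[err < np.quantile(err, 0.001)]   # best 0.1% of samples

est = behavioural.mean(axis=0)                   # source contribution estimate
lo, hi = np.percentile(behavioural, [5, 95], axis=0)
print("estimated contributions:", np.round(est, 2))
print("90% uncertainty bands:", np.round(lo, 2), "to", np.round(hi, 2))
```

Because every retained sample is a full proportion vector, the spread of the behavioural set directly provides the uncertainty ranges that the abstract reports alongside the mean contributions.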

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 65
2031 Networks, Regulations and Public Action: The Emerging Experiences of Sao Paulo

Authors: Lya Porto, Giulia Giacchè, Mario Aquino Alves

Abstract:

The paper aims to describe the linkage between government and civil society, proposing a study on agro-ecological agriculture policy and urban action in São Paulo city and underlining the main achievements obtained. The negotiation processes between social movements and the government (inputs) and their results in political regulation and public action for Urban Agriculture (UA) in São Paulo city (outputs) have been investigated. The method adopted is qualitative, with techniques of semi-structured interviews, participant observation, and documental analysis. The authors conducted 30 semi-structured interviews with organic farmers, activists, and governmental and non-governmental managers. Participant observation was conducted in public gardens, urban farms, public audiences, democratic councils, and social movement meetings. Finally, public plans and laws were also analyzed. São Paulo city, with around 12 million inhabitants spread over 1,522 km2, is the economic capital of Brazil, marked by spatial and socioeconomic segregation, currently aggravated by an environmental crisis characterized by water scarcity, pollution, and climate change. In recent years, Urban Agriculture (UA) social movements gained strength and struggled for a different city with more green areas, organic food production, and public occupation. As the dynamics of UA arise from the action of multiple actors and institutions that struggle to build multiple senses of UA, the analysis will be based on the literature on solidarity economy, governance, public action, and networks. Those theories will frame the analysis, which will emphasize the inter-subjectivity built between subjects, as well as the hybrid dynamics of multiple actors and spaces in the construction of policies for UA. Concerning UA, we identified four main typologies based on land ownership, main function (economic or activist), form of organization of the space, and type of production (organic or not).
The City Hall registers 500 productive agriculture units, with around 1,500 producers, but researchers estimate a larger number of units. Concerning the social movements, we identified three categories that differ in goals and types of organization, but all of them work through networks of activists and/or organizations. The first category does not consider itself a movement but a network; its members occupy public spaces to grow organic food and to propose another type of social relations in the city, an action similar to what became known as the green guerrillas. The second is a movement structured to raise awareness of agro-ecological activities. The third is a network of social movements, farmers, organizations, and politicians focused on pressure on, and negotiation with, the executive and legislative branches of government to approve regulations and policies on organic and agro-ecological Urban Agriculture. We conclude by highlighting how the interaction among institutions and civil society produced important achievements for the recognition and implementation of UA within the city. Some results of this process are awareness of local production, legal and institutional recognition of the rural zone around the city in the planning tool, investment in organic school public procurement, the establishment of participatory management of public squares, and the inclusion of UA in the Municipal Strategic Plan and Master Plan.

Keywords: public action, policies, agroecology, urban and peri-urban agriculture, Sao Paulo

Procedia PDF Downloads 280
2030 Feasibility of Replacing Inductive Instrument Transformers with Non-Conventional Instrument Transformers

Authors: David A. Wallace, Salakjit J. Nilboworn

Abstract:

Secure and reliable transmission and distribution of electrical power is crucial in today's ever-increasing demand for electricity. Traditional methods of protecting the electrical grid have relied on relaying systems receiving voltage and current inputs from inductive instrument transformers (ITs). This method has provided robust and stable performance throughout the years. Today, with the advent of new non-conventional instrument transformers (NCITs) and sensors, the electrical landscape is changing. These new systems have the ability to provide the same electrical performance as traditional instrument transformers, with the added features of data acquisition, communication, a smaller footprint, lower cost, and resistance to GMD/GIC events.

Keywords: non-conventional instrument transformers, digital substations, smart grids, micro-grids

Procedia PDF Downloads 72
2029 Hierarchical Surface Inspired by Lotus-Leaf for Electrical Generators from Waterdrop

Authors: Jaewook Ha, Jin-beak Kim, Seongmin Kim

Abstract:

In order to address global warming and climate change issues, increased efforts have been devoted to clean and sustainable energy sources as well as new energy-generating devices. A nanogenerator is a device that converts mechanical/thermal energy produced by small-scale physical change into electricity. Here we propose that a lotus-leaf-inspired hierarchical surface can be used to prepare a triboelectric nanogenerator. The surface consists of polydimethylsiloxane microscale pillars and polytetrafluoroethylene nanoparticles. The interaction between this surface and water was studied, and the electrical outputs from the motion of a single water drop were measured. A 40-μL water drop can generate a peak voltage of 1 V and a peak current of 0.7 μA. This nanogenerator might be used to drive electric devices in outdoor environments in a sustainable manner.

Keywords: hierarchical surface, lotus-leaf, electrical generator, waterdrop

Procedia PDF Downloads 281
2028 Cellular Traffic Prediction through Multi-Layer Hybrid Network

Authors: Supriya H. S., Chandrakala B. M.

Abstract:

Deep learning based models have recently been successfully adopted for network traffic prediction. However, training a deep learning model for various prediction tasks is considered critical for several reasons. This research work develops a Multi-Layer Hybrid Network (MLHN) for network traffic prediction and analysis; MLHN comprises three distinct networks handling different inputs for custom feature extraction. Furthermore, an optimized and efficient parameter-tuning algorithm is introduced to enhance parameter learning. MLHN is evaluated on the "Big Data Challenge" dataset using the Mean Absolute Error, Root Mean Square Error, and R² as metrics; furthermore, MLHN's efficiency is demonstrated through comparison with a state-of-the-art approach.

Keywords: MLHN, network traffic prediction

Procedia PDF Downloads 73
2027 Estimation of Constant Coefficients of Bourgoyne and Young Drilling Rate Model for Drill Bit Wear Prediction

Authors: Ahmed Z. Mazen, Nejat Rahmanian, Iqbal Mujtaba, Ali Hassanpour

Abstract:

In oil and gas well drilling, the drill bit is an important part of the Bottom Hole Assembly (BHA), which is installed and designed to drill and produce a hole by several mechanisms. The efficiency of the bit depends on many drilling parameters such as weight on bit, rotary speed, and mud properties. When the bit is pulled out of the hole, the evaluation of the bit damage must be recorded very carefully to guide engineers in selecting bits for further planned wells. A worn bit in the hole may cause severe damage, leading to cutter or cone losses at the bottom of the hole, where a fishing job will have to take place; all of this increases the operating cost. The main way to reduce the cost of the drilling operation is to maximize the rate of penetration by analyzing real-time data to predict drill bit wear while drilling. There are numerous models in the literature for prediction of the rate of penetration based on drilling parameters, mostly based on empirical approaches. One of the most commonly used approaches is the Bourgoyne and Young model, where the rate of penetration can be estimated from the drilling parameters as well as a wear index using an empirical correlation, provided all the constants and coefficients are accurately determined. This paper introduces a new methodology to estimate the eight coefficients of the Bourgoyne and Young model using the gPROMS parameter estimation tool GPE (version 4.2.0). Real data collected from similar formations (12 ¼’ sections) in two different fields in Libya are used to estimate the coefficients. The estimated coefficients are then used in the equations and applied to nearby wells in the same fields to predict bit wear.
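
Because the Bourgoyne and Young model has the form ROP = exp(a1 + Σ aj·xj), taking logarithms turns estimating the eight coefficients into a linear least-squares problem, which conveys the estimation idea without gPROMS. The records, coefficient values, and noise level below are synthetic, not the Libyan field data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300                                         # synthetic drilling records

# x2..x8: the seven normalized drilling functions of the model
# (depth, compaction, differential pressure, weight on bit, rotary
# speed, tooth wear, hydraulics) -- here just standardized stand-ins.
X = rng.normal(0.0, 1.0, (n, 7))
a_true = np.array([2.0, 0.9, -0.5, 0.3, 0.2, -0.4, 0.1, 0.05])
rop = np.exp(a_true[0] + X @ a_true[1:]) * np.exp(rng.normal(0, 0.05, n))

# ln(ROP) = a1 + sum_j a_j * x_j  ->  ordinary least squares.
A = np.column_stack([np.ones(n), X])
a_hat, *_ = np.linalg.lstsq(A, np.log(rop), rcond=None)
print("estimated coefficients:", np.round(a_hat, 2))
```

In practice the drilling functions are correlated and bounded, which is why the paper resorts to a constrained parameter-estimation tool rather than a plain log-linear regression.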

Keywords: Bourgoyne and Young model, bit wear, gPROMS, rate of penetration

Procedia PDF Downloads 144
2026 Heavy Metal Contamination in Sediments of North East Coast of Tamilnadu by EDXRF Technique

Authors: R. Ravisankar, Tholkappian A. Chandrasekaran, Y. Raghu, K. K. Satapathy, M. V. R. Prasad, K. V. Kanagasabapathy

Abstract:

The coastal areas of Tamilnadu are assuming greater importance owing to increasing human population, urbanization and accelerated industrial activities. In the present study, sediment samples were collected along the east coast of Tamilnadu for assessment of heavy metal pollution. The concentrations of 13 selected heavy metals (Mg, Al, Si, K, Ca, Ti, Fe, V, Cr, Mn, Co, Ni and Zn) were determined by the energy dispersive X-ray fluorescence (EDXRF) technique. To describe the pollution status, contamination factors and the pollution load index are calculated and reported. The results suggest that the sources of metal contamination are mainly attributable to natural inputs from the surrounding environment.
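
The two indices used above have standard definitions that can be computed directly: the contamination factor is the measured concentration divided by the background level, and the pollution load index is the geometric mean of the contamination factors. The concentrations and background values below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical sediment concentrations (mg/kg) against assumed
# background (crustal average) levels for four of the metals.
metals = ["Cr", "Mn", "Ni", "Zn"]
measured = np.array([95.0, 850.0, 40.0, 120.0])
background = np.array([90.0, 850.0, 68.0, 95.0])

CF = measured / background                  # contamination factor per metal
PLI = CF.prod() ** (1.0 / len(CF))          # geometric mean of the CFs

for metal, cf in zip(metals, CF):
    print(f"{metal}: CF = {cf:.2f}")
print(f"PLI = {PLI:.2f}  ({'polluted' if PLI > 1 else 'unpolluted'})")
```

A PLI below 1 indicates no overall metal pollution, consistent with the study's conclusion that the contamination is dominated by natural inputs.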

Keywords: sediments, heavy metals, EDXRF, pollution contamination factors

Procedia PDF Downloads 322
2025 Estimation and Removal of Chlorophenolic Compounds from Paper Mill Waste Water by Electrochemical Treatment

Authors: R. Sharma, S. Kumar, C. Sharma

Abstract:

A number of toxic chlorophenolic compounds are formed during pulp bleaching. The nature and concentration of these compounds largely depend upon the amount and nature of the bleaching chemicals used. These compounds are highly recalcitrant and difficult to remove, and are only partially removed by the biochemical treatment processes adopted by the paper industry. Identification and estimation of these chlorophenolic compounds was carried out in the primary and secondary clarified effluents from the paper mill by GC-MS; twenty-six chlorophenolic compounds were identified and estimated in the paper mill waste waters. Electrochemical treatment is an efficient method for the oxidation of pollutants and has successfully been used to treat textile and oil waste waters. Electrochemical treatment using a less expensive anode material, stainless steel, was tried to study their removal. The electrochemical assembly comprised a DC power supply, a magnetic stirrer and stainless steel (316 L) electrodes. Operating conditions were optimized, and the treatment was performed under the optimized conditions. Results indicate that 68.7% and 83.8% of chlorophenolic compounds are removed during 2 h of electrochemical treatment from the primary and secondary clarified effluents, respectively. Further, there is a reduction of 65.1%, 60% and 92.6% in COD, AOX and color, respectively, for the primary clarified effluent, and of 83.8%, 75.9% and 96.8% in COD, AOX and color, respectively, for the secondary clarified effluent. Electrochemical treatment was also found to significantly increase the biodegradability index of the wastewater by converting the non-biodegradable fraction into a biodegradable fraction. Thus, electrochemical treatment is an efficient method for the degradation of chlorophenolic compounds and the removal of color, AOX and other recalcitrant organic matter present in paper mill waste water.
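The headline removal figures reduce to simple mass-balance arithmetic. The sketch below shows the percent-removal and biodegradability-index (BOD/COD) calculations; the influent/effluent values are made up for illustration, not the study's measurements.

```python
def removal_pct(influent, effluent):
    """Percent removal of a pollutant across a treatment step."""
    return 100.0 * (influent - effluent) / influent

# Illustrative COD and BOD values (mg/L) before/after electrochemical
# treatment; numbers are hypothetical.
cod_in, cod_out = 1200.0, 420.0
bod_in, bod_out = 180.0, 160.0

cod_removal = removal_pct(cod_in, cod_out)        # 65.0 %

# The biodegradability index BOD/COD rises when the recalcitrant
# (COD-only) fraction is oxidised faster than the biodegradable fraction.
bi_before = bod_in / cod_in
bi_after = bod_out / cod_out
```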

Keywords: chlorophenolics, effluent, electrochemical treatment, wastewater

Procedia PDF Downloads 374
2024 Prediction of Coronary Heart Disease Using Fuzzy Logic

Authors: Elda Maraj, Shkelqim Kuka

Abstract:

Coronary heart disease causes many deaths in the world, and unfortunately this burden will continue to increase in the future. In this paper, a fuzzy logic model to predict coronary heart disease is presented. The model was developed with seven input variables and one output variable and was applied to 30 patients in Albania, using the Fuzzy Logic Toolbox of MATLAB. The fuzzy model inputs are cholesterol, blood pressure, physical activity, age, BMI, smoking, and diabetes, and the output is the disease classification. The fuzzy sets and membership functions are chosen in an appropriate manner, and the centroid method is used for defuzzification. The database is taken from the University Hospital Center "Mother Teresa" in Tirana, Albania.
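A Mamdani-style slice of such a system can be sketched in a few lines. The two rules, the input ranges and the membership functions below are hypothetical (the study uses seven inputs and MATLAB's toolbox); the sketch only illustrates triangular memberships and centroid defuzzification.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical two-rule slice of the model:
#   R1: IF cholesterol is high   THEN risk is high
#   R2: IF cholesterol is normal THEN risk is low
chol = 230.0                                   # mg/dL, illustrative input
mu_high_chol = tri(chol, 200, 280, 360)
mu_norm_chol = tri(chol, 120, 180, 240)

risk = np.linspace(0.0, 1.0, 501)              # output universe
low_risk = tri(risk, 0.0, 0.2, 0.5)
high_risk = tri(risk, 0.5, 0.8, 1.0)

# Mamdani inference: clip each consequent by its rule strength,
# aggregate by max, then defuzzify with the centroid (as in the paper).
agg = np.maximum(np.minimum(mu_high_chol, high_risk),
                 np.minimum(mu_norm_chol, low_risk))
crisp_risk = float((agg * risk).sum() / agg.sum())
```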

Keywords: coronary heart disease, fuzzy logic toolbox, membership function, prediction model

Procedia PDF Downloads 142
2023 Government Size and Economic Growth: Testing the Non-Linear Hypothesis for Nigeria

Authors: R. Santos Alimi

Abstract:

Using time-series techniques, this study empirically tested the validity of the existing theory that there is a nonlinear relationship between government size and economic growth, such that government spending is growth-enhancing at low levels but growth-retarding at high levels, with the optimal size occurring somewhere in between. The study employed three estimation equations. First, two measures of government size are considered: (i) the share of total expenditures in gross domestic product, and (ii) the share of recurrent expenditures in gross domestic product. Second, the study adopted real GDP without the government expenditure component, as a variant measure of economic growth alongside real total GDP, in estimating the optimal level of government expenditure. The study is based on annual Nigerian country-level data for the period 1970 to 2012. Estimation results show that the inverted U-shaped curve exists for both measures of government size, with estimated optimum shares of 19.81% and 10.98%, respectively. Finally, with the adoption of real GDP without the government expenditure component, the optimum government size was found to be 12.58% of GDP, whereas the actual share of government spending over 2000 to 2012 averaged about 13.4%. This study adds to the literature confirming that an optimal government size exists not only for developed economies but also for a developing economy like Nigeria. Thus, a public intervention threshold level that fosters economic growth is a reality; beyond this point, economic growth should be left in the hands of the private sector. This finding has significant implications for the appraisal of government spending and budgetary policy design.
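The optimum share falls out of a quadratic (inverted-U) specification: with growth = b0 + b1*s + b2*s^2, the turning point is at s* = -b1 / (2*b2). The sketch below fits synthetic data built to peak at the paper's 19.81%; the actual study estimates the relationship with fully modified OLS on Nigerian data.

```python
import numpy as np

# Synthetic data: growth rate vs. government spending share of GDP,
# generated from a known inverted-U so the recovered optimum can be
# checked. Not the study's 1970-2012 Nigerian series.
share = np.linspace(5, 35, 31)                     # % of GDP
growth = -0.02 * (share - 19.81) ** 2 + 5.0        # peak at 19.81%

# Fit growth = b0 + b1*share + b2*share^2 and locate the turning point.
b2, b1, b0 = np.polyfit(share, growth, 2)          # highest degree first
optimum_share = -b1 / (2.0 * b2)
```

A negative b2 confirms the inverted-U shape; with b2 positive the turning point would be a minimum, not an optimum.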

Keywords: public expenditure, economic growth, optimum level, fully modified OLS

Procedia PDF Downloads 407
2022 Estimation of Endogenous Brain Noise from Brain Response to Flickering Visual Stimulation Magnetoencephalography Visual Perception Speed

Authors: Alexander N. Pisarchik, Parth Chholak

Abstract:

Intrinsic brain noise was estimated via magnetoencephalograms (MEG) recorded during perception of flickering visual stimuli with frequencies of 6.67 and 8.57 Hz. First, we measured the mean phase difference between the flicker signal and the steady-state event-related field (SSERF) in the occipital area, where the brain response at the flicker frequencies and their harmonics appeared in the power spectrum. Then, we calculated the probability distribution of the phase fluctuations in the regions of frequency locking and computed its kurtosis. Since kurtosis is a measure of the distribution’s sharpness, we suppose that inverse kurtosis is related to intrinsic brain noise. In our experiments, the kurtosis value varied among subjects from K = 3 to K = 5 for 6.67 Hz and from 2.6 to 4 for 8.57 Hz. The majority of subjects demonstrated leptokurtic distributions (K > 3), i.e., the distribution tails approached zero more slowly than Gaussian. In addition, we found a strong correlation between kurtosis and brain complexity measured as the correlation dimension, such that the MEGs of subjects with higher kurtosis exhibited lower complexity. The obtained results are discussed in the framework of nonlinear dynamics and complex network theories. Specifically, in a network of coupled oscillators, phase synchronization is mainly determined by two antagonistic factors: noise and the coupling strength. While noise worsens phase synchronization, coupling improves it. If we assume that each neuron and each synapse contribute to brain noise, a larger neuronal network should have stronger noise, and therefore worse phase synchronization, which results in smaller kurtosis. The described method for brain noise estimation can be useful for diagnostics of brain pathologies associated with abnormal brain noise.
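The kurtosis statistic underlying the method is easy to reproduce. The sketch below compares the Pearson (non-excess) kurtosis, K = 3 for a Gaussian, of a Gaussian and a heavier-tailed Laplace sample standing in for the phase-fluctuation distributions; the data are synthetic, not MEG recordings.

```python
import numpy as np

def pearson_kurtosis(x):
    """Non-excess kurtosis: E[(x - mean)^4] / Var(x)^2; 3 for a Gaussian."""
    z = np.asarray(x, dtype=float)
    z = z - z.mean()
    return (z ** 4).mean() / (z ** 2).mean() ** 2

# Synthetic stand-ins for phase-difference samples in the locking region.
rng = np.random.default_rng(1)
gauss = rng.normal(0.0, 0.3, 20000)       # mesokurtic, K near 3
laplace = rng.laplace(0.0, 0.3, 20000)    # leptokurtic, K near 6

k_gauss = pearson_kurtosis(gauss)
k_laplace = pearson_kurtosis(laplace)
```

Sharper, heavier-tailed phase distributions give K > 3, which in the paper's interpretation corresponds to lower intrinsic noise.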

Keywords: brain, flickering, magnetoencephalography, MEG, visual perception, perception time

Procedia PDF Downloads 133
2021 An Ensemble-based Method for Vehicle Color Recognition

Authors: Saeedeh Barzegar Khalilsaraei, Manoocheher Kelarestaghi, Farshad Eshghi

Abstract:

The vehicle color, as a prominent and stable feature, helps to identify a vehicle more accurately; vehicle color recognition is therefore of great importance in intelligent transportation systems. Unlike conventional methods, which use only a single Convolutional Neural Network (CNN) for feature extraction or classification, in this paper four CNNs with different architectures, each performing well on different classes, are trained to extract various features from the input image. To take advantage of the distinct capability of each network, their outputs are combined using a stack generalization algorithm as an ensemble technique. As a result, the final model performs better in vehicle color identification than each CNN individually. Evaluation results in terms of overall average accuracy and accuracy variance show that the proposed method outperforms state-of-the-art rivals.
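Stack generalization can be illustrated without the CNNs: treat each base model as a column of class-probability outputs and train a simple meta-learner on them. Everything below is synthetic and simplified; the paper's CNN architectures and meta-classifier are not reproduced.

```python
import numpy as np

# Synthetic binary toy problem: y stands in for a (one-vs-rest) color label.
rng = np.random.default_rng(2)
n = 400
y = rng.integers(0, 2, n)

# Four "base models": each column is a noisy probability view of y,
# mimicking four CNNs with different strengths.
base_probs = np.column_stack(
    [np.clip(y + rng.normal(0, s, n), 0, 1) for s in (0.3, 0.4, 0.5, 0.6)]
)

# Meta-learner: least-squares weights over the base outputs, a simple
# stand-in for the stacked meta-classifier.
A = np.column_stack([np.ones(n), base_probs])
w, *_ = np.linalg.lstsq(A, y.astype(float), rcond=None)
meta_pred = (A @ w > 0.5).astype(int)
accuracy = (meta_pred == y).mean()
```

In practice the meta-learner is trained on held-out (out-of-fold) base-model predictions to avoid leaking the base models' training data into the stacking step.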

Keywords: vehicle color recognition, ensemble algorithm, stack generalization, convolutional neural network

Procedia PDF Downloads 67
2020 Aligning Organizational Culture and Compensation Strategies

Authors: Giuseppe Maria Russo, Patrícia Amélia Tomei, Antônio Linhares, André Moreira Santos

Abstract:

Alignment of management strategies, policies and practices with organizational culture holds great potential to meet the challenges of retaining professionals and maintaining their commitment. In this article, the authors argue that when it is aligned with company strategy, compensation acts as an incentive for developing common visions within the organizational culture. The article verified the correlation between types of culture and the strategic components of compensation, and provided inputs for the definition of strategies aligned with cultural typologies. We conclude that the impact of compensation variables varies according to the type of organizational culture. This result reinforces the theory that different cultures define different organizational strategies; thus, compensation strategies may explain types of organizational culture.

Keywords: compensation, Handy’s cultural typology, organizational culture, rewards

Procedia PDF Downloads 648
2019 The Role of Demographics and Service Quality in the Adoption and Diffusion of E-Government Services: A Study in India

Authors: Sayantan Khanra, Rojers P. Joseph

Abstract:

Background and Significance: This study analyzes the role of demographic and service quality variables in the adoption and diffusion of e-government services among users in India. It examines users' perceptions of e-government services and investigates the key variables that are most salient to the Indian populace. Description of the Basic Methodologies: The methodology adopted is hierarchical regression analysis, which explores the impact of the demographic variables and the quality dimensions on the willingness to use e-government services in two steps. First, the impact of the demographic variables on the willingness to use e-government services is examined. In the second step, the quality dimensions are added to the model to explain variance in excess of the prior contribution of the demographic variables. Present Status: The study is in the data collection stage, in collaboration with a highly reliable, authentic and adequate source of user data. Assuming that the population of the study comprises all Internet users in India, a sample of more than 10,000 randomly selected respondents is being approached. Data are being collected using an online survey questionnaire. A pilot survey has already been carried out to refine the questionnaire, with inputs from an expert in management information systems and a small group of users of e-government services in India. The first three questions in the survey pertain to the Internet usage pattern of a respondent and probe whether the person has used e-government services. If the respondent confirms that he/she has used e-government services, then an aggregate of 15 indicators is used to measure, on a five-point Likert scale, the quality dimensions under consideration and the willingness of the respondent to use e-government services.
If the respondent reports that he/she has not used e-government services, then a few optional questions are asked to understand the reason(s). The last four questions in the survey collect data on the demographic variables. An Indication of the Major Findings: Based on an extensive literature review, several propositions were developed and an initial research model is proposed. A major outcome expected at the completion of the study is a research model that helps to explain the relationship among the demographic variables, the service quality dimensions, and the willingness to adopt e-government services, particularly in an emerging economy like India. Concluding Statement: Governments of emerging economies and other relevant agencies can use the findings from the study in designing, updating, and promoting e-government services to enhance public participation, which in turn would help to improve efficiency, convenience, engagement, and transparency in implementing these services.
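The two-step procedure can be sketched directly: fit the demographics-only model, then the full model, and report the increment in R-squared. The variable names and data below are synthetic placeholders for the study's constructs, not survey results.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (intercept included)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / y.var()

# Synthetic stand-ins: two demographic variables and two service-quality
# scores predicting willingness to use e-government services.
rng = np.random.default_rng(3)
n = 300
demog = rng.normal(size=(n, 2))
quality = rng.normal(size=(n, 2))
willing = demog @ [0.4, 0.2] + quality @ [0.6, 0.5] + rng.normal(0, 1, n)

# Step 1: demographics only. Step 2: add the quality dimensions.
r2_step1 = r_squared(demog, willing)
r2_step2 = r_squared(np.column_stack([demog, quality]), willing)
delta_r2 = r2_step2 - r2_step1   # variance explained beyond step 1
```

The quantity reported from such a hierarchy is delta R-squared, the variance the quality dimensions explain over and above the demographic block.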

Keywords: adoption and diffusion of e-government services, demographic variables, hierarchical regression analysis, service quality dimensions

Procedia PDF Downloads 254
2018 Open Science Philosophy, Research and Innovation

Authors: C. Ardil

Abstract:

Open Science rests on the premise that universal scientific knowledge is the product of a collective scholarly and social collaboration involving all stakeholders, and that knowledge belongs to the global society: scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. In its broadest definition, Open Science encompasses open access to publications, open research data and methods, open source software, open educational resources, open evaluation, peer review, notebooks, monographs, citizen science, and research crowdfunding. It has the potential to increase the quality, impact and benefits of science; to accelerate the advancement of knowledge by making it more reliable, efficient, accurate, better understood by society and responsive to societal challenges; and to enable growth and innovation through the reuse of scientific results by all stakeholders at all levels of society, ultimately contributing to the growth and competitiveness of the global society.
Open Science represents a systemic shift from the standard practice of publishing research results only in scientific publications towards sharing and using all available knowledge at an earlier stage of the research process: research data, lab notes and other research processes are made freely available, in digital format and with no limitations, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods. Its implementation provides an opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole, extending the principles of openness to the whole research cycle and fostering sharing and collaboration as early as possible, so that scholarly research becomes more open, global, collaborative, creative and closer to society. The recognition and adoption of open science practices, including policies that increase open access to the scientific literature and encourage data and code sharing, is increasing. Such policies are motivated by ethical, moral and utilitarian arguments, such as the right to access the digital research literature, transparency in academic practice, and reproducibility.
Researchers also adopt open science practices for their own advantage: open practices can increase citations, attract media attention, potential collaborators, career opportunities, donations and funding. Evidence from open data suggests that open science practices provide significant benefits to researchers in research creation, collaboration, communication, and evaluation relative to traditional closed practices. At the same time, open science raises concerns about the rigor of peer review, about funding and career development, and about the sacrifice of author rights; researchers are therefore advised to implement open science within the framework of existing academic evaluation and incentives. Accordingly, open science issues are addressed here in the areas of publishing, funding, collaboration, resource management and sharing, and career development, together with a discussion of open questions and conclusions.

Keywords: open science, open science philosophy, open science research, open science data

Procedia PDF Downloads 118
2017 Control Scheme for Single-Stage Boost Inverter for Grid-Connected Photovoltaic

Authors: Mohammad Reza Ebrahimi, Behnaz Mahdaviani

Abstract:

The increasing use of renewable sources such as photovoltaics is a response to environmental pollution. Because photovoltaic arrays generate power at low voltage, the generated voltage must first be boosted. Distributed generation usually injects its power into the AC grid, so after the voltage is raised an inverter is needed to convert DC power to AC power. Conventionally this requires two converters in series, which increases cost and complexity and lowers efficiency. In this paper, a single-stage inverter is utilized to boost and invert in one stage. Its control is easier, and its initial cost is lower compared to conventional two-stage inverters. A simple control scheme is used to control active power and to minimize the total harmonic distortion (THD) of the injected current. Simulations in MATLAB demonstrate better outputs compared with conventional approaches.

Keywords: maximum power point tracking, boost inverter, control strategy, three phase inverter

Procedia PDF Downloads 355
2016 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. Over the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely these equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type. Better estimation of the SWCC is expected to be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted to evaluate the reliability of SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters in four forms of SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC equations were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type; its predictions were also compatible with the samples evaluated in this study across the full range from low to high soil water content.
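For reference, the Brooks and Corey SWCC can be fitted simply: in the desaturation range the log of effective saturation is linear in log suction, so a log-log fit returns the pore-size index lambda and the air-entry value psi_b. The sketch below uses synthetic sand-like values (with residual and saturated water contents assumed known), not the paper's laboratory data.

```python
import numpy as np

def brooks_corey(psi, theta_r, theta_s, psi_b, lam):
    """Brooks-Corey SWCC: theta(psi) with air-entry suction psi_b and
    pore-size index lam; fully saturated below psi_b."""
    se = np.where(psi > psi_b, (psi_b / psi) ** lam, 1.0)
    return theta_r + (theta_s - theta_r) * se

# Synthetic sand-like data (illustrative, not laboratory measurements).
theta_r, theta_s = 0.05, 0.40
psi = np.logspace(0.5, 3, 20)                 # suctions above psi_b (kPa)
theta = brooks_corey(psi, theta_r, theta_s, 2.0, 0.9)

# In the desaturation range: log Se = lam*log(psi_b) - lam*log(psi),
# so a straight-line fit in log-log space recovers both parameters.
se = (theta - theta_r) / (theta_s - theta_r)
slope, intercept = np.polyfit(np.log(psi), np.log(se), 1)
lam_hat = -slope
psi_b_hat = np.exp(intercept / lam_hat)
```

On noise-free data the fit recovers lambda = 0.9 and psi_b = 2.0 exactly; with real measurements the residuals of this log-log fit are one basis for the statistical comparison of candidate models.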

Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering

Procedia PDF Downloads 326
2015 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant

Authors: Michael Smalenberger

Abstract:

Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate the parameters necessary for the artificial intelligence component and the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and the probability of mastery, none directly and reliably asks the user to self-assess them. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course; detailed transaction-level observations were recorded, and users were also routinely asked direct questions leading to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using common estimation techniques. Our findings show that such self-assessments are particularly relevant in the early stages of ITS usage, while transaction-level data are scant. Once a user’s transaction-level data become available after sufficient ITS usage, they can replace the self-assessments, eliminating the identifiability problem in BKT. We discuss how these findings relate to the number of exercises necessary for mastery of a knowledge component, the associated implications for learning curves, and the relevance to instruction time.
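The BKT update these parameters feed into is compact: a Bayesian posterior on mastery given the observed response (using the guess and slip probabilities), followed by the learning transition. A minimal sketch with hypothetical parameter values, where a self-assessment could supply the prior when observations are scant:

```python
def bkt_update(p_mastery, correct, guess, slip, learn):
    """One Bayesian Knowledge Tracing step: posterior P(mastered | response),
    then the learning transition P(L_t) = posterior + (1 - posterior) * learn."""
    if correct:
        num = p_mastery * (1.0 - slip)
        den = num + (1.0 - p_mastery) * guess
    else:
        num = p_mastery * slip
        den = num + (1.0 - p_mastery) * (1.0 - guess)
    posterior = num / den
    return posterior + (1.0 - posterior) * learn

# Hypothetical parameters; p could be seeded from a self-assessment
# early on, as the study suggests, instead of a fitted P(L0).
p = 0.3                                  # prior probability of mastery
for obs in [True, True, False, True]:    # observed correct/incorrect responses
    p = bkt_update(p, obs, guess=0.2, slip=0.1, learn=0.15)
```

The incorrect response pulls the mastery estimate down before the final correct response raises it again, which is the behavior the guess/slip parameters control.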

Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation

Procedia PDF Downloads 163
2014 Improving Data Completeness and Timely Reporting: A Joint Collaborative Effort between Partners in Health and Ministry of Health in Remote Areas, Neno District, Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Moses Banda Aron, Julia Higgins, Manuel Mulwafu, Kondwani Mpinga, Mwayi Chunga, Grace Momba, Enock Ndarama, Dickson Sumphi, Atupere Phiri, Fabien Munyaneza

Abstract:

Background: Data is key to supporting health service delivery, as stakeholders, including NGOs, rely on it for effective service delivery, decision-making, and system strengthening. Several studies have raised debate on the quality of data from national health management information systems (HMIS) in sub-Saharan Africa. Poor quality limits the utilization of data in resource-limited settings, which already struggle to meet standards set by the World Health Organization (WHO). We aimed to evaluate data quality improvement of the Neno district HMIS over a 4-year period (2018 to 2021) following quarterly data reviews introduced in January 2020 by the district health management team and Partners In Health. Methods: An exploratory mixed-methods design was used to examine report rates, followed by in-depth interviews using key informant interviews (KIIs) and focus group discussions (FGDs). We used the WHO desk-review module to assess the quality of HMIS data in the Neno district captured from 2018 to 2021. The metrics assessed were the completeness and timeliness of 34 reports. Completeness was measured as the percentage of non-missing reports, and timeliness as the percentage of reports submitted by the expected deadline. We computed t-tests and recorded p-values, summaries, and percentage changes using R and Excel 2016, and analyzed demographics for the key informant interviews in Power BI. We developed themes from 7 FGDs and 11 KIIs using Dedoose software, from which we extracted healthcare workers' perceptions, interventions implemented, and improvement suggestions. The study was reviewed and approved by the Malawi National Health Science Research Committee (IRB: 22/02/2866). Results: Overall, the average reporting completeness rate was 83.4% before and 98.1% after the intervention, while timeliness was 68.1% and 76.4%, respectively. Completeness of reports increased over time: 78.8% in 2018, 88% in 2019, 96.3% in 2020 and 99.9% in 2021 (p < 0.004).
The trend for timeliness had been declining until 2021, when it improved: 68.4% in 2018, 68.3% in 2019, 67.1% in 2020 and 81% in 2021 (p < 0.279). Comparing 2021 reporting rates to the mean of the three preceding years, completeness increased from 88% to 99%, while timeliness increased from 68% to 81%. Sixty-five percent of reports consistently met the national standard of 90% or above in completeness, but only 24% did so in timeliness; thirty-two percent of reports met the national standard. Only 9% improved on both completeness and timeliness: the cervical cancer, nutrition care support and treatment, and youth-friendly health services reports. Fifty percent of reports did not improve to the standard in timeliness, and only one did not in completeness. Factors associated with improvement included better communication and reminders through internal channels, data quality assessments, checks, and reviews; decentralizing data entry to the facility level was suggested to improve timeliness. Conclusion: Findings suggest that data quality in the district HMIS has improved following collaborative efforts. We recommend maintaining such initiatives to identify remaining quality gaps, and that results be shared publicly to support increased use of data. These results can inform the Ministry of Health and its partners on interventions and advise initiatives for improving data quality.
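The before/after comparison reduces to a two-sample t-test on yearly rates. The sketch below computes a plain Welch statistic on the yearly completeness figures quoted above; it is a simplification for illustration, not the study's exact analysis.

```python
import numpy as np

# Yearly completeness rates (%) from the abstract: two pre-intervention
# years (2018-2019) and two post-intervention years (2020-2021).
before = np.array([78.8, 88.0])
after = np.array([96.3, 99.9])

def welch_t(a, b):
    """Welch two-sample t statistic for the difference in means b - a."""
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (b.mean() - a.mean()) / np.sqrt(va + vb)

t_stat = welch_t(before, after)
```

With only two observations per group the degrees of freedom are minimal, which is why the study works from monthly report-level data rather than yearly aggregates.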

Keywords: data quality, data utilization, HMIS, collaboration, completeness, timeliness, decision-making

Procedia PDF Downloads 71