Search results for: radial basis function neural networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11426

2246 SIRT1 Gene Polymorphisms and Its Protein Level in Colorectal Cancer

Authors: Olfat Shaker, Miriam Wadie, Reham Ali, Ayman Yosry

Abstract:

Colorectal cancer (CRC) is a major cause of mortality and morbidity and accounts for over 9% of cancer incidence worldwide. The silent information regulator 2 homolog 1 (SIRT1) gene product is located in the nucleus and exerts its effects via modulation of histone and non-histone targets, functioning in the cell through histone deacetylase (HDAC) and/or adenosine diphosphate ribosyl transferase (ADPRT) enzymatic activity. The aim of this work was to study the relationship between SIRT1 polymorphisms and its protein level in colorectal cancer patients in comparison to controls. The study included two groups: thirty healthy subjects (control group) and one hundred CRC patients. For all subjects, the serum SIRT1 level was measured by ELISA, and the gene polymorphisms rs12778366, rs375891, and rs3740051 were detected by real-time PCR. For CRC patients, clinical data were collected (tumor size, site, and grade, as well as obesity). CRC patients showed a highly significant increase in the mean serum SIRT1 level compared to the control group (P<0.001). The mean serum SIRT1 level was significantly higher in patients with tumor size ≥5 cm than in those with tumors <5 cm (P<0.05). In CRC patients, the frequency of the T allele of rs12778366 was significantly lower than in controls, while the CC genotype and C allele of rs375891 were significantly higher than in the control group. Among CRC patients, the CC genotype of rs12778366 was found in 75% of rectosigmoid tumors and 25% of tumors of the cecum and ascending colon. According to tumor size, the percentage of the CC genotype was 87.5% for tumors ≥5 cm. Conclusion: the serum SIRT1 level, the T allele of rs12778366, and the C allele of rs375891 can be used as diagnostic markers for CRC patients.

Keywords: CRC, SIRT1, polymorphisms, ELISA

Procedia PDF Downloads 206
2245 Human Vibrotactile Discrimination Thresholds for Simultaneous and Sequential Stimuli

Authors: Joanna Maj

Abstract:

Body machine interfaces (BMIs) afford users a non-invasive way to coordinate movement. Vibrotactile stimulation has been incorporated into BMIs to provide real-time feedback and guide movement control, to the benefit of patients with cognitive deficits, such as stroke survivors. To advance research in this area, we examined vibrotactile discrimination thresholds at four body locations to determine suitable application sites for future multi-channel BMIs using vibration cues to guide movement planning and control. Twelve healthy adults had a pair of small vibrators (tactors) affixed to the skin at each location: forearm, shoulders, torso, and knee. A "standard" stimulus (186 Hz; 750 ms) and "probe" stimuli (11 levels ranging from 100 Hz to 235 Hz; 750 ms) were delivered. Probe and standard stimulus pairs could occur sequentially or simultaneously (timing). Participants verbally indicated which stimulus felt more intense. Stimulus order was counterbalanced across tactors and body locations. The probability that each probe stimulus felt more intense than the standard stimulus was computed and fit with a cumulative Gaussian function; the discrimination threshold was defined as one standard deviation of the underlying distribution. Threshold magnitudes depended on stimulus timing and location. Discrimination thresholds were better for stimuli applied sequentially vs. simultaneously at the torso as well as the knee. Thresholds were small (better) and relatively insensitive to timing differences for vibrations applied at the shoulder. BMI applications requiring multiple channels of simultaneous vibrotactile stimulation should therefore consider the shoulder as a deployment site.
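As a rough illustration of the threshold-estimation step described above, the sketch below (not the study's code) fits a cumulative Gaussian psychometric function to probe-judgement proportions with SciPy and reads the discrimination threshold off as one standard deviation of the fitted distribution; the frequencies and proportions are invented for illustration.

```python
# Minimal sketch: fit a cumulative Gaussian to "probe felt more intense" proportions
# and report the discrimination threshold as one SD of the fitted distribution.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical probe frequencies (Hz) and judgement proportions (placeholder data)
probe_hz = np.array([100, 115, 130, 145, 160, 175, 186, 200, 210, 225, 235])
p_probe_more_intense = np.array([0.05, 0.08, 0.15, 0.25, 0.38, 0.47,
                                 0.52, 0.68, 0.79, 0.90, 0.95])

def cumulative_gaussian(x, mu, sigma):
    """Psychometric function: probability the probe feels more intense than the standard."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Fit the point of subjective equality (mu) and the threshold (sigma)
(mu_hat, sigma_hat), _ = curve_fit(cumulative_gaussian, probe_hz,
                                   p_probe_more_intense, p0=[186.0, 30.0])
print(f"PSE = {mu_hat:.1f} Hz, discrimination threshold = {sigma_hat:.1f} Hz")
```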

Keywords: electromyography, electromyogram, neuromuscular disorders, biomedical instrumentation, controls engineering

Procedia PDF Downloads 57
2244 Optimal Operation of Bakhtiari and Roudbar Dam Using Differential Evolution Algorithms

Authors: Ramin Mansouri

Abstract:

Because river discharge regimes rarely coincide with water demands, one of the best ways to use water resources is to regulate the natural flow of rivers and supply water needs by constructing dams. In the optimal utilization of reservoirs, considering multiple important goals simultaneously is of very high importance. To analyze this method, statistical data for the Bakhtiari and Roudbar dams over 46 years (1955 to 2001) were used. Initially, an appropriate objective function was specified, and the rule curve was developed using the differential evolution (DE) algorithm. The rule-curve operation policy was then compared to the standard operation policy. The proposed method distributed the shortage over the whole year, so the lowest damage was inflicted on the system. The standard deviation of the monthly shortfall in each year was lower with the proposed algorithm than with the other methods. The results show that median values of the F and Cr coefficients provide the optimum situation and prevent the DE algorithm from being trapped in a local optimum. The best values are 0.6 and 0.5 for the F and Cr coefficients, respectively. After finding the best combination of F and Cr values, the effect of population size was examined. For this purpose, populations of 4, 25, 50, 100, 500, and 1000 members were studied for two generation counts (G = 50 and 100). The results indicate that a generation number of 200 is suitable for the optimization. Run time increases almost linearly with population size, which shows the effect of population size on the algorithm's runtime; hence, specifying a suitable population size to obtain optimal results is very important. The standard operation policy had a better reversibility percentage but inflicted severe vulnerability on the system. The results obtained in years of low rainfall were very good compared to the other comparative methods.
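The differential evolution settings reported above (F = 0.6, Cr = 0.5) map directly onto SciPy's implementation. The sketch below is illustrative only: the demand, inflow, and squared-deficit objective are placeholders, not the authors' reservoir operation model.

```python
# Illustrative use of differential evolution with mutation F = 0.6 and crossover Cr = 0.5.
import numpy as np
from scipy.optimize import differential_evolution

demand = np.array([55, 60, 70, 90, 110, 130, 140, 135, 100, 80, 65, 58])  # hypothetical monthly demand
inflow = np.array([80, 95, 120, 140, 100, 60, 40, 35, 50, 70, 85, 90])    # hypothetical monthly inflow

def shortage_cost(releases):
    """Sum of squared monthly deficits, penalising concentrated shortages."""
    deficit = np.maximum(demand - releases, 0.0)
    return float(np.sum(deficit ** 2))

bounds = [(0.0, float(q)) for q in inflow]   # simplification: release cannot exceed inflow
result = differential_evolution(shortage_cost, bounds,
                                mutation=0.6, recombination=0.5,
                                popsize=25, maxiter=200, seed=1)
print("Optimised monthly releases:", np.round(result.x, 1))
```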

Keywords: reservoirs, differential evolution, dam, optimal operation

Procedia PDF Downloads 68
2243 Photocaged Carbohydrates: Versatile Tools for Biotechnological Applications

Authors: Claus Bier, Dennis Binder, Alexander Gruenberger, Dagmar Drobietz, Dietrich Kohlheyer, Anita Loeschcke, Karl Erich Jaeger, Thomas Drepper, Joerg Pietruszka

Abstract:

Light-absorbing chromophoric systems are important optogenetic tools for biotechnological and biophysical investigations. Processes such as fluorescence or photolysis can be triggered by the light absorption of chromophores; these play a central role in the life sciences. Photocaged compounds belong to such chromophoric systems. Their photo-labile protecting groups enable them to release biologically active substances with high temporal and spatial resolution. The properties of photocaged compounds are specified by the characteristics of the caging group as well as the characteristics of the linked effector molecule. In our research, we work with different types of photo-labile protecting groups and various effector molecules, giving us possible access to a large library of caged compounds. Depending on the caged effector molecule, a nearly limitless number of biological systems can be directed. Our main interest focuses on photocaging carbohydrates (e.g., arabinose) and their derivatives as effector molecules. Based on the resulting photocaged compounds, precisely controlled photoinduced gene expression will give us access to studies of numerous biotechnological and synthetic-biology applications. It could be shown that the regulation of gene expression via light is possible with photocaged carbohydrates, achieving higher-order control over these processes. With the one-step cleavable photocaged carbohydrate, homogeneous expression was achieved in comparison to free carbohydrates.

Keywords: bacterial gene expression, biotechnology, caged compounds, carbohydrates, optogenetics, photo-removable protecting group

Procedia PDF Downloads 213
2242 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia

Authors: Zeinu Ahmed Rabba, Derek D Stretch

Abstract:

Remote sensing contributes valuable information to streamflow estimates. Usually, streamflow is directly measured through ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits water resources management and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means of acquiring such information. This paper discusses the application of remotely sensed rainfall data for streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical measures (Bias, R², NS, and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to the bias-unadjusted ones. The SBPPs with no bias adjustment tend to overestimate (high Bias and high RMSE) the extreme precipitation events and the corresponding simulated streamflow outputs, particularly during wet months (June-September), and to underestimate the streamflow prediction over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of the SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that the simulated streamflow using the gauged rainfall is superior to that obtained from remotely sensed rainfall products, including bias-adjusted ones.
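The four evaluation statistics named above (Bias, R², NS, and RMSE) can be computed in a few lines. The sketch below uses placeholder arrays rather than Gilgel Ghibe data.

```python
# Sketch of the four skill statistics for simulated vs. observed daily streamflow.
import numpy as np

def evaluate(simulated, observed):
    sim, obs = np.asarray(simulated, float), np.asarray(observed, float)
    bias = np.mean(sim - obs)                                   # mean error
    rmse = np.sqrt(np.mean((sim - obs) ** 2))                   # root mean square error
    r2 = np.corrcoef(sim, obs)[0, 1] ** 2                       # coefficient of determination
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
    return {"Bias": bias, "RMSE": rmse, "R2": r2, "NS": nse}

observed = np.array([12.0, 15.5, 30.2, 80.4, 120.7, 95.3, 40.1, 22.8])   # placeholder flows (m3/s)
simulated = np.array([10.5, 16.0, 28.7, 90.1, 110.2, 99.0, 45.3, 20.4])
print(evaluate(simulated, observed))
```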

Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase

Procedia PDF Downloads 269
2241 Concentration Conditions of Industrially Valuable Accumulations of Gold Ore Mineralization of the Tulallar Ore-Bearing Structure

Authors: Narmina Ismayilova, Shamil Zabitov, Fuad Askerzadeh, Raqif Seyfullayev

Abstract:

The Tulallar volcano-tectonic structure is located in the conjugation zone of the Gekgel horst-uplift and the Dashkesan and Agzhakend synclinoria. Regionally, these geological structures are an integral part of the Lok-Karabakh island arc system. The Tulallar field is represented by three areas (Central, East, West). The area of the ore field is located within a partially eroded oblong volcano-tectonic depression. In the central part, the core is divided by the deep Tulallar-Chiragdara-Toganalinsky fault, together with arcuate fragments of the ring structure, into three blocks (East, Central, and West), within which the corresponding areas of the Tulallar field are located. In general, for the deposit, the position of both the ore-bearing vein zones and the ore-bearing blocks is controlled by fractures of two systems, of sub-latitudinal and near-meridional orientation. Mineralization of gold-sulfide ores is confined to these zones of disturbance. The zones have a northwestern and northeastern (near-meridional) strike with a steep dip (70-85°) to the southwest and southeast. The average thickness of the zones is 35 m; they are traced for 2.5 km along the strike and 500 m along the dip. In general, for the indicated thickness, the zones contain an average of 1.56 ppm Au; however, areas enriched in noble metal are distinguished within them. The zones are complicated by post-ore fault tectonics. Gold mineralization is localized in the Kimmeridgian volcanics of andesite-basalt porphyritic composition and their vitrolithoclastic and agglomerate tuffs and tuff breccias. For the central part of the Tulallar ore field, a map of geochemical anomalies was built on the basis of analyses carried out in an international laboratory. The total gold content ranges from 0.1 to 5 g/t and in some places exceeds 5 g/t. The highest gold content is observed in the monoquartz facies among the secondary quartzites with quartz veins. The lowest gold content appeared in the quartz-kaolin facies. Anomalous gold values are also located in the upper parts of the quartz veins. As a result, an en-echelon arrangement of anomalous gold values along the strike and dip was revealed.

Keywords: geochemical anomaly, gold deposit, mineralization, Tulallar

Procedia PDF Downloads 181
2240 Innovation in the Provision of Medical Services in the Field of Qualified Sports and Services Related to the Therapy of Metabolism Disorders and the Treatment of Obesity

Authors: Jerzy Slowik, Elzbieta Grochowska-Niedworok

Abstract:

The analysis of market needs and trends in both treatment and prophylaxis shows a growing need to implement comprehensive solutions that enable safe contact between beneficiaries and a therapeutic and diagnostic support group. Based on an evaluation of the medical and sports services market, projects co-financed by the ERDF (EFRR) were implemented under the Regional Operational Program of the Silesian Voivodeship for 2014-2020 in the form of comprehensive care systems using IT tools: the SFAO 1.0 system (Support for the Fight Against Obesity; project number WND-RPSL.01.02.00-24-06EA/16) for patients treated for obesity and metabolism disorders, and the SK (qualified sports) system (project number WND-RPSL.01.02.00-24-0630/17-002) for competitors in qualified sports. The service provided in accordance with SFAO 1.0 has shown a wide range of therapy possibilities, from monitoring the body's reactions during sports activities of healthy people to remote care for sick patients. As a result of introducing the innovative service, it was possible to increase the effectiveness of therapy, manifested in a 10% reduction of the starting doses of drugs, improvement of the efficiency of the respiratory and circulatory systems, and a 10% increase in bone density. Innovation in the provision of medical services in the field of qualified sports (SK) was a response to the needs of athletes and their parents, coaches, physiotherapists, dieticians, and doctors who take care of people actively practicing qualified sports. The creation of the platform made it possible to carry out the constant monitoring necessary both for the proper training process and for control over the health of patients. Monitoring of the patient's health by a team of specialists in various fields allows for proper targeting of the treatment and training process due to the increased availability of medical counseling. Specialists taking care of the patient can provide additional advice and modify the patient's medical treatment on an ongoing basis, which is why we are dealing with a holistic approach.

Keywords: innovation of medical services, sport, obesity

Procedia PDF Downloads 118
2239 Novel Uses of Discarded Work Rolls of Cold Rolling Mills in Hot Strip Mill of Tata Steel India

Authors: Uday Shanker Goel, Vinay Vasant Mahashabde, Biswajit Ghosh, Arvind Jha, Amit Kumar, Sanjay Kumar Patel, Uma Shanker Pattanaik, Vinit Kumar Shah, Chaitanya Bhanu

Abstract:

Pinch rolls of hot mills must possess wear resistance, thermal stability, high thermal conductivity, and through hardness. Conventionally, pinch rolls have been procured either new or refurbished. Discarded work rolls from the Cold Mill were taken and machined in-house at Tata Steel to be used subsequently as the bottom pinch rolls of the Hot Mill. The hardness of the scrapped work rolls from CRM is close to 55 HRC, and the typical composition is C 0.8%, Mn 0.40%, Si 0.40%, Cr 3.5%, Mo 0.5%, and V 0.1%. The innovation was the use of a roll which would otherwise have been discarded as scrap; it also meant using a roll with better wear and heat resistance. In a conventional pinch roll (hardness 50 HRC and typical chemistry C 10%, Mo+Co+V+Nb ~5%), pick-up is a condition whereby foreign material becomes adhered to the surface of the pinch roll during service; the foreign material is usually adhered metal from the product being rolled. The main attributes of weld-overlay rolls are wear resistance and crack resistance. However, a weld-overlay roll has a strong tendency for strip pick-up, particularly in the area of bead overlap. Its greatest disadvantage is the depth of the weld deposit, which is less than half of the usable shell thickness in most mills. Because of this, the stainless rolls require re-welding on a routine basis. By providing a significantly cheaper, in-house, and more robust alternative to the existing bottom pinch rolls, this innovation greatly reduces the burden on the roll shop. Pinch rolls no longer have to be sent outside Jamshedpur for refurbishment, nor do new ones have to be procured. Scrapped rolls from the adjacent Cold Mill are procured and sent for machining to the Machine Shop inside the Tata Steel works in Jamshedpur, which is far more convenient than the older methodology. The idea is also being deployed at the other hot mills of Tata Steel. Multiple campaigns have been tried out at both down coilers of the Hot Strip Mill, with significantly lower wear.

Keywords: hot rolling flat, cold mill work roll, hot strip pinch roll, strip surface

Procedia PDF Downloads 114
2238 Voice of Customer: Mining Customers' Reviews on On-Line Car Community

Authors: Kim Dongwon, Yu Songjin

Abstract:

This study identifies the business value of VOC (Voice of the Customer). Precisely, we intend to demonstrate how much the negative and positive sentiment of VOC influences car sales market share in the United States. We extract seven emotions (sadness, shame, anger, fear, frustration, delight, and satisfaction) from the VOC data, 23,204 opinions that had been posted on a car-related on-line community from 2007 to 2009 (part of a data collection spanning 2007 to 2015), and intend to clarify the correlation between negative and positive sentiment keywords and their contribution to market share. In order to develop a lexicon for each category of negative and positive sentiment, we took advantage of the corpus program AntConc 3.4.1w and the on-line sentiment resource SentiWordNet, and identified the part-of-speech (POS) information of words in the customers' opinions using the part-of-speech tagging function provided by TextAnalysisOnline. For the purpose of the present study, a total of 45,741 customer opinions on 28 car manufacturing companies had been collected, including titles and status information. We conducted an experiment to examine whether the inclusion, frequency, and intensity of terms with negative and positive emotions in each category affect the contribution of customer opinions to vehicle organizations' market share. In the experiment, we statistically verified that there is a correlation between customer opinions containing negative and positive emotions and variation in market share. In particular, "Anger," one of the negative domains, significantly influences car sales market share. The domains "Delight" and "Satisfaction" increased in proportion to growth in market share.
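A hedged sketch of the kind of lexicon-based scoring described above is given below: it POS-tags an opinion sentence with NLTK and looks up SentiWordNet polarity scores. This is not the authors' pipeline (they used AntConc and TextAnalysisOnline), and it requires the NLTK data packages noted in the comments.

```python
# Hedged sketch: average (positive - negative) SentiWordNet score over scorable tokens.
import nltk
from nltk.corpus import sentiwordnet as swn

# Required corpora (download once):
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
# nltk.download("sentiwordnet"); nltk.download("wordnet")

PENN_TO_SWN = {"J": "a", "N": "n", "R": "r", "V": "v"}   # map Penn tags to SentiWordNet POS codes

def sentence_polarity(sentence):
    """Average SentiWordNet polarity (first synset only) across taggable words."""
    scores = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
        swn_pos = PENN_TO_SWN.get(tag[:1])
        if swn_pos is None:
            continue
        synsets = list(swn.senti_synsets(word.lower(), swn_pos))
        if synsets:
            first = synsets[0]
            scores.append(first.pos_score() - first.neg_score())
    return sum(scores) / len(scores) if scores else 0.0

print(sentence_polarity("The brakes are terrible but the seats feel great"))
```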

Keywords: data mining, opinion mining, sentiment analysis, VOC

Procedia PDF Downloads 206
2237 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya

Authors: Abdel Rahman Khider Hassan

Abstract:

This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of intrinsic location parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included lithology, slope geometry (slope inclination, aspect, elevation, and curvature), and land use/land cover. The landslide-triggering factors included rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas that are susceptible to landslides. No published study on landslides has been obtained for this area. Thus, digital datasets of the above spatial parameters were conveniently acquired, stored, manipulated, and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in an ArcGIS 10.2.2 environment). The landslide hazard zonation is deduced by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid together to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total catchment surface of 3200 km², most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazards, whilst about 13% (416 km²) falls under high hazards. Only 1.0% (32 km²) of the catchment displays very high landslide hazards, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazards. This result confirms the importance of steep slope angles, lithology, vegetation land cover, and slope orientation (aspect) as the major determining factors of slope failures. The information provided by the produced map of landslide hazard zonation (LHZ) could lay the basis for decision making as well as for mitigation and applications aimed at avoiding the potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
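A minimal sketch of the weighted grid-overlay step is shown below. It is not the authors' ArcGIS workflow: the factor rasters, hazard scores, and weights are invented to illustrate how reclassified grids are combined and sliced into hazard classes.

```python
# Weighted overlay of reclassified factor rasters into a landslide hazard index.
import numpy as np

# Hypothetical 4x4 factor rasters already reclassified to hazard scores 1-4
slope      = np.array([[4, 3, 2, 1], [3, 3, 2, 1], [2, 2, 1, 1], [1, 1, 1, 1]])
lithology  = np.array([[3, 3, 2, 2], [3, 2, 2, 1], [2, 2, 1, 1], [2, 1, 1, 1]])
land_cover = np.array([[2, 2, 3, 3], [2, 2, 3, 3], [1, 2, 2, 3], [1, 1, 2, 2]])
rainfall   = np.array([[4, 4, 3, 2], [4, 3, 3, 2], [3, 3, 2, 2], [2, 2, 2, 1]])

# Assumed relative contributions to slope instability (must sum to 1)
weights = {"slope": 0.35, "lithology": 0.25, "land_cover": 0.20, "rainfall": 0.20}

lhz = (weights["slope"] * slope + weights["lithology"] * lithology +
       weights["land_cover"] * land_cover + weights["rainfall"] * rainfall)

# Slice the continuous index into four classes: 0 = low ... 3 = very high hazard
classes = np.digitize(lhz, bins=[1.5, 2.5, 3.5])
print(classes)
```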

Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha

Procedia PDF Downloads 190
2236 Ecological and Health Risk Assessment of the Heavy Metal Contaminant in Surface Soils around Effurun Market

Authors: A. O. Ogunkeyede, D. Amuchi, A. A. Adebayo

Abstract:

Heavy metal contamination in soil has received great attention. Anthropogenic activities such as vehicular emissions, industrial activities, and construction have resulted in elevated concentrations of heavy metals in surface soils. The metal particles can be freed from the surface soil when it is disturbed and re-entrained in the air, which necessitates investigating surface soil in market environments where adults and children are present on a daily basis. This study assesses the concentration of heavy metal pollution and the ecological and health risk factors in surface soil at Effurun market. Eight samples were collected at the household materials (EMH), fish (EMFs), fish and commodities (EMF-C), abattoir (EMA 1 & 2), and fruit (EMF 1 & 2) sections, and at the main road (EMMR). The samples were digested and analyzed in triplicate for the contents of lead (Pb), nickel (Ni), cadmium (Cd), and copper (Cu). The mean concentrations of Pb (112.27 ± 1.12 mg/kg) and Cu (156.14 ± 1.10 mg/kg) were highest in the abattoir section (EMA 1). The mean concentrations of the heavy metals were then used to calculate the ecological and health risks for people within the market. Pb contamination at the EMMR, EMF 2, and EMFs sections was moderate, while Pb showed considerable contamination at the EMH, EMA 1, EMA 2, and EMF-C sections of Effurun market. The ecological risk factor varies between low and moderate pollution for Pb, and EMA 1 has the highest potential ecological risk, which falls within moderate pollution. The hazard quotient results show that the dermal exposure pathway is the main means of heavy metal exposure for the traders, while ingestion is the least important source of exposure for adults. The ingestion results suggest that children around EMA 1 have the highest possible exposure due to hand-to-mouth and object-to-mouth behaviour. The results further show that adults at EMA 1 have the highest exposure to Pb due to inhalation during the burning of cows with tyres that contain Pb and Cu. The carcinogenic risk values of most sections were higher than acceptable values, while Ni at the EMMR, EMF 1 & 2, EMFs, and EMF-C sections was below the acceptable values. The cancer risk for the inhalation exposure pathway for Pb (1.01E+17) shows a more significant level of contamination than all the other sections of the market, suggesting that people working at the abattoir are very prone to cancer risk.
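The ecological risk calculation implied above can be sketched as follows, assuming a Hakanson-style index (contamination factor times toxic-response factor). The background concentrations and the Cd/Ni values are illustrative assumptions; only the Pb and Cu means at EMA 1 come from the abstract.

```python
# Hedged sketch of a Hakanson-style ecological risk index for one sampling section.
TOXIC_RESPONSE = {"Pb": 5, "Ni": 5, "Cd": 30, "Cu": 5}               # commonly used Tr factors
BACKGROUND_MG_KG = {"Pb": 20.0, "Ni": 20.0, "Cd": 0.3, "Cu": 25.0}   # assumed baseline levels

def ecological_risk(measured_mg_kg):
    """Return per-metal risk factors Er and the potential ecological risk index RI."""
    er = {m: TOXIC_RESPONSE[m] * c / BACKGROUND_MG_KG[m] for m, c in measured_mg_kg.items()}
    return er, sum(er.values())

# Pb and Cu means from the abstract (EMA 1); Cd and Ni values are placeholders
abattoir_ema1 = {"Pb": 112.27, "Cu": 156.14, "Cd": 1.2, "Ni": 15.0}
er, ri = ecological_risk(abattoir_ema1)
print("Er per metal:", {m: round(v, 1) for m, v in er.items()}, "RI:", round(ri, 1))
```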

Keywords: carcinogenic, ecological, heavy metal, risk

Procedia PDF Downloads 133
2235 Exploring Environmental, Social, and Governance (ESG) Standards for Space Exploration

Authors: Rachael Sullivan, Joshua Berman

Abstract:

The number of satellites orbiting Earth is now in the thousands. Commercial launches are increasing, and civilians are venturing into the outer reaches of the atmosphere. As the space industry continues to grow and evolve, so too will the demands on resources, the disparities amongst socio-economic groups, and space company governance standards. Beyond ensuring that space operations are compliant with government regulations, export controls, and international sanctions, companies should also keep in mind the impact their operations will have on society and the environment. Those looking to expand their operations into outer space should remain mindful of both the opportunities and the challenges that they could encounter along the way. From commercial launches promoting civilian space travel, like the recent launches from Blue Origin, Virgin Galactic, and SpaceX, to regulatory and policy shifts, the commercial landscape beyond the Earth's atmosphere is evolving. But practices will also have to become sustainable. Through a review and analysis of space industry trends, international government regulations, and empirical data, this research explores how Environmental, Social, and Governance (ESG) reporting and investing will manifest within a fast-changing space industry. Institutions, regulators, investors, and employees are increasingly relying on ESG. Those working in the space industry will be no exception. Companies (or investors) that are already engaging or plan to engage in space operations should consider 1) environmental standards and objectives when tackling space debris and space mining, 2) social standards and objectives when considering how such practices may impact access and opportunities for different socioeconomic groups to the benefits of space exploration, and 3) how decision-making and governing boards will function ethically, equitably, and sustainably as we chart new paths and encounter novel challenges in outer space.

Keywords: climate, environment, ESG, law, outer space, regulation

Procedia PDF Downloads 137
2234 Uncertainty Evaluation of Erosion Volume Measurement Using Coordinate Measuring Machine

Authors: Mohamed Dhouibi, Bogdan Stirbu, Chabotier André, Marc Pirlot

Abstract:

Internal barrel wear is a major factor affecting the performance of small-caliber guns in their different life phases. Wear analysis is, therefore, a very important process for understanding how wear occurs, where it takes place, and how it spreads, with the aim of improving the accuracy and effectiveness of small-caliber weapons. This paper discusses the measurement and analysis of combustion chamber wear for a small-caliber gun using a Coordinate Measuring Machine (CMM). Initially, two different NATO small-caliber guns, 5.56x45 mm and 7.62x51 mm, are considered. A Zeiss Micura Coordinate Measuring Machine (CMM) equipped with the VAST XTR gold high-end sensor is used to measure the inner profile of the two guns every 300-shot cycle. The CMM parameters, such as (i) the measuring force, (ii) the measured points, (iii) the time of masking, and (iv) the scanning velocity, are investigated. In order to ensure minimum measurement error, a statistical analysis is adopted to select a reliable combination of CMM parameters. Next, two measurement strategies are developed to capture the shape and the volume of each gun chamber. A task-specific measurement uncertainty (TSMU) analysis is then carried out for each measurement plan. Different approaches to TSMU evaluation have been proposed in the literature; this paper discusses two of them. The first is the substitution method described in ISO 15530 part 3. This approach is based on the use of calibrated workpieces with a shape and size similar to the measured part. The second is the Monte Carlo simulation method presented in ISO 15530 part 4. Uncertainty evaluation software (UES), also known as the Virtual Coordinate Measuring Machine (VCMM), is utilized in this technique to perform a point-by-point simulation of the measurements. To conclude, a comparison between both approaches is performed. Finally, the results of the measurements are verified through calibrated gauges of several dimensions specially designed for the two barrels. On this basis, an experimental database is developed for further analysis aiming to quantify the relationship between the volume of wear and the muzzle velocity of small-caliber guns.
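A simplified sketch of the Monte Carlo (ISO 15530-4 style) uncertainty evaluation is given below: each measured profile point is perturbed with an assumed probing noise, the chamber volume is recomputed, and the spread of the simulated volumes is taken as the uncertainty. The geometry and noise level are illustrative, not the Zeiss Micura/VCMM configuration.

```python
# Monte Carlo propagation of probing noise into a chamber-volume estimate.
import numpy as np

rng = np.random.default_rng(0)

def chamber_volume(radii_mm, z_step_mm):
    """Approximate volume of a revolved profile as a stack of thin discs."""
    return float(np.sum(np.pi * radii_mm ** 2 * z_step_mm))

nominal_radii = np.linspace(2.85, 2.80, 50)   # hypothetical chamber profile radii (mm)
z_step = 0.5                                  # axial spacing between scanned sections (mm)

volumes = []
for _ in range(10_000):
    noisy = nominal_radii + rng.normal(0.0, 0.0008, nominal_radii.size)  # ~0.8 µm probing noise
    volumes.append(chamber_volume(noisy, z_step))

volumes = np.array(volumes)
print(f"volume = {volumes.mean():.3f} mm^3, expanded uncertainty (k=2) = "
      f"{2 * volumes.std(ddof=1):.4f} mm^3")
```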

Keywords: coordinate measuring machine, measurement uncertainty, erosion and wear volume, small caliber guns

Procedia PDF Downloads 142
2233 Institutional Quality and Tax Compliance: A Cross-Country Regression Evidence

Authors: Debi Konukcu Onal, Tarkan Cavusoglu

Abstract:

In modern societies, the costs of public goods and services are shared through taxes paid by citizens. However, taxation has always been a frictional issue, as tax obligations are perceived to be a financial burden for taxpayers rather than a merit that fulfills the redistribution, regulation, and stabilization functions of the welfare state. The tax compliance literature has evolved into discussing why people still pay taxes in systems with low costs of legal enforcement. Related empirical and theoretical work shows that a wide range of socially oriented behavioral factors can stimulate voluntary compliance, as well as subversive effects. These behavioral motivations are argued to be driven by the self-enforcing rules of informal institutions, either independently or through interactions with the legal orders set by formal institutions. The main focus of this study is to investigate empirically whether institutional particularities have a significant role in explaining the cross-country differences in tax noncompliance levels. A part of the controversy about the driving forces behind tax noncompliance may be attributed to the lack of empirical evidence. Thus, this study aims to fill this gap through regression estimates, which help to trace the link between institutional quality and noncompliance on a cross-country basis. The tax evasion estimates of Buehn and Schneider are used as the proxy measure for tax noncompliance levels. Institutional quality is quantified by three different indicators (percentile ranks of the Worldwide Governance Indicators, ratings of the International Country Risk Guide, and the country ratings of Freedom in the World). Robust Least Squares and Threshold Regression estimates based on a sample of Organization for Economic Co-operation and Development (OECD) countries imply that tax compliance increases with institutional quality. Moreover, a threshold-based asymmetry is detected in the effect of institutional quality on tax noncompliance. That is, the negative effects of tax burdens on compliance are found to be more pronounced in countries with institutional quality below a certain threshold. These findings are robust to all alternative indicators of institutional quality, supporting the significant interaction of societal values with individual taxpayer decisions.
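The two estimation approaches named above can be sketched on synthetic data as follows: a robust least-squares fit with statsmodels and a naive grid search for the institutional-quality threshold at which the tax-burden effect changes. The data-generating process and threshold are invented, not the authors' OECD panel.

```python
# Synthetic-data sketch: robust least squares plus a simple threshold search.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 120
quality = rng.uniform(0, 100, n)                       # institutional quality score
tax_burden = rng.uniform(20, 50, n)                    # tax burden, % of GDP
slope = np.where(quality < 60, 0.6, 0.15)              # stronger effect below an assumed threshold
evasion = 5 + slope * tax_burden - 0.1 * quality + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([tax_burden, quality]))
robust_fit = sm.RLM(evasion, X, M=sm.robust.norms.HuberT()).fit()
print("robust coefficients:", np.round(robust_fit.params, 3))

def ssr_for_threshold(tau):
    """Total sum of squared residuals when the sample is split at quality = tau."""
    total = 0.0
    for mask in (quality < tau, quality >= tau):
        total += float(sm.OLS(evasion[mask], X[mask]).fit().ssr)
    return total

grid = np.arange(30, 90, 1.0)
best_tau = grid[np.argmin([ssr_for_threshold(t) for t in grid])]
print("estimated threshold:", best_tau)
```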

Keywords: institutional quality, OECD economies, tax compliance, tax evasion

Procedia PDF Downloads 123
2232 Assessing the Cumulative Impact of PM₂.₅ Emissions from Power Plants by Using the Hybrid Air Quality Model and Evaluating the Contributing Salient Factor in South Taiwan

Authors: Jackson Simon Lusagalika, Lai Hsin-Chih, Dai Yu-Tung

Abstract:

Particles with an aerodynamic diameter of 2.5 micrometers or less, referred to as fine particulate matter (PM₂.₅), are easily inhaled and can travel deeper into the lungs than other particles in the atmosphere, where they may have detrimental health consequences. In this study, we use a hybrid model that combines CMAQ and AERMOD, as well as initial meteorological fields from the Weather Research and Forecasting (WRF) model, to study the impact of power plant PM₂.₅ emissions in South Taiwan, since the region frequently experiences higher PM₂.₅ levels. The specific date of March 3, 2022, was chosen as a result of a power outage that prompted the bulk of power plants to shut down. It is not ordinarily conceivable anywhere in the world to turn off the power for the sole purpose of doing research; this event involving a power outage and the shutdown of power plants therefore offers a rare occasion to evaluate the impact of air pollution driven by the power sector. Accordingly, four numerical experiments were conducted in the study using Continuous Emission Monitoring System (CEMS) data, assuming that the power plants continued to function normally after the power outage. The hybrid model results revealed that the power plants have a minor impact in the study region. However, we examined the accumulation of PM₂.₅ in the study and discovered that once the vortex at 925 hPa was established and moved to the north of Taiwan's coast, the study region experienced higher observed PM₂.₅ concentrations influenced by meteorological factors. This study recommends that decision-makers take into account not only control techniques, specifically emission reductions, but also the atmospheric and meteorological implications in future investigations.

Keywords: PM₂.₅ concentration, power plants, hybrid air quality model, CEMS, vorticity

Procedia PDF Downloads 67
2231 West Nile Virus in North-Eastern Italy: Overview of Integrated Surveillance Activities

Authors: Laura Amato, Paolo Mulatti, Fabrizio Montarsi, Matteo Mazzucato, Laura Gagliazzo, Michele Brichese, Manlio Palei, Gioia Capelli, Lebana Bonfanti

Abstract:

West Nile virus (WNV) re-emerged in north-eastern Italy in 2008, ten years after its first appearance in Tuscany. In 2009, a national surveillance programme was implemented and then re-modulated in north-eastern Italy in 2011. Here, we present the results of the 2008-2016 surveillance activities in the north-eastern Italian regions, with inferences on the WNV epidemiological trend in the area. The re-modulated surveillance programmes aimed at the early detection of WNV seasonal reactivation by searching for IgM antibodies in horses. In 2013, the surveillance plans were further modified to include a risk-based approach. Spatial analysis techniques, including Bernoulli space-time scan statistics, were applied to the results of the 2010-2012 surveillance on mosquitoes, equines, and humans to identify areas where WNV reactivation was more likely to occur. From 2008 to 2016, resident horses tested positive for anti-WNV antibodies on a yearly basis (503 cases), also in areas where WNV circulation was not detected in mosquito populations. Surveillance activities detected 26 syndromic cases in horses, 102 infected mosquito pools, and WNV in 18 dead wild birds. Human cases were also recurrently detected in the study area during the surveillance period (68 cases of West Nile neuroinvasive disease). The recurrent identification of WNV in animals, mosquitoes, and humans indicates that the virus has likely become endemic in the area. In 2016, findings of WNV positives in horses or mosquitoes were included as triggers for enhancing screening activities in humans. The evolution of the epidemiological situation prompts continuous and accurate surveillance measures. The results of the 2013-2016 surveillance indicate that the risk-based approach was effective in the early detection of seasonal reactivation of WNV, a key factor of the integrated surveillance strategy in endemic areas.

Keywords: arboviruses, horses, Italy, surveillance, West Nile virus, zoonoses

Procedia PDF Downloads 347
2230 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method

Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David

Abstract:

The deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoid, and it is essential in geoid modeling processes. Computing the deflection of the vertical components of a point in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. Using a combined approach for the determination of the deflection of the vertical components provides improved results, but it is labor-intensive without an appropriate method. The least squares method makes use of redundant observations in modeling a given set of problems that obey certain geometric conditions. This research work is aimed at computing the deflection of the vertical components for Owerri West Local Government Area of Imo State using the geometric method as the field technique. In this method, a combination of Global Positioning System (GPS) observations in static mode and precise leveling was utilized: the geodetic coordinates of points established within the study area were determined by GPS observation, and the orthometric heights were obtained through precise leveling. By least squares, using a MATLAB programme, the estimated deflection of the vertical components for the common station was -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were computed. The computed standard errors of the north-south and east-west components were 5.5911e-005 and 1.4965e-004 arc seconds, respectively. Therefore, including the derived deflection of the vertical components in the ellipsoidal model will yield high observational accuracy, since an ellipsoidal model alone is not tenable due to its large observational error in high-quality work. It is therefore important to include the determined deflection of the vertical components for Owerri West Local Government in Imo State, Nigeria.
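A hedged sketch of the least-squares step is shown below: geoid undulations N = h - H (GPS ellipsoidal height minus levelled orthometric height) at several stations are fitted with a plane, and the deflection components are read from the north-south and east-west geoid slopes. The coordinates and undulations are illustrative, not the Owerri West observations.

```python
# Plane fit of geoid undulations by least squares; slopes give the deflection components.
import numpy as np

RAD_TO_ARCSEC = 206264.806

# Hypothetical local coordinates (m) and undulations N = h - H (m)
north = np.array([0.0, 500.0, 1000.0, 250.0, 750.0])
east  = np.array([0.0, 200.0, 400.0, 800.0, 600.0])
undulation = np.array([24.312, 24.310, 24.308, 24.311, 24.309])

# Design matrix for N = N0 + gN*north + gE*east, solved by least squares
A = np.column_stack([np.ones_like(north), north, east])
(n0, g_north, g_east), *_ = np.linalg.lstsq(A, undulation, rcond=None)

xi  = -g_north * RAD_TO_ARCSEC   # north-south deflection component (arc seconds)
eta = -g_east  * RAD_TO_ARCSEC   # east-west deflection component (arc seconds)
print(f"xi = {xi:.4f} arcsec, eta = {eta:.4f} arcsec")
```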

Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height

Procedia PDF Downloads 195
2229 A Literature Study on IoT Based Monitoring System for Smart Agriculture

Authors: Sonu Rana, Jyoti Verma, A. K. Gautam

Abstract:

In most developing countries, such as India, the majority of the population relies heavily on agriculture for their livelihood. The agricultural yield depends heavily on uncertain weather conditions such as the monsoon, on soil fertility, on the availability of irrigation facilities and fertilizers, and on support from the government. The agricultural yield is quite low compared to the effort put in, due to inefficient agricultural facilities and obsolete farming practices on the one hand and a lack of knowledge on the other, and ultimately the agricultural community does not prosper. It is therefore essential for farmers to improve their harvest yield through the acquisition of related data, such as soil condition, temperature, humidity, availability of irrigation facilities, availability of manure, etc., and to adopt smart farming techniques using modern agricultural equipment. Nowadays, using IoT technology in agriculture is the best solution to improve the yield with less effort and lower economic costs. The primary focus of this work is IoT technology in the agricultural field. By using IoT, all the parameters can be monitored by mounting sensors at different places in an agricultural field; the sensors collect real-time data, which can be transmitted by a transmitting device such as an antenna. To improve the system, IoT can interact with other useful systems such as Wireless Sensor Networks. IoT is entering every aspect of life, so the radio frequency spectrum is getting crowded due to the increasing demand for wireless applications. Therefore, the Federal Communications Commission is reallocating spectrum for various wireless applications. An antenna is also an integral part of newly designed IoT devices. The main aim is to propose a new antenna structure for IoT agricultural applications that is compatible with this new unlicensed frequency band. The main focus of this paper is to present work related to these technologies in the agricultural field, along with their challenges and benefits. It can help in understanding the role of data obtained using IoT and communication technologies in the agriculture sector. This will help motivate and educate unskilled farmers to comprehend the insights provided by big-data analytics using smart technology.

Keywords: smart agriculture, IoT, agriculture technology, data analytics, smart technology

Procedia PDF Downloads 102
2228 Determination of Potential Agricultural Lands Using Landsat 8 OLI Images and GIS: Case Study of Gokceada (Imroz) Turkey

Authors: Rahmi Kafadar, Levent Genc

Abstract:

In the present study, the aim was to determine potential agricultural lands (PALs) on Gokceada (Imroz) Island in Canakkale province, Turkey. Seven-band Landsat 8 OLI images acquired on July 12 and August 13, 2013, and their 14-band combination image were used to identify the current Land Use Land Cover (LULC) status. Principal Component Analysis (PCA) was applied to the three Landsat datasets in order to reduce the correlation between the bands. A total of six original and PCA images were classified using a supervised classification method to obtain LULC maps comprising six main classes ("Forest", "Agriculture", "Water Surface", "Residential Area-Bare Soil", "Reforestation" and "Other"). Accuracy assessment was performed by checking the accuracy of 120 randomized points for each LULC map. The best overall accuracy and Kappa statistic values (90.83% and 0.8791, respectively) were found for the PCA image generated from the 14-band combined image, called 3-B/JA. A Digital Elevation Model (DEM) with 15 m spatial resolution (ASTER) was used to take topographical characteristics into account. Soil properties were obtained by digitizing 1:25,000-scale soil maps of the Rural Services Directorate General. Potential Agricultural Lands (PALs) were determined using Geographic Information Systems (GIS). The procedure was applied considering that the "Other" class of the LULC map may be used for agricultural purposes in the future. An overlay analysis was conducted using Slope (S), Land Use Capability Class (LUCC), Other Soil Properties (OSP), and Land Use Capability Sub-Class (SUBC) properties. A total of 901.62 ha within the "Other" class (15798.2 ha) of the LULC map was determined to be PALs. These lands were ranked as "Very Suitable", "Suitable", "Moderate Suitable", and "Low Suitable". It was determined that 8.03 ha were classified as "Very Suitable", 18.59 ha as "Suitable", and 11.44 ha as "Moderate Suitable" for PALs. In addition, 756.56 ha were found to be "Low Suitable". The results obtained from this preliminary study can serve as a basis for further studies.
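The principal-component step described above can be sketched as follows; a random array stands in for the 14-band composite, so the numbers are meaningless, but the reshape, fit, and transform pattern is the same.

```python
# Decorrelate a stacked multi-band image with PCA before supervised classification.
import numpy as np
from sklearn.decomposition import PCA

rows, cols, bands = 200, 200, 14
stack = np.random.default_rng(0).normal(size=(rows, cols, bands))  # placeholder reflectance stack

pixels = stack.reshape(-1, bands)             # one row per pixel
pca = PCA(n_components=3).fit(pixels)         # keep the first three components
pc_image = pca.transform(pixels).reshape(rows, cols, 3)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```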

Keywords: digital elevation model (DEM), geographic information systems (GIS), Gokceada (Imroz), Landsat 8 OLI-TIRS, land use land cover (LULC)

Procedia PDF Downloads 346
2227 Computer Simulation of Hydrogen Superfluidity through Binary Mixing

Authors: Sea Hoon Lim

Abstract:

A superfluid is a fluid of bosons that flows without resistance. In order to be a superfluid, a substance's particles must behave like bosons, yet remain mobile enough to be considered a fluid. Bosons are particles that can occupy the same quantum state at the same time. If bosons are cooled down, the particles all try to occupy the lowest energy state, which is called Bose-Einstein condensation. Boson statistics start to matter once the temperature reaches the critical temperature. For example, when helium reaches its critical temperature of 2.17 K, the liquid density drops and it becomes a superfluid with zero viscosity. However, most materials solidify, and thus do not remain fluids, at temperatures well above the temperature at which they would otherwise become a superfluid. Only a few substances currently known are capable of at once remaining a fluid and manifesting boson statistics. The most well-known of these is helium and its isotopes. Because hydrogen is lighter than helium, and thus expected to manifest Bose statistics at higher temperatures than helium, one might expect hydrogen to also be a superfluid. As of today, however, no one has been able to produce a bulk hydrogen superfluid. The reason why hydrogen has not formed a superfluid is its intermolecular interactions; as a result, hydrogen molecules are much more likely to crystallize than their helium counterparts. The key to creating a hydrogen superfluid is therefore finding a way to reduce the effect of the interactions among hydrogen molecules, postponing solidification to lower temperatures. In this work, we attempt via computer simulation to produce bulk superfluid hydrogen through binary mixing. Binary mixing is a technique of mixing two pure substances in order to avoid crystallization and enhance superfluidity. Our mixture here is KALJ H2. We sample the partition function using Path Integral Monte Carlo (PIMC), which is well-suited for the equilibrium properties of low-temperature bosons and captures not only the statistics but also the dynamics of hydrogen. Via this sampling, we produce a time evolution of the substance and see whether it exhibits superfluid properties.

Keywords: superfluidity, hydrogen, binary mixture, physics

Procedia PDF Downloads 312
2226 Association of Post-Traumatic Stress Disorder with Work Performance amongst Emergency Medical Service Personnel, Karachi, Pakistan

Authors: Salima Kerai, Muhammad Islam, Uzma Khan, Nargis Asad, Junaid Razzak, Omrana Pasha

Abstract:

Background: Pre-hospital care providers are exposed to various kinds of stressors. Their daily exposure to diverse critical and traumatic incidents can lead to stress reactions such as Post-Traumatic Stress Disorder (PTSD). The consequences of PTSD in terms of work loss can be catastrophic because of its compound effect on families, which affects them economically, socially, and emotionally. Therefore, it is critical to assess whether any association exists between PTSD and work performance in an Emergency Medical Service (EMS). Methods: This prospective observational study was carried out at the AMAN EMS in Karachi, Pakistan. EMS personnel were screened for potential PTSD using the Impact of Event Scale-Revised (IES-R). Work performance was assessed on the basis of five variables over a period of 3 months: number of late arrivals to work, number of days absent, number of days sick, adherence to protocol, and a patient satisfaction survey. In order to model count outcomes such as the number of late arrivals, days absent, and days sick, negative binomial regression was used, whereas logistic regression was applied for adherence to protocol and linear regression for patient satisfaction scores. Results: Out of 536 EMS personnel, 525 were found to be eligible, and of these, 518 consented. However, data on 507 were included because 7 left the job during the study period. The mean PTSD score was 24.0 ± 12.2. Weak and insignificant associations were found between PTSD and the work performance measures: number of late arrivals (RRadj 0.99; 95% CI 0.98-1.00), days absent (RRadj 0.98; 95% CI 0.96-0.99), days sick (RRadj 0.99; 95% CI 0.98-1.00), adherence to protocol (ORadj 1.01; 95% CI 0.99-1.04), and patient satisfaction (0.001% score; 95% CI -0.03% to 0.03%). Conclusion: No association was found between PTSD and work performance in the selected EMS population in Karachi, Pakistan. Further studies are needed to explore the phenomenon of resiliency in these populations. Moreover, qualitative work is required to explore perceptions and feelings such as willingness to go to work and readiness to carry out job responsibilities.
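A synthetic-data sketch of the two main model families named above is given below: a negative binomial regression for a count outcome (days absent) and a logistic regression for protocol adherence, each with the IES-R score as the exposure. The simulated data are not the study data; the weak association is built in purely for illustration.

```python
# Synthetic-data sketch: negative binomial and logistic regressions with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
ptsd_score = rng.normal(24.0, 12.2, n).clip(0)          # IES-R style scores
X = sm.add_constant(ptsd_score)

# Count outcome with a deliberately weak association
days_absent = rng.poisson(lam=np.exp(0.3 + 0.001 * ptsd_score))
nb_fit = sm.GLM(days_absent, X, family=sm.families.NegativeBinomial()).fit()
print("rate ratio per IES-R point:", round(float(np.exp(nb_fit.params[1])), 3))

# Binary outcome: adherence to protocol
adherent = rng.binomial(1, p=1 / (1 + np.exp(-(1.0 + 0.01 * ptsd_score))))
logit_fit = sm.Logit(adherent, X).fit(disp=0)
print("odds ratio per IES-R point:", round(float(np.exp(logit_fit.params[1])), 3))
```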

Keywords: trauma, emergency medical service, stress, Pakistan

Procedia PDF Downloads 326
2225 Wireless Gyroscopes for Highly Dynamic Objects

Authors: Dmitry Lukyanov, Sergey Shevchenko, Alexander Kukaev

Abstract:

Modern MEMS gyroscopes have strengthened their position in motion control systems and have led to the creation of tactical-grade sensors (better than 15 deg/h). This was achieved by virtue of the success in micro- and nanotechnology development, cooperation among international experts, and the experience gained in the mass production of MEMS gyros. This production is knowledge-intensive, often unique and, therefore, difficult to develop, especially due to the use of 3D technology. The latter is usually associated with the manufacturing of inertial masses and their elastic suspension, which determines the vibration and shock resistance of gyros. Today, consumers developing highly dynamic objects or objects working under extreme conditions require gyro shock resistance of up to 65,000 g and a measurement range of more than 10,000 deg/s. Such characteristics can be achieved by solid-state gyroscopes (SSG) without inertial masses or elastic suspensions, which, for example, can be constructed with the molecular kinetics of bulk or surface acoustic waves (SAW). The excellent production efficiency of these sensors and a high level of structural integration provide the basis for increased accuracy, size reduction, and a significant drop in total production costs. Existing principles of SAW-based sensors are based on the theory of SAW propagation in rotating coordinate systems. A short introduction to the theory of the gyroscopic (Coriolis) effect in SAW is provided in the report. Nowadays, more and more applications require passive and wireless sensors, and SAW-based gyros provide an opportunity to create them. Several design concepts incorporating reflective delay lines were proposed in recent years but faced some criticism. Still, the concept is promising and is of interest at St. Petersburg Electrotechnical University. Several experimental models were developed and tested to find the minimal configuration of a passive and wireless SAW-based gyro. Structural schemes, potential characteristics, and known limitations are stated in the report. Special attention is dedicated to a novel method of FEM modeling with piezoelectric and gyroscopic effects simultaneously taken into account.

Keywords: FEM simulation, gyroscope, OOFELIE, surface acoustic wave, wireless sensing

Procedia PDF Downloads 359
2224 Physiotherapy Assessment of People with Neurological Conditions in Australia: A National Survey of Clinical Practice

Authors: Jill Garner, Belinda Lange, Sheila Lennon, Maayken van den Berg

Abstract:

Currently, there are approximately one billion people worldwide affected by a neurological condition, many of whom are assessed and treated by a physiotherapist in a variety of settings. There is a lack of consensus in the literature about what is clinically assessed by physiotherapists in people with neurological conditions. This study aimed to explore assessment in people with neurological conditions, including how the health care setting, experience, and therapeutic approach may influence neurological assessment. A national survey targeted Australian physiotherapists who assess adults with neurological conditions as part of their clinical practice. The survey consisted of 39 questions and was distributed to physiotherapists through the Australian Physiotherapy Association and Chief Allied Health Officers across Australia, and advertised on the National Neurological Physiotherapy Facebook page. In total, 395 respondents from all states within Australia consented to the survey. Most respondents were female (85.4%), with a mean (SD) age of 35.7 years. Respondents reported working clinically in acute, outpatient, and community settings. Stroke was the most commonly assessed condition (58.0%). There is variability in the domains assessed by Australian physiotherapists, with common inclusions of balance, muscle strength, gait, falls and safety, function, goal setting, range of movement, pain, coordination, activity tolerance, postural alignment and symmetry, and the upper limb. There is little evidence to support what physiotherapists assess in practice, in different settings, and in different states within Australia, and not enough information to develop a decision tree regarding what is important for assessment in different settings. Further research is needed to explore this area and develop a consensus around best practice.

Keywords: physiotherapy, neurological, assessment, domains

Procedia PDF Downloads 82
2223 Study of Efficiency of Flying Animal Using Computational Simulation

Authors: Ratih Julistina, M. Agoes Moelyadi

Abstract:

Innovation in aviation technology has evolved rapidly over time in pursuit of the most favorable utilization, usually denoted by an efficiency parameter. Nature has always been a source of inspiration, and in this sector many researchers have focused on studying the behavior of flying animals, such as birds, to comprehend the fundamentals. Experimental testing has already been conducted by several researchers to calculate the efficiency by placing the object in a wind tunnel. Hence, computational simulation is needed to confirm the results and provide more visualization, based on the solution of the Reynolds-Averaged Navier-Stokes equations for the unsteady case of time-dependent viscous flow. Models were created by simplifying the real birds as rigid bodies: a hawk, which has a low aspect ratio, and a swift, with a high aspect ratio; subsequently, multi-block structured meshes were generated to capture and calculate the aerodynamic behavior and characteristics. To mimic the downstroke and upstroke motion of bird flight, which produces both lift and thrust, a sinusoidal function is used. Simulations are carried out for various flapping frequencies spanning the actual frequency range of each bird, namely 1 Hz, 2.87 Hz, and 5 Hz for the hawk and 5 Hz, 8.9 Hz, and 13 Hz for the swift, to investigate how frequency affects the efficiency of aerodynamic force production. The results for different flight conditions are also compared with respect to the morphology of each bird. The simulations show that the higher the flapping frequency, the greater the aerodynamic coefficients obtained; on the other hand, the efficiency of thrust production does not follow the same trend. The results are analyzed from velocity and pressure contours and mesh movement to observe the behavior.
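A short sketch of the prescribed sinusoidal flapping kinematics is given below, evaluated at the frequencies listed for each bird; the stroke amplitude is an illustrative assumption, not a value from the study.

```python
# Sinusoidal stroke-angle prescription for the flapping-wing motion.
import numpy as np

def stroke_angle(t, frequency_hz, amplitude_deg=35.0):
    """Flapping (stroke) angle in degrees at time t, assuming a sinusoidal motion."""
    return amplitude_deg * np.sin(2.0 * np.pi * frequency_hz * t)

t = np.linspace(0.0, 1.0, 1000)                       # one second of motion
hawk_frequencies = [1.0, 2.87, 5.0]                   # Hz, from the abstract
swift_frequencies = [5.0, 8.9, 13.0]                  # Hz, from the abstract

for f in hawk_frequencies + swift_frequencies:
    peak_rate = np.max(np.abs(np.gradient(stroke_angle(t, f), t)))
    print(f"f = {f:5.2f} Hz, peak angular rate = {peak_rate:8.1f} deg/s")
```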

Keywords: characteristics of aerodynamic, efficiency, flapping frequency, flapping wing, unsteady simulation

Procedia PDF Downloads 231
2222 Enhancing Cultural Heritage Data Retrieval by Mapping COURAGE to CIDOC Conceptual Reference Model

Authors: Ghazal Faraj, Andras Micsik

Abstract:

The CIDOC Conceptual Reference Model (CRM) is an extensible ontology that provides integrated access to heterogeneous digital datasets. The CIDOC-CRM offers a "semantic glue" intended to promote accessibility to several diverse and dispersed sources of cultural heritage data. That is achieved by providing a formal structure for the implicit and explicit concepts and their relationships in the cultural heritage field. The COURAGE ("Cultural Opposition – Understanding the CultuRal HeritAGE of Dissent in the Former Socialist Countries") project aimed to explore methods of socialist-era cultural resistance during 1950-1990 and was planned to serve as a basis for further narratives and digital humanities (DH) research. The project highlights the diversity of the alternative cultural scenes that flourished in Eastern Europe before 1989. Moreover, the COURAGE dataset is an online RDF-based registry that consists of historical people, organizations, collections, and featured items. To increase the inter-links between different datasets and retrieve more relevant data from various data silos, a shared federated ontology for reconciled data is needed. As a first step towards these goals, a full understanding of the CIDOC CRM ontology (the target ontology), as well as of the COURAGE dataset, was required to start the work. Subsequently, the queries toward the ontology were determined, and a table of equivalent properties from COURAGE and CIDOC CRM was created. Structural diagrams that clarify the mapping process and support query construction are in progress for mapping person, organization, and collection entities to the ontology. Through mapping the COURAGE dataset to the CIDOC-CRM ontology, the dataset will have a common ontological foundation with several other datasets. Therefore, the expected results are: 1) retrieving more detailed data about existing entities, 2) retrieving data on new entities, 3) aligning the COURAGE dataset to a standard vocabulary, and 4) running distributed SPARQL queries over several CIDOC-CRM datasets and testing the potential of distributed query answering using SPARQL. The next plan is to map CIDOC-CRM to other upper-level ontologies or large datasets (e.g., DBpedia, Wikidata) and address similar questions on a wide variety of knowledge bases.
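A hedged sketch of the kind of query the mapping is meant to enable is shown below: once COURAGE person records are expressed as CIDOC-CRM E21_Person instances, a standard SPARQL query can retrieve them from any CRM-aligned store. The sample triples and the example.org namespace are invented for illustration.

```python
# Build a tiny CRM-aligned graph with rdflib and query it with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/courage/")        # placeholder resource namespace

g = Graph()
g.bind("crm", CRM)
person = EX["person/42"]
g.add((person, RDF.type, CRM.E21_Person))
g.add((person, CRM.P1_is_identified_by, Literal("Sample Person")))

query = """
PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
SELECT ?person ?name WHERE {
    ?person a crm:E21_Person ;
            crm:P1_is_identified_by ?name .
}
"""
for row in g.query(query):
    print(row.person, row.name)
```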

Keywords: CIDOC CRM, cultural heritage data, COURAGE dataset, ontology alignment

Procedia PDF Downloads 137
2221 Creating a Rehabilitation Product as an Example of Design Management

Authors: K. Caban-Piaskowska

Abstract:

The aim of the article is to show how the role of the designer has changed from the point of view of human resources management and thanks to the increased importance of design management, and to present how a rehabilitation product, through a technological approach to design, becomes a universal product. Designing for the disabled is a largely unexplored area of the design market, most often because it is associated with devices that support rehabilitation; as a consequence, such realizations reach a limited group of users and are not very attractive to designers. There is, however, a relation between applying modern design to rehabilitation devices and increasing the efficiency of treatment and physiotherapy. Using modern technology can also have marketing significance: rehabilitation products designed and produced in a modern way give the impression that experts and professionals are involved in the life of the user, the patient. To illustrate the problem presented above, i.e. creating a rehabilitation product as an example of design management, the case study method was used. The case analysis is based on an interview conducted by the author with a designer who, between 2012 and 2017, took part in meetings with people undergoing rehabilitation and their physiotherapists and created universal products in Poland. Products for the disabled are usually created by engineers and constructors, and although they are practically indestructible in construction, they often resemble old torture devices. Such an image clearly indicates that this is a promising niche for designers and emphasizes the need to make these products more attractive and innovative. Products for the disabled cannot be limited to rehabilitation equipment only, e.g. wheelchairs or standing frames. Introducing the idea of universal design can significantly broaden the circle of recipients of designed everyday-use items to include disabled people, and fulfilling these criteria will provide an advantage on a competitive market. This is possible thanks to the use of the design management concept in the functioning of an organization: modern technologies and materials are used in production, and the changed role of the designer broadens the circle of users, since designing for wide use makes it possible for people with various needs to use the product. Moreover, introducing rehabilitation functions into everyday-use items can become an innovative accent in design. In the reality of the market, each group of users can and should be treated as a design problem and a task to be realized.

Keywords: design management, innovation, rehabilitation product, universal product

Procedia PDF Downloads 186
2220 Self-Tuning Power System Stabilizer Based on Recursive Least Square Identification and Linear Quadratic Regulator

Authors: J. Ritonja

Abstract:

Available commercial applications of power system stabilizers assure optimal damping of the synchronous generator's oscillations only in a small part of the operating range. The parameters of the power system stabilizer are usually tuned for a selected operating point. Extensive variations of the synchronous generator's operation result in changed dynamic characteristics. This is the reason why a power system stabilizer tuned for the nominal operating point does not provide the preferred damping over the whole operating area. The small-signal stability and the transient stability of synchronous generators have represented an attractive problem for testing different concepts of modern control theory. Of all these methods, adaptive control has proved to be the most suitable for the design of power system stabilizers, and it has been used here in order to assure optimal damping through the entire operating range of the synchronous generator. The use of adaptive control is possible because the loading variations, and consequently the variations of the synchronous generator's dynamic characteristics, are in most cases essentially slower than the adaptation mechanism. The paper shows the development and application of a self-tuning power system stabilizer based on the recursive least squares identification method and a linear quadratic regulator. The identification method is used to calculate the parameters of the Heffron-Phillips model of the synchronous generator. On the basis of the calculated parameters of the synchronous generator's mathematical model, the synthesis of the linear quadratic regulator is carried out. The identification and the synthesis are implemented on-line, so the self-tuning power system stabilizer adapts to the different operating conditions. The purpose of this paper is to contribute to the development of more effective power system stabilizers that could replace the currently used linear stabilizers. The presented self-tuning power system stabilizer makes the tuning of the controller parameters easier and assures a damping improvement over the complete operating range. The results of simulations and experiments show an essential improvement of the synchronous generator's damping and of power system stability.
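
A minimal sketch of the two building blocks described above, a recursive least squares update and an LQR gain computed from an identified model, is given below in Python; it is not the authors' implementation, and the forgetting factor, the regression example, and the A and B matrices are hypothetical placeholders rather than Heffron-Phillips parameters.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def rls_update(theta, P, phi, y, forgetting=0.98):
    """One recursive least squares step for the model y ≈ phi·theta."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (forgetting + phi.T @ P @ phi)          # gain vector
    theta = theta + k.flatten() * (y - (phi.T @ theta).item())
    P = (P - k @ phi.T @ P) / forgetting                  # covariance update
    return theta, P

def lqr_gain(A, B, Q, R):
    """State-feedback gain K = R^-1 B^T P from the continuous-time Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy identification example: estimate the two parameters of y = 1.2*x1 - 0.5*x2.
rng = np.random.default_rng(0)
theta, P = np.zeros(2), 1000.0 * np.eye(2)
for _ in range(200):
    phi = rng.normal(size=2)
    y = 1.2 * phi[0] - 0.5 * phi[1] + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print("identified parameters:", theta.round(3))

# Illustrative second-order plant (values are hypothetical, not a Heffron-Phillips model).
A = np.array([[0.0, 1.0], [-1.5, -0.2]])
B = np.array([[0.0], [1.0]])
print("LQR gain:", lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]])))
```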

Keywords: adaptive control, linear quadratic regulator, power system stabilizer, recursive least square identification

Procedia PDF Downloads 237
2219 Online Monitoring of Airborne Bioaerosols Released from a Composting, Green Waste Site

Authors: John Sodeau, David O'Connor, Shane Daly, Stig Hellebust

Abstract:

This study is the first to employ the online WIBS (Wideband Integrated Bioaerosol Sensor) technique for the monitoring of bioaerosol emissions and non-fluorescing “dust” released from a composting/green waste site. The purpose of the research was to provide a “proof of principle” for using WIBS to monitor such a location continually over days and nights in order to construct comparative “bioaerosol site profiles”. Current impaction/culturing methods take many days to deliver results that the WIBS technique provides in seconds. The real-time data obtained were then used to assess variations of the bioaerosol counts as a function of size, “shape”, site location, working activity levels, time of day, relative humidity, wind speed and wind direction. Three short campaigns were undertaken: one classified as a “light” workload period, another as a “heavy” workload period, and finally a weekend when the site was closed. One main bioaerosol size regime was found to predominate, 0.5 micron to 3 micron, with morphologies ranging from elongated to ellipsoidal/spherical. The real-time number-concentration data were consistent with an Andersen sampling protocol that was employed at the site. The number concentrations of fluorescent particles as a proportion of total particles counted amounted, on average, to ~1% for the “light” workday period, ~7% for the “heavy” workday period and ~18% for the weekend. The bioaerosol release profiles at the weekend were considerably different from those monitored during the working weekdays.
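
A minimal sketch, assuming hypothetical particle-by-particle WIBS records, of how the fluorescent-to-total particle fractions quoted above could be tabulated per monitoring period; the column names and example values are illustrative only.

```python
import pandas as pd

# Hypothetical WIBS particle records: monitoring period, optical size (µm),
# and whether the particle exceeded the fluorescence threshold.
records = pd.DataFrame({
    "period": ["light", "light", "heavy", "heavy", "weekend", "weekend"],
    "size_um": [0.8, 2.1, 1.2, 0.6, 2.8, 1.5],
    "fluorescent": [False, True, True, False, True, True],
})

# Keep the dominant size regime reported above (0.5-3 µm) and compute the
# fluorescent fraction per monitoring period as a percentage.
in_regime = records[(records["size_um"] >= 0.5) & (records["size_um"] <= 3.0)]
fraction = in_regime.groupby("period")["fluorescent"].mean() * 100
print(fraction.round(1))
```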

Keywords: bioaerosols, composting, fluorescence, particle counting in real-time

Procedia PDF Downloads 345
2218 Highly Oriented and Conducting Al- and Sb-Doped SnO2 Layers Grown by the Automatic Spray Pyrolysis Method

Authors: A. Boularouk, F. Chouikh, M. Lamri, H. Moualkia, Y. Bouznit

Abstract:

The principal aim of this study is to considerably reduce the resistivity of SnO2 thin layers. To this end, we doped tin oxide with aluminum and antimony at different atomic percentages (0 and 4%). All the pure and doped SnO2 films were grown by the simple, flexible and cost-effective Automatic Spray Pyrolysis Method (ASPM) on glass substrates at a temperature of 350 °C. The microstructural, optical, morphological and electrical properties of the films have been studied. The XRD results demonstrate that all films are polycrystalline with a tetragonal rutile structure and exhibit a (200) preferential orientation. It has been observed that all the dopants are soluble in the SnO2 matrix without forming secondary phases, and the introduction of dopants does not modify the film growth orientation. The crystallite size of the pure SnO2 film is about 36 nm. The films are highly transparent in the visible region, with an average transmittance reaching up to 80% that decreases slightly with increasing doping concentration (Al and Sb). The optical band gap was evaluated between 3.60 eV and 3.75 eV as a function of doping. The SEM images reveal that all films are nanostructured, dense and continuous, with good adhesion to the substrate. We also note that the surface morphology changes with the type and concentration of the dopant. The minimum resistivity, 0.689×10⁻⁴, is observed for the SnO2 film doped with 4% Al; this film shows the best properties among all the films studied. Finally, we conclude that the physical properties of the pure and doped SnO2 films grown on glass substrates by ASPM depend strongly on the type and concentration of the dopant (Al and Sb), exhibit highly desirable optical and electrical properties, and are promising materials for several applications.
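
The abstract does not state how the 36 nm crystallite size was obtained; a common approach is the Scherrer equation applied to the dominant XRD reflection, sketched below with purely illustrative peak position and width values (Cu Kα radiation assumed).

```python
import numpy as np

def scherrer_crystallite_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)) from one XRD peak.

    two_theta_deg: peak position (2-theta), fwhm_deg: peak width (FWHM, degrees),
    wavelength_nm: X-ray wavelength (Cu K-alpha assumed), k: shape factor.
    """
    theta = np.deg2rad(two_theta_deg / 2.0)
    beta = np.deg2rad(fwhm_deg)          # FWHM converted to radians
    return k * wavelength_nm / (beta * np.cos(theta))

# Illustrative numbers only: the SnO2 (200) reflection lies near 2-theta = 37.9 deg;
# the FWHM below is an assumed value giving a size of a few tens of nm.
print(round(scherrer_crystallite_size(37.9, 0.25), 1), "nm")
```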

Keywords: tin oxide, automatic spray pyrolysis, Al and Sb doping, transmittance, SEM, XRD, UV-VIS

Procedia PDF Downloads 55
2217 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process commonly involves word sequences, also known as N-grams, and it is generally expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative feature extraction method for text categorization. The dataset used for this purpose contains 2,610 annotated documents labeled with the classes Obese/Non-Obese. The dataset was represented in matrix form using the Bag of Words approach, and the score selected to represent the occurrence of tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram-based feature extraction. These results were confirmed on the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second-best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
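
For the N-gram baselines, the TF-IDF representation and SVM evaluation described above can be sketched roughly as follows; the toy corpus and the 2-fold split are used only so the snippet runs, whereas the study used 2,610 documents, a 20% tuning split, and 5-fold cross-validation, and the SW-based feature extraction itself is not reproduced here.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

# Hypothetical corpus and labels standing in for the annotated documents.
documents = ["patient reports weight gain and high bmi",
             "normal weight, active lifestyle",
             "obesity noted in past medical history",
             "no evidence of obesity"]
labels = ["Obese", "Non-Obese", "Obese", "Non-Obese"]

# Unigram+bigram TF-IDF features feeding a linear SVM, evaluated with
# cross-validated accuracy and weighted F-measure.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("svm", SVC(kernel="linear", C=1.0)),
])
scores = cross_validate(pipeline, documents, labels, cv=2,
                        scoring=["accuracy", "f1_weighted"])
print(scores["test_accuracy"], scores["test_f1_weighted"])
```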

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 288