Search results for: honeycomb panel
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 954

144 Effects of a Head Mounted Display Adaptation on Reaching Behaviour: Implications for a Therapeutic Approach in Unilateral Neglect

Authors: Taku Numao, Kazu Amimoto, Tomoko Shimada, Kyohei Ichikawa

Abstract:

Background: Unilateral spatial neglect (USN) is a common syndrome following damage to one hemisphere of the brain (usually the right), in which a patient fails to report or respond to stimulation from the contralesional side. These symptoms are not due to primary sensory or motor deficits; instead, they reflect an inability to process input from that side of the environment. Prism adaptation (PA) is a therapeutic treatment for USN in which a patient’s visual field is artificially shifted laterally, producing a sensory-motor adaptation. However, patients with USN also tend to perceive a left-leaning subjective vertical in the frontal plane. Traditional PA cannot be used to correct a tilt in the subjective vertical, because a prism can only displace, not rotate, the surroundings. This can, however, be accomplished using a head mounted display (HMD) and a web camera. This study therefore investigated whether an HMD system could be used to correct the spatial perception of USN patients in the frontal as well as the horizontal plane. We recruited healthy subjects in order to collect data for the refinement of USN patient therapy. Methods: Eight healthy subjects sat on a chair wearing an HMD (Oculus Rift DK2) with a web camera (Ovrvision) displaying a 10-degree leftward rotation and a 10-degree counter-clockwise rotation in the frontal plane. Subjects attempted to point a finger at one of four randomly assigned targets, a total of 48 times. Before and after the intervention, each subject’s body-centre judgment (BCJ) was tested by asking them to point, sight unseen, at a touch panel directly in front of their xiphisternum, 10 times. Results: The intervention shifted the location pointed to during the BCJ by 35 ± 17 mm (mean ± SD) leftward in the horizontal plane and 46 ± 29 mm downward in the frontal plane. The results in both planes were significant by paired t-test (p < .01).
Conclusions: The results in the horizontal plane are consistent with those observed following PA. Furthermore, the HMD and web camera elicited three-dimensional effects, in both the horizontal and frontal planes. Future work will focus on applying this method to patients with and without USN, and on investigating whether subject posture is also affected by the HMD system.
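The before/after BCJ comparison above relies on a paired t-test. A minimal Python sketch of the statistic, using invented pointing values (the eight subject readings below are hypothetical, not the study's data):

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t-test statistic for matched before/after samples."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD, n-1 denominator
    return mean_d / (sd_d / math.sqrt(n)), n - 1  # (t, degrees of freedom)

# Hypothetical horizontal BCJ pointing positions (mm) for 8 subjects
pre  = [2, -1, 0, 3, 1, -2, 4, 0]
post = [-30, -38, -25, -41, -29, -36, -22, -33]
t, df = paired_t(pre, post)  # strongly negative t indicates a leftward shift
```

The resulting t would be compared against the t-distribution with n-1 degrees of freedom to obtain the two-sided p-value reported in the abstract.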

Keywords: head mounted display, posture, prism adaptation, unilateral spatial neglect

Procedia PDF Downloads 255
143 Exhaled Breath Condensate in Lung Cancer: A Non-Invasive Sample for Easier Mutations Detection by Next Generation Sequencing

Authors: Omar Youssef, Aija Knuuttila, Paivi Piirilä, Virinder Sarhadi, Sakari Knuutila

Abstract:

Exhaled breath condensate (EBC) is a unique sample that allows different genetic changes in lung carcinoma to be studied non-invasively. With the aid of next generation sequencing (NGS) technology, the analysis of genetic mutations has become more efficient, with increased sensitivity for detecting genetic variants. To investigate the possibility of applying this method to cancer diagnostics, mutations in EBC DNA from lung cancer patients and healthy individuals were studied using NGS. The key aim is to assess the feasibility of using this approach to detect clinically important mutations in EBC. EBC was collected from 20 healthy individuals and 9 lung cancer patients (four lung adenocarcinomas, four squamous cell carcinomas, and one case of mesothelioma). Mutations in hotspot regions of 22 genes were studied using the Ampliseq Colon and Lung Cancer panel and sequenced on the Ion PGM. All nine patients showed a total of 19 COSMIC mutations in APC, BRAF, EGFR, ERBB4, FBXW7, FGFR1, KRAS, MAP2K1, NRAS, PIK3CA, PTEN, RET, SMAD4, and TP53. Among controls, 15 individuals showed 35 COSMIC mutations in BRAF, CTNNB1, DDR2, EGFR, ERBB2, FBXW7, FGFR3, KRAS, MET, NOTCH1, NRAS, PIK3CA, PTEN, SMAD4, and TP53. Additionally, 45 novel mutations not previously reported were seen in patients’ samples, and 106 novel mutations in controls’ specimens. The KRAS exon 2 mutation G12D was identified in one control specimen with a mutant allele fraction of 6.8%, while a KRAS G13D mutation seen in one patient sample showed a mutant allele fraction of 17%. These findings illustrate that hotspot mutations are present in DNA from the EBC of both cancer patients and healthy controls. As some of the COSMIC mutations were also seen in controls, no firm conclusion can be drawn about the clinical importance of COSMIC mutations in patients.
Mutations found in controls could represent early neoplastic changes or the normal homeostatic process of apoptosis by which lung tissue eliminates mutant cells. At the same time, mutations detected in patients might offer a non-invasive, easily accessible route to early cancer detection. Follow-up of individuals carrying important cancer mutations is necessary to clarify the significance of these mutations in both healthy individuals and cancer patients.
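The mutant allele fractions quoted above (6.8% and 17%) are simply the ratio of variant-supporting reads to total read depth at a locus. A sketch with invented read counts (the gene labels and the 5% reporting threshold below are illustrative assumptions, not values from the study):

```python
def mutant_allele_fraction(alt_reads, total_reads):
    """Fraction of NGS reads supporting the variant allele at a locus."""
    if total_reads == 0:
        raise ValueError("no coverage at this position")
    return alt_reads / total_reads

def filter_variants(calls, min_maf=0.05):
    """Keep calls whose mutant allele fraction meets a reporting threshold."""
    return [c for c in calls
            if mutant_allele_fraction(c["alt"], c["depth"]) >= min_maf]

# Hypothetical calls: a 17%-MAF variant passes, a 2%-MAF call does not
calls = [{"gene": "KRAS", "alt": 170, "depth": 1000},
         {"gene": "TP53", "alt": 20, "depth": 1000}]
kept = filter_variants(calls)
```

In practice the threshold is tuned to the sequencing error rate, which is why low-fraction calls such as the 6.8% control variant require careful validation.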

Keywords: exhaled breath condensate, lung cancer, mutations, next generation sequencing

Procedia PDF Downloads 147
142 Ionometallurgy for Recycling Silver in Silicon Solar Panels

Authors: Emmanuel Billy

Abstract:

This work is part of the CABRISS project (an H2020 project), which aims at developing innovative, cost-effective methods for extracting materials from the different sources of PV waste: Si-based panels, thin-film panels, and Si water-diluted slurries. Aluminum, silicon, indium, and silver will in particular be extracted from these wastes in order to constitute a materials feedstock that can later be used in a closed-loop process. The extraction of metals from silicon solar cells is often an energy-intensive process, requiring either smelting or leaching at elevated temperature, or the use of large quantities of strong acids or bases that themselves require energy to produce. This energy input equates to a significant cost and an associated CO2 footprint, both of which it would be desirable to reduce; there is thus a need to develop more energy-efficient and environmentally compatible processes. ‘Ionometallurgy’ could offer such a set of environmentally benign processes for metallurgy. This work demonstrates that ionic liquids provide one such method, since they can be used to dissolve and recover silver. The overall process combines leaching, recovery, and the possibility of reusing the solution in a closed loop. This study aims to evaluate and compare different ionic liquids for leaching and recovering silver. An electrochemical analysis is first implemented to define the best system for Ag dissolution; the effects of temperature, concentration, and oxidizing agent are evaluated by this approach. Further, a comparative study of leaching efficiency is conducted between the conventional approach (nitric acid, thiourea) and the ionic liquids (Cu and Al). Specific attention has been paid to the selection of the ionic liquids: electrolytes composed of chelating anions (Cl, Br, I) are used to facilitate lixiviation and to avoid solubility problems with metallic species and with classical additional ligands.
This approach reduces the cost of the process and facilitates the reuse of the leaching medium. To define the most suitable ionic liquids, electrochemical experiments were carried out to evaluate the oxidation potential of the silver contained in crystalline solar cells. Chemical dissolution of metals from crystalline solar cells was then performed for the most promising ionic liquids. After chemical dissolution, electrodeposition was performed to recover the silver in metallic form.
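The electrochemical screening step described above rests on the Nernst relation for the Ag⁺/Ag couple. A sketch computing how the silver reduction potential shifts with Ag⁺ activity, assuming ideal-solution behaviour and the standard potential of 0.80 V vs. SHE (the activities below are illustrative):

```python
import math

R = 8.314      # J/(mol*K), gas constant
F = 96485.0    # C/mol, Faraday constant
E0_AG = 0.80   # V vs. SHE, standard potential of Ag+ + e- -> Ag

def silver_potential(activity_ag, temp_k=298.15):
    """Nernst equation for a one-electron couple: E = E0 + (RT/F) * ln(a(Ag+))."""
    return E0_AG + (R * temp_k / F) * math.log(activity_ag)

# Dilute leachates sit at lower reduction potentials than concentrated ones
e_dilute = silver_potential(1e-4)
e_conc = silver_potential(1.0)
```

Such estimates help rank candidate leaching media: an oxidant in the ionic liquid must sit above the Ag⁺/Ag potential to dissolve silver, while electrodeposition drives the cathode below it to recover the metal.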

Keywords: electrodeposition, ionometallurgy, leaching, recycling, silver

Procedia PDF Downloads 223
141 Development of Chronic Obstructive Pulmonary Disease (COPD) Proforma (E-ICP) to Improve Guideline Adherence in Emergency Department: Modified Delphi Study

Authors: Hancy Issac, Gerben Keijzers, Ian Yang, Clint Moloney, Jackie Lea, Melissa Taylor

Abstract:

Introduction: Non-adherence to chronic obstructive pulmonary disease (COPD) guidelines is associated with a reduction in patients’ health-related quality of life (HRQoL). Improving guideline adherence has the potential to mitigate fragmented care, thereby sustaining pulmonary function, preventing acute exacerbations, reducing economic health burdens, and enhancing HRQoL. The development of an electronic proforma stemming from expert consensus, including digital guideline resources and direct interdisciplinary referrals, is hypothesised to improve guideline adherence and patient outcomes for emergency department (ED) patients with COPD. Aim: The aim of this study was to develop consensus among ED and respiratory staff on the correct composition of a COPD electronic proforma that aids guideline adherence and management in the ED. Methods: This study adopted a mixed-method design to identify the most important indicators of care in the ED. The study involved three phases: (1) a systematic literature review and qualitative interdisciplinary staff interviews to assess barriers and solutions for guideline adherence, (2) a modified Delphi panel to select interventions for the proforma, and (3) a consensus process through three rounds of scoring via a quantitative survey (ED and respiratory consensus) and qualitative thematic analysis of each indicator. Results: The electronic proforma achieved acceptable to good internal consistency through all iterations among national emergency department and respiratory department interdisciplinary experts. Cronbach’s alpha for internal consistency in iteration 1 was α = 0.80 (CI = 0.89%) for the emergency department cohort (EDC) and α = 0.95 (CI = 0.98%) for the respiratory department cohort (RDC). Iteration 2 yielded EDC α = 0.85 (CI = 0.97%) and RDC α = 0.86 (CI = 0.97%). Iteration 3 yielded EDC α = 0.73 (CI = 0.91%) and RDC α = 0.86 (CI = 0.95%).
Conclusion: Electronic proformas have the potential to facilitate direct referrals from the ED leading to reduced hospital admissions, reduced length of hospital stays, holistic care, improved health care and quality of life and improved interdisciplinary guideline adherence.
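The internal-consistency figures reported above are Cronbach's alpha values. A minimal Python sketch of the calculation, using made-up Delphi ratings rather than the study's survey data:

```python
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent ratings per proforma indicator.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent totals
    item_var = sum(statistics.variance(item) for item in item_scores)
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical ratings from 5 panellists on 3 indicators (e.g., a 1-5 scale)
items = [[4, 5, 3, 4, 5],
         [4, 4, 3, 5, 5],
         [5, 5, 3, 4, 4]]
alpha = cronbach_alpha(items)
```

Values around 0.7 are conventionally read as acceptable and 0.8 or above as good, which matches the "acceptable to good" phrasing in the abstract.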

Keywords: COPD, electronic proforma, modified Delphi study, interdisciplinary, guideline adherence, COPD-X plan

Procedia PDF Downloads 27
140 Is the Addition of Computed Tomography with Angiography Superior to a Non-Contrast Neuroimaging Only Strategy for Patients with Suspected Stroke or Transient Ischemic Attack Presenting to the Emergency Department?

Authors: Alisha M. Ebrahim, Bijoy K. Menon, Eddy Lang, Shelagh B. Coutts, Katie Lin

Abstract:

Introduction: Frontline emergency physicians require clear and evidence-based approaches to guide neuroimaging investigations for patients presenting with suspected acute stroke or transient ischemic attack (TIA). Various forms of computed tomography (CT) are currently available for initial investigation, including non-contrast CT (NCCT), CT angiography head and neck (CTA), and CT perfusion (CTP). However, there is uncertainty around optimal imaging choice for cost-effectiveness, particularly for minor or resolved neurological symptoms. In addition to the cost of CTA and CTP testing, there is also a concern for increased incidental findings, which may contribute to the burden of overdiagnosis. Methods: In this cross-sectional observational study, analysis was conducted on 586 anonymized triage and diagnostic imaging (DI) reports for neuroimaging orders completed on patients presenting to adult emergency departments (EDs) with a suspected stroke or TIA from January-December 2019. The primary outcome of interest is the diagnostic yield of NCCT+CTA compared to NCCT alone for patients presenting to urban academic EDs with Canadian Emergency Department Information System (CEDIS) complaints of “symptoms of stroke” (specifically acute stroke and TIA indications). DI reports were coded into 4 pre-specified categories (endorsed by a panel of stroke experts): no abnormalities, clinically significant findings (requiring immediate or follow-up clinical action), incidental findings (not meeting prespecified criteria for clinical significance), and both significant and incidental findings. Standard descriptive statistics were performed. A two-sided p-value <0.05 was considered significant. Results: 75% of patients received NCCT+CTA imaging, 21% received NCCT alone, and 4% received NCCT+CTA+CTP. The diagnostic yield of NCCT+CTA imaging for prespecified clinically significant findings was 24%, compared to only 9% in those who received NCCT alone. 
The proportion of incidental findings was 30% in the NCCT only group and 32% in the NCCT+CTA group. CTP did not significantly increase the yield of significant or incidental findings. Conclusion: In this cohort of patients presenting with suspected stroke or TIA, an NCCT+CTA neuroimaging strategy had a higher diagnostic yield for clinically significant findings than NCCT alone without significantly increasing the number of incidental findings identified.
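The yield comparison above (24% vs. 9%) can be formalised as a two-proportion z-test, consistent with the abstract's two-sided p < 0.05 criterion. A sketch using the normal approximation, with hypothetical counts chosen to roughly match the reported yields (the 106/440 and 11/123 figures below are invented for illustration):

```python
import math

def diagnostic_yield(significant, total):
    """Proportion of scans with clinically significant findings."""
    return significant / total

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# Hypothetical counts consistent with yields of ~24% (NCCT+CTA) vs ~9% (NCCT)
p_value = two_proportion_p(106, 440, 11, 123)
```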

Keywords: stroke, diagnostic yield, neuroimaging, emergency department, CT

Procedia PDF Downloads 75
139 Moderating Effect of Owner's Influence on the Relationship between the Probability of Client Failure and Going Concern Opinion Issuance

Authors: Mohammad Noor Hisham Osman, Ahmed Razman Abdul Latiff, Zaidi Mat Daud, Zulkarnain Muhamad Sori

Abstract:

The failure of Malaysian auditors to issue going concern opinions (GC opinions) to seriously financially distressed companies remains a pressing issue. Policy makers, particularly the Financial Statement Review Committee (FSRC) of the Malaysian Institute of Accountants, raised this issue as early as 2009, and similar problems have been observed in the US, the UK, and many developing countries. It is important for auditors to issue GC opinions properly because such an opinion is a signal of a company’s viability that is much needed by stakeholders. There are at least two unanswered questions, or research gaps, in the literature on the determinants of GC opinions. First, is a client’s probability of failure associated with GC opinion issuance? Second, to what extent do influential owners (management, family, and institutions) moderate the association between client probability of failure and GC opinion issuance? The objective of this study is therefore twofold: (1) to examine the extent of the relationship between the probability of client failure and the issuance of GC opinions, and (2) to examine the extent to which management, family, and institutional ownership moderate this association. This study is quantitative in nature, and the data sources are secondary (mainly companies’ annual reports). A total of four hypotheses were developed and tested on data accumulated from the annual reports of seriously financially distressed Malaysian public listed companies. Data from 2006 to 2012, comprising a sample of 644 observations, were analyzed using panel logistic regression. It is found that certainty (rather than probability) of client failure affects the issuance of GC opinions. In addition, only the level of family ownership positively moderates the relationship between client probability of failure and GC opinion issuance.
This study contributes to the auditing literature, as its findings enhance our understanding of audit quality, particularly of the variables associated with the issuance of GC opinions. The findings shed light on the role of family owners in the GC opinion issuance process, and open the way for researchers to suggest measures to tackle auditors’ reluctance to issue GC opinions to financially distressed clients. The measures suggested can be useful to policy makers in formulating future promulgations.
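Moderation in a logistic model of the kind described above is carried by an interaction term between failure probability and ownership. A sketch of the predicted-probability side of such a specification; all coefficient values below are invented for illustration, not the study's estimates:

```python
import math

def gc_opinion_probability(fail_prob, family_own,
                           b0=-2.0, b1=1.5, b2=-0.5, b3=2.0):
    """Logit with an interaction term: b3 captures how family ownership
    moderates the effect of failure probability on GC opinion issuance."""
    z = b0 + b1 * fail_prob + b2 * family_own + b3 * fail_prob * family_own
    return 1.0 / (1.0 + math.exp(-z))

# With a positive b3, the failure-probability effect is amplified
# when family ownership is high
low = gc_opinion_probability(0.9, family_own=0.0)
high = gc_opinion_probability(0.9, family_own=1.0)
```

In estimation, the sign and significance of the interaction coefficient (b3 here) is what the study's moderation hypotheses test.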

Keywords: audit quality, auditing, auditor characteristics, going concern opinion, Malaysia

Procedia PDF Downloads 230
138 An Evaluation of the Relationship between the Anthropometric Measurements and Blood Lipid Profiles in Adolescents

Authors: Nalan Hakime Nogay

Abstract:

Childhood obesity is a significant health issue that is on the rise all over the world. In recent years, the relationship between childhood obesity and cardiovascular disease risk has been highlighted. The purpose of this study is to evaluate the relationship between selected anthropometric indicators and blood lipid levels in adolescents. The study was conducted on a total of 252 adolescents (200 girls and 52 boys) aged 12 to 18 years. Blood was drawn from each participant in the morning, after a 10-hour overnight fast, to analyze total cholesterol, HDL, LDL, and triglyceride levels. Body weight, height, waist circumference, and subscapular and triceps skinfold thicknesses were measured, and each participant’s waist/height ratio, BMI, and body fat ratio were calculated. The participants’ blood lipid levels were categorized as acceptable, borderline, or high in accordance with the 2011 Expert Panel Integrated Guidelines. The girls’ body fat ratios, total blood cholesterol, and HDL levels were significantly higher than the boys’, whereas their waist circumference values were lower. The triglyceride levels and total cholesterol/HDL, LDL/HDL, and triglyceride/HDL ratios of the group with BMI ≥ 95th percentile (the obese group) were higher than those of the groups considered overweight and normal weight by BMI, while the HDL level of the obese group was lower; these differences were statistically significant. No significant relationship was found, however, between total blood cholesterol or LDL levels and the anthropometric measurements. The BMI, waist circumference, waist/height ratio, body fat ratio, and triglyceride level of the group with high triglycerides (≥ 130 mg/dl) were significantly higher than those of the borderline (90-129 mg/dl) and normal (< 90 mg/dl) groups.
The BMI, waist circumference, and waist/height ratio of the group with low HDL (< 40 mg/dl) were significantly higher than those of the normal (> 45 mg/dl) and borderline (40-45 mg/dl) groups. All anthropometric measurements of the group with a high triglyceride/HDL ratio (≥ 3) were significantly higher than those of the group with a lower ratio (< 3). A high BMI, waist/height ratio, and waist circumference are related to low HDL and to high blood triglycerides and triglyceride/HDL ratio; a high body fat ratio is likewise associated with low HDL and a high triglyceride/HDL ratio. Tackling childhood and adolescent obesity is important for preventing cardiovascular diseases.
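The lipid groupings used above follow fixed cutoffs (triglycerides: < 90, 90-129, ≥ 130 mg/dl; HDL: < 40, 40-45, > 45 mg/dl; triglyceride/HDL ratio ≥ 3). A small sketch encoding those cutoffs as stated in the abstract; the function names are my own:

```python
def triglyceride_category(tg_mg_dl):
    """Categorise triglycerides using the cutoffs cited in the abstract."""
    if tg_mg_dl < 90:
        return "normal"
    if tg_mg_dl < 130:
        return "borderline"
    return "high"

def hdl_category(hdl_mg_dl):
    """Categorise HDL using the cutoffs cited in the abstract."""
    if hdl_mg_dl < 40:
        return "low"
    if hdl_mg_dl <= 45:
        return "borderline"
    return "normal"

def tg_hdl_flag(tg_mg_dl, hdl_mg_dl, cutoff=3.0):
    """Flag an elevated triglyceride/HDL ratio (>= 3 in the abstract)."""
    return tg_mg_dl / hdl_mg_dl >= cutoff

category = triglyceride_category(135)  # "high" under the >= 130 mg/dl cutoff
flag = tg_hdl_flag(135, 38)            # ratio ~3.55, so flagged
```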

Keywords: adolescent, body fat, body mass index, lipid profile

Procedia PDF Downloads 238
137 Towards a Doughnut Economy: The Role of Institutional Failure

Authors: Ghada El-Husseiny, Dina Yousri, Christian Richter

Abstract:

Social services are often characterized by market failures, which justifies government intervention in their provision. It is widely acknowledged that government intervention breeds corruption, since resources are being transferred from one party to another. What is still being extensively studied, however, is the magnitude of corruption’s negative impact on publicly provided services and development outcomes. Corruption has the power to hinder development and cripple the march towards the Sustainable Development Goals (SDGs). It diminishes the efficiency and effectiveness of public health and education spending and directly impacts the outcomes of these sectors. This paper empirically examines the impact of institutional failure on public sector service provision, with the sole purpose of studying the impact of corruption on SDG 3 and SDG 4: good health and wellbeing, and quality education, respectively. The paper explores the effect of corruption on these goals from various perspectives and extends the analysis by examining whether the impact differs once the current corruption state is accounted for, using pooled OLS (ordinary least squares) and fixed-effects panel estimation on 22 corrupt and 22 clean countries between 2000 and 2017. Results show that corruption in both corrupt and clean countries has a more severe impact on the health sector than on the education sector. In almost all specifications, corruption has an insignificant effect on school enrollment rates but a significant effect on infant mortality rates. Results further indicate that, on average, a 1-point increase in the CPI (Corruption Perceptions Index) can increase health expenditures by 0.116% in both corrupt and clean countries. However, the fixed-effects model indicates that the way health and education expenditures are determined in clean and corrupt countries is largely country-specific, with corruption playing a minimal role.
Moreover, the findings show that school enrollment rates and infant mortality rates depend, to a large extent, on public spending. The most astounding result is that corrupt countries, on average, have more effective and efficient healthcare expenditures. While some insights are provided as to why these results prevail, they should be researched further. All in all, corruption impedes development outcomes, and any anti-corruption policies adopted will bring forth immense improvements and speed up the march towards sustainability.
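The fixed-effects estimation mentioned above removes country-specific intercepts by demeaning each variable within its country before fitting the slope (the "within" estimator). A one-regressor sketch on synthetic data, where the variable names and numbers are purely illustrative:

```python
from collections import defaultdict
from statistics import mean

def within_estimator(panel):
    """panel: list of (country, x, y) observations. Demeans x and y within
    each country, then fits a no-intercept OLS slope on the demeaned data."""
    groups = defaultdict(list)
    for country, x, y in panel:
        groups[country].append((x, y))
    num = den = 0.0
    for obs in groups.values():
        mx = mean(x for x, _ in obs)
        my = mean(y for _, y in obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Synthetic data: y = 2*x plus a country fixed effect (10 for A, 0 for B).
# Pooled OLS would be biased by the fixed effect; the within estimator is not.
panel = [("A", 1, 12), ("A", 2, 14), ("A", 3, 16),
         ("B", 1, 2), ("B", 2, 4), ("B", 3, 6)]
beta = within_estimator(panel)  # recovers the true slope of 2
```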

Keywords: corruption, education, health, public spending, sustainable development

Procedia PDF Downloads 141
136 Effects of the Affordable Care Act on Preventive Care Disparities

Authors: Cagdas Agirdas

Abstract:

Background: The Affordable Care Act (ACA) requires non-grandfathered private insurance plans, starting with plan years beginning on or after September 23rd, 2010, to provide certain preventive care services without any cost sharing in the form of deductibles, copayments, or co-insurance. This requirement may affect racial and ethnic disparities in preventive care, as it provides the largest copay reduction in preventive care. Objectives: We ask whether the ACA’s free preventive care benefits are associated with a reduction in racial and ethnic disparities in the utilization of four preventive services: cholesterol screenings, colonoscopies, mammograms, and pap smears. Methods: We use a data set of over 6,000 individuals from the 2009, 2010, and 2013 Medical Expenditure Panel Surveys (MEPS), restricted to individuals old enough to be eligible for each preventive service. Our difference-in-differences logistic regression model classifies privately-insured Hispanics, African Americans, and Asians as the treatment groups and 2013 as the after-policy year. Our control group consists of non-Hispanic whites on Medicaid, as this program already covered preventive care services for free or at low cost before the ACA. Results: After controlling for income, education, marital status, preferred interview language, self-reported health status, employment, having a usual source of care, age, and gender, we find that the ACA is associated with increases in the probability that the median privately-insured Hispanic person gets a colonoscopy (by 3.6%) or a mammogram (by 3.1%), compared to a non-Hispanic white person on Medicaid. Similarly, the median privately-insured African American person’s probability of receiving these two preventive services improved by 2.3% and 2.4%, respectively, compared to a non-Hispanic white person on Medicaid. We do not find any significant improvements for any racial or ethnic group in cholesterol screenings or pap smears.
Furthermore, our results do not indicate any significant changes for Asians compared to non-Hispanic whites in utilizing the four preventive services. These reductions in racial/ethnic disparities are robust to reconfigurations of time periods, previous diagnosis, and residential status. Conclusions: Early effects of the ACA’s provision of free preventive care are significant for Hispanics and African Americans. Further research is needed for the later years as more individuals became aware of these benefits.
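At its core, the difference-in-differences estimate behind these figures is the change in the treatment group's utilization net of the change in the control group's. A sketch with invented utilization rates (the 0.40/0.46 and 0.38/0.404 proportions below are hypothetical, chosen only to reproduce a 3.6-point effect):

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """DiD effect: treatment group's change minus control group's change."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical colonoscopy utilization rates (proportions), pre/post ACA
effect = diff_in_diff(treat_pre=0.40, treat_post=0.46,
                      control_pre=0.38, control_post=0.404)
# 0.060 rise for treatment minus 0.024 rise for control = 0.036 (3.6 points)
```

The study's regression version of this adds covariates and a logistic link, but the identifying comparison is the same.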

Keywords: preventive care, Affordable Care Act, cost sharing, racial disparities

Procedia PDF Downloads 119
135 Risk Mapping of Road Traffic Incidents in Greater Kampala Metropolitan Area for Planning of Emergency Medical Services

Authors: Joseph Kimuli Balikuddembe

Abstract:

Road traffic incidents (RTIs) continue to be a serious public health and development burden around the globe. Compared to high-income countries (HICs), low- and middle-income countries (LMICs) bear the heaviest brunt of RTIs. Like other LMICs, Uganda, a country located in Eastern Africa, has been experiencing a worryingly high burden of RTIs and their associated impacts. Over the years, the highest number of all registered RTIs in Uganda has occurred in the Greater Kampala Metropolitan Area (GKMA). This places a tremendous demand on the few existing emergency medical services (EMS) to respond adequately to those affected. In this regard, the overall objective of the study was to risk-map RTIs in the GKMA so as to help in the better planning of EMS for the victims of RTIs. Other objectives included: (i) identifying the factors affecting the exposure, vulnerability, and EMS capacity for the victims of RTIs; (ii) identifying RTI-prone areas and estimating their associated risk factors; (iii) identifying the weaknesses and capacities that affect EMS systems for RTIs; and (iv) determining the strategies and priority actions that can help to improve the EMS response for RTI victims in the GKMA. To achieve these objectives, a mixed methodological approach was used in four phases over approximately 15 months. It employed a systematic review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines; a Delphi panel technique; retrospective data analysis; and a cross-sectional method. With Uganda progressing forward as envisaged in its 'Vision 2040', the GKMA, which is the country’s political and socioeconomic epicenter, is experiencing significant changes in terms of population growth, urbanization, infrastructure development, rapid motorization, and other factors.
Unless appropriate actions are taken, these changes are likely to worsen the already alarming rate of RTIs in Uganda, and in turn also to put pressure on the few existing EMS and facilities to render care for those affected. Therefore, road safety vis-à-vis injury prevention measures, which are needed to reduce the burden of RTIs, should be multifaceted in nature so that they closely correlate with the ongoing dynamics that contribute to RTIs, particularly in the GKMA and Uganda as a whole.

Keywords: emergency medical services, Kampala, risk mapping, road traffic incidents

Procedia PDF Downloads 96
134 Comparing Two Unmanned Aerial Systems in Determining Elevation at the Field Scale

Authors: Brock Buckingham, Zhe Lin, Wenxuan Guo

Abstract:

Accurate elevation data is critical in deriving topographic attributes for the precision management of crop inputs, especially water and nutrients. Traditional ground-based elevation data acquisition is time consuming, labor intensive, and often inconvenient at the field scale. Various unmanned aerial systems (UAS) provide the capability of generating digital elevation data from high-resolution images. The objective of this study was to compare the performance of two UAS with different global positioning system (GPS) receivers in determining elevation at the field scale. A DJI Phantom 4 Pro and a DJI Phantom 4 RTK (real-time kinematic) were used to acquire images at three heights above ground: 40 m, 80 m, and 120 m. Forty ground control panels were placed in the field, and their geographic coordinates were determined using an RTK GPS survey unit. For each image acquisition at a particular height, two elevation datasets were generated using the Pix4D stitching software: a calibrated dataset using the surveyed coordinates of the ground control panels, and an uncalibrated dataset without them. Elevation values for each panel derived from the elevation model of each dataset were compared to the corresponding surveyed coordinates of the ground control panels. The coefficient of determination (R²) and the root mean squared error (RMSE) were used as evaluation metrics to assess the performance of each image acquisition scenario. RMSE values for the uncalibrated elevation datasets were 26.613 m, 31.141 m, and 25.135 m for images acquired at 120 m, 80 m, and 40 m, respectively, using the Phantom 4 Pro. With calibration for the same UAS, the accuracies improved significantly, with RMSE values of 0.161 m, 0.165 m, and 0.030 m, respectively. The best results showed an RMSE of 0.032 m and an R² of 0.998 for the calibrated dataset generated using the Phantom 4 RTK at a 40 m flight height.
The accuracy of elevation determination decreased as flight height increased for both UAS, with RMSE values greater than 0.160 m for the datasets acquired at 80 m and 120 m. The results of this study show that calibration with ground control panels improves the accuracy of elevation determination, especially for a UAS with a regular GPS receiver. The Phantom 4 Pro provides accurate elevation data for the 40 m dataset when supported by substantial surveyed ground control panels. The Phantom 4 RTK provides accurate elevation at 40 m without calibration for practical precision agriculture applications. This study provides valuable information on selecting appropriate UAS and flight heights for determining elevation in precision agriculture applications.
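The RMSE and R² metrics used above are straightforward to compute once derived elevations are paired with surveyed panel coordinates. A sketch with invented panel elevations (the four values below are hypothetical):

```python
import math
from statistics import mean

def rmse(pred, obs):
    """Root mean squared error between derived and surveyed elevations."""
    return math.sqrt(mean((p - o) ** 2 for p, o in zip(pred, obs)))

def r_squared(pred, obs):
    """Coefficient of determination of predictions against observations."""
    obs_mean = mean(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - obs_mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# Hypothetical panel elevations (m): surveyed vs. UAS-derived
surveyed = [100.00, 100.50, 101.20, 99.80]
derived  = [100.03, 100.47, 101.24, 99.78]
err = rmse(derived, surveyed)
fit = r_squared(derived, surveyed)
```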

Keywords: unmanned aerial system, elevation, precision agriculture, real-time kinematic (RTK)

Procedia PDF Downloads 138
133 Forecasting Regional Data Using Spatial VARs

Authors: Taisiia Gorshkova

Abstract:

Since the 1980s, spatial correlation models have been used more often to model regional indicators. An increasingly popular method for studying regional indicators is modeling taking into account spatial relationships between objects that are part of the same economic zone. In 2000s the new class of model – spatial vector autoregressions was developed. The main difference between standard and spatial vector autoregressions is that in the spatial VAR (SpVAR), the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, VAR is a special case of SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of spatial VAR in the absence of time lags. Two specifications of SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and regional CPI are used as endogenous variables. The lags of GRP, CPI and the unemployment rate were used as explanatory variables. For comparison purposes, the standard VAR without spatial correlation was used as “naïve” model. In the first specification of SpVAR the unemployment rate and the values of depending variables, GRP and CPI, in neighboring regions at the same moment of time t were included in equations for GRP and CPI respectively. To account for the values of indicators in neighboring regions, the adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1, and the rest - 0. In the second specification the values of depending variables in neighboring regions at the moment of time t were replaced by these values in the previous time moment t-1. 
According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their own previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is affected by neither the inflation lag nor the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, GRP lags in neighboring regions also become significant at the 5% significance level. The RMSEs were calculated for both the spatial and “naïve” VARs. The minimum RMSEs are obtained via the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
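The border-based adjacency weighting described above can be sketched as follows: regions with a common sea or land border get weight 1, all others 0, and after row standardization each region's spatial lag is simply the average of its neighbors' values. The region names and GRP figures are hypothetical illustrations:

```python
# Adjacency list: region -> regions sharing a sea or land border (hypothetical)
neighbours = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
regions = sorted(neighbours)

# Binary contiguity matrix W (1 for bordering regions, 0 otherwise)
W = [[1.0 if r2 in neighbours[r1] else 0.0 for r2 in regions] for r1 in regions]
# Row-standardize so each row sums to 1
W = [[w / sum(row) for w in row] for row in W]

grp = {"A": 100.0, "B": 80.0, "C": 90.0}  # hypothetical GRP values at time t-1
x = [grp[r] for r in regions]

# Spatial lag Wx: the neighbour-average value entering the SpVAR equations
spatial_lag = [sum(wij * xj for wij, xj in zip(row, x)) for row in W]
print(spatial_lag)
```

Region A's spatial lag is the mean of its two neighbours' GRP; B and C each see only A.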

Keywords: forecasting, regional data, spatial econometrics, vector autoregression

Procedia PDF Downloads 109
132 Using Nature-Based Solutions to Decarbonize Buildings in Canadian Cities

Authors: Zahra Jandaghian, Mehdi Ghobadi, Michal Bartko, Alex Hayes, Marianne Armstrong, Alexandra Thompson, Michael Lacasse

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) report stated the urgent need to cut greenhouse gas emissions to avoid the adverse impacts of climatic changes. The United Nations has forecasted that nearly 70 percent of people will live in urban areas by 2050 resulting in a doubling of the global building stock. Given that buildings are currently recognised as emitting 40 percent of global carbon emissions, there is thus an urgent incentive to decarbonize existing buildings and to build net-zero carbon buildings. To attain net zero carbon emissions in communities in the future requires action in two directions: I) reduction of emissions; and II) removal of on-going emissions from the atmosphere once de-carbonization measures have been implemented. Nature-based solutions (NBS) have a significant role to play in achieving net zero carbon communities, spanning both emission reductions and removal of on-going emissions. NBS for the decarbonisation of buildings can be achieved by using green roofs and green walls – increasing vertical and horizontal vegetation on the building envelopes – and using nature-based materials that either emit less heat to the atmosphere thus decreasing photochemical reaction rates, or store substantial amount of carbon during the whole building service life within their structure. The NBS approach can also mitigate urban flooding and overheating, improve urban climate and air quality, and provide better living conditions for the urban population. For existing buildings, de-carbonization mostly requires retrofitting existing envelopes efficiently to use NBS techniques whereas for future construction, de-carbonization involves designing new buildings with low carbon materials as well as having the integrity and system capacity to effectively employ NBS. This paper presents the opportunities and challenges in respect to the de-carbonization of buildings using NBS for both building retrofits and new construction. 
This review documents the effectiveness of NBS in de-carbonizing Canadian buildings, identifies the missing links to implementing these techniques in cold climatic conditions, and determines a road map and immediate approaches to mitigate the adverse impacts of climate change, such as the urban heat island effect. Recommendations are drafted for possible inclusion in the Canadian building and energy codes.

Keywords: decarbonization, nature-based solutions, GHG emissions, greenery enhancement, buildings

Procedia PDF Downloads 67
131 Development of a Risk Disclosure Index and Examination of Its Determinants: An Empirical Study in Indian Context

Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav

Abstract:

Worldwide, regulators, practitioners, and researchers view risk disclosure as one of the most important steps that will promote corporate accountability and transparency. Recognizing this growing significance of risk disclosures, the paper first develops a risk disclosure index. Covering 69 risk items/themes, this index is developed by employing thematic content analysis and encompasses three attributes of disclosure, namely, nature (qualitative or quantitative), time horizon (backward-looking or forward-looking), and tone (no impact, positive impact, or negative impact). As the focus of the study is on substantive rather than symbolic disclosure, the content analysis has been carried out manually. The study is based on the non-financial companies of the Nifty500 index and covers a ten-year period from April 1, 2005 to March 31, 2015, thus yielding 3,872 annual reports for analysis. The analysis reveals that (on average) only about 14% of risk items (i.e., about 10 out of the 69 risk items studied) are being disclosed by Indian companies. Risk items that are frequently disclosed are mostly macroeconomic in nature, and their disclosures tend to be qualitative, forward-looking, and conveying both positive and negative aspects of the concerned risk. The second objective of the paper is to gauge the factors that affect the level of disclosure in annual reports. Given the panel nature of the data and possible endogeneity amongst variables, Diff-GMM regression has been applied. The results indicate that the age and size of firms have a significant positive impact on disclosure quality, whereas the growth rate does not have a significant impact. Further, the post-recession period (2009-2015) has witnessed a significant improvement in the quality of disclosures. In terms of corporate governance variables, board size, board independence, CEO duality, the presence of a CRO, and the constitution of a risk management committee appear to be significant factors in determining the quality of risk disclosures.
It is noteworthy that the study contributes to literature by putting forth a variant to existing disclosure indices that not only captures the quantity but also the quality of disclosures (in terms of semantic attributes). Also, the study is a first of its kind attempt in a prominent emerging market i.e. India. Therefore, this study is expected to facilitate regulators in mandating and regulating risk disclosures and companies in their endeavor to reduce information asymmetry.

Keywords: risk disclosure, voluntary disclosures, corporate governance, Diff-GMM

Procedia PDF Downloads 136
130 Selection of Suitable Reference Genes for Assessing Endurance Related Traits in a Native Pony Breed of Zanskar at High Altitude

Authors: Prince Vivek, Vijay K. Bharti, Manishi Mukesh, Ankita Sharma, Om Prakash Chaurasia, Bhuvnesh Kumar

Abstract:

High endurance performance in equids requires adaptive changes involving physio-biochemical and molecular responses in an attempt to regain homeostasis. We hypothesized that the identification of suitable reference genes might be considered for assessing endurance-related traits in ponies at high altitude and may help in identifying ponies with potent endurance traits. A total of 12 Zanskar pony mares were divided into three groups, group A (without load), group B (60 kg backpack load), and group C (80 kg backpack load), and were subjected to a load-carrying protocol on a steep 4 km uphill climb over a gravel, uneven rocky track at altitudes from 3292 m to 3500 m (endpoint). Blood was collected before and immediately after the load carry into sodium heparin anticoagulant, and peripheral blood mononuclear cells were separated for total RNA isolation and subsequent cDNA synthesis. Real-time PCR reactions were carried out to evaluate the mRNA expression profiles of a panel of putative internal control genes (ICGs) from different functional classes, namely glyceraldehyde 3-phosphate dehydrogenase (GAPDH), β₂ microglobulin (β₂M), β-actin (ACTB), ribosomal protein S18 (RS18), hypoxanthine-guanine phosphoribosyltransferase (HPRT), ubiquitin B (UBB), ribosomal protein L32 (RPL32), transferrin receptor protein (TFRC), and succinate dehydrogenase complex subunit A (SDHA), for normalizing the real-time quantitative polymerase chain reaction (qPCR) data of the native ponies. Three different algorithms, geNorm, NormFinder, and BestKeeper, were used to evaluate the stability of the reference genes. The results showed that GAPDH was the most stable gene, and TFRC and β₂M were observed to be the best combination of two genes. In conclusion, the geometric mean of GAPDH, TFRC, and β₂M might be used for accurate normalization of transcriptional data when assessing endurance-related traits in Zanskar ponies during load carrying.
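The geometric-mean normalization suggested in the conclusion can be sketched as follows; the relative-quantity values below are hypothetical illustrations, not data from the study:

```python
import math

def geometric_mean(values):
    """Geometric mean of positive relative quantities, computed in log space."""
    vals = list(values)
    return math.exp(sum(math.log(v) for v in vals) / len(vals))

# Hypothetical relative quantities of the three most stable reference genes
ref_expression = {"GAPDH": 1.20, "TFRC": 0.85, "B2M": 1.05}

# Normalization factor = geometric mean of the reference genes (geNorm-style)
norm_factor = geometric_mean(ref_expression.values())
target_raw = 2.40   # hypothetical raw relative quantity of a gene of interest
target_norm = target_raw / norm_factor
print(round(norm_factor, 3), round(target_norm, 3))
```

Dividing each target gene's raw quantity by this factor corrects for sample-to-sample variation in RNA input and reverse-transcription efficiency.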

Keywords: endurance exercise, ubiquitin B (UBB), β₂ microglobulin (β₂M), high altitude, Zanskar ponies, reference gene

Procedia PDF Downloads 111
129 An Exploratory Study on the Impact of Climate Change on Design Rainfalls in the State of Qatar

Authors: Abdullah Al Mamoon, Niels E. Joergensen, Ataur Rahman, Hassan Qasem

Abstract:

The Intergovernmental Panel on Climate Change (IPCC), in its Fourth Assessment Report (AR4), predicts a more extreme climate towards the end of the century, which is likely to impact the design of engineering infrastructure projects with a long design life. A recent study in 2013 developed new design rainfalls for Qatar, which provide an improved design basis for drainage infrastructure in the State of Qatar under the current climate. The current design standards in Qatar do not consider the increased rainfall intensity caused by climate change. The focus of this paper is to update the recently developed design rainfalls for Qatar under changing climatic conditions based on the IPCC's AR4, allowing a later revision of the proposed design standards relevant for projects with a longer design life. The future climate has been investigated based on the climate models released in the IPCC's AR4 and the A2 storyline of the Special Report on Emissions Scenarios (SRES) using a stationary approach. Annual maximum series (AMS) of predicted 24-hour rainfall data for both the wet (NCAR-CCSM) and dry (CSIRO-MK3.5) scenarios at the Qatari grid points in the climate models have been extracted for three periods: the current climate (2010-2039), the medium-term climate (2040-2069), and the end-of-century climate (2070-2099). A homogeneous region of the Qatari grid points has been formed, and an L-moments-based regional frequency approach is adopted to derive design rainfalls. The results indicate no significant changes in the design rainfall in the medium term (2040-2069), but significant changes are expected towards the end of the century (2070-2099). New design rainfalls have been developed taking into account climate change for the 2070-2099 scenario and by averaging results from the two scenarios. The IPCC's AR4 predicts that the rainfall intensity for a 5-year return period rain with a duration of 1 to 2 hours will increase by 11% in 2070-2099 compared to the current climate.
Similarly, the rainfall intensity for more extreme rainfall, with a return period of 100 years and a duration of 1 to 2 hours, will increase by 71% in 2070-2099 compared to the current climate. Infrastructure with a design life exceeding 60 years should include safety factors taking the predicted effects of climate change into due consideration.
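An L-moments-based regional frequency approach starts from the sample L-moments of each site's annual maximum series. A minimal sketch of the first two sample L-moments via probability-weighted moments (the rainfall values are hypothetical, not Qatari data):

```python
def sample_l_moments(data):
    """First two sample L-moments (l1, l2) via probability-weighted moments."""
    x = sorted(data)              # ascending order statistics
    n = len(x)
    b0 = sum(x) / n                                    # PWM beta_0
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))  # PWM beta_1
    l1 = b0                       # L-location (the mean)
    l2 = 2 * b1 - b0              # L-scale
    return l1, l2

ams = [34, 51, 28, 62, 45, 39, 70, 25]   # hypothetical 24-hour AMS (mm)
l1, l2 = sample_l_moments(ams)
t2 = l2 / l1                              # L-CV, used in regional homogeneity screening
print(round(l1, 2), round(l2, 3), round(t2, 3))
```

In the regional approach, such L-moment ratios from each grid point are pooled to fit a regional growth curve, from which design rainfalls for given return periods are derived.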

Keywords: climate change, design rainfalls, IDF, Qatar

Procedia PDF Downloads 368
128 The People's Tribunal: Empowerment by Survivors for Survivors of Child Abuse

Authors: Alan Collins

Abstract:

This study explains how The People’s Tribunal empowered survivors of child abuse. It examines how people’s tribunals can be an effective means of empowerment; the challenges of empowerment (expectation v. reality); the findings and how they reflect other inquiry findings; and the importance of listening to and learning from survivors. The UKCSAPT, “The People’s Tribunal”, was established by survivors of child sex abuse and members of civil society to investigate historic cases of institutional sex abuse. The independent inquiry, led by a panel of four judges, listened to evidence spanning four decades from survivors and experts. A common theme throughout these accounts was that a series of institutional failures prevented abuse from being reported, and that there are clear links between children being rendered vulnerable by these failures and predatory abuse on an organised scale. It made a series of recommendations, including the establishment of a permanent and open forum for victims to share experiences and give evidence, better links between mental health services and police investigations, and training for police and judiciary professionals on the effects of undisclosed sexual abuse. The main findings of the UKCSAPT report were:
- There are clear links between children rendered vulnerable by institutional failures and predatory abuse on an organised scale, even if these links often remain obscure.
- UK governmental institutions have failed to provide survivors with meaningful opportunities for either healing or justice.
- The vital mental health needs of survivors are not being met, and this undermines both their psychological recovery and access to justice.
- Police and other authorities often lack the training to understand the complex reasons for the inability of survivors to immediately disclose a history of abuse.
- Without far-reaching changes in institutional culture and practices, the sexual abuse of children will continue to be a significant scourge in the UK.
The report also outlined a series of recommendations for improving reporting, mental health provision, and access to justice for victims, including:
- A permanent, government-funded popular tribunal should be established to enable survivors to come forward and tell their stories.
- Survivors giving evidence should be assigned an advocate to assist their access to justice.
- Mental health services should be linked to police investigations to help victims disclose abuse.
- Victims who fear reprisals should be provided with a channel through which to give evidence anonymously.

Keywords: empowerment, survivors, sexual, abuse

Procedia PDF Downloads 227
127 Knowledge Management Barriers: A Statistical Study of Hardware Development Engineering Teams within Restricted Environments

Authors: Nicholas S. Norbert Jr., John E. Bischoff, Christopher J. Willy

Abstract:

Knowledge Management (KM) is globally recognized as a crucial element in securing competitive advantage through building and maintaining organizational memory, codifying and protecting intellectual capital and business intelligence, and providing mechanisms for collaboration and innovation. KM frameworks and approaches have been developed and defined, identifying critical success factors for conducting KM within numerous industries ranging from scientific to business and for organization scales ranging from small groups to large enterprises. However, engineering and technical teams operating within restricted environments are subject to unique barriers and KM challenges which cannot be directly treated using the approaches and tools prescribed for other industries. This research identifies barriers to conducting KM within Hardware Development Engineering (HDE) teams and statistically compares the significance of barriers to upholding the four KM pillars of organization, technology, leadership, and learning for HDE teams. HDE teams suffer from restrictions on knowledge sharing (KS) due to classification of information (national security risks), customer proprietary restrictions (non-disclosure agreement execution for designs), the types of knowledge involved, the complexity of the knowledge to be shared, and knowledge-seeker expertise. As KM evolved, leveraging information technology (IT) and web-based tools and approaches from Web 1.0 to Enterprise 2.0, KM may also seek to leverage emergent tools and analytics, including expert locators and hybrid recommender systems, to enable KS across the barriers of the technical teams. The research will test hypotheses statistically evaluating whether KM barriers for HDE teams affect the general set of expected benefits of a KM system identified through previous research. If correlations are identified, then generalizations of success factors and approaches may also be garnered for HDE teams.
Expert elicitation will be conducted using a questionnaire hosted on the internet and delivered to a panel of experts including engineering managers, principal and lead engineers, senior systems engineers, and knowledge management experts. The feedback to the questionnaire will be processed using analysis of variance (ANOVA) to identify and rank statistically significant barriers of HDE teams within the four KM pillars. Subsequently, KM approaches will be recommended for upholding the KM pillars within restricted environments of HDE teams.
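The ANOVA step described above reduces, for each barrier, to a one-way F test across respondent groups. A minimal sketch with hypothetical Likert-scale responses (the groupings and ratings are illustrative, not the study's data):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic across lists of questionnaire responses."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total responses
    grand = sum(sum(g) for g in groups) / n  # grand mean
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical Likert ratings of one KM barrier from three respondent roles
# (e.g. engineering managers, lead engineers, KM experts)
ratings = [[4, 5, 4, 5], [2, 3, 2, 3], [3, 3, 4, 4]]
f_stat = one_way_anova_f(ratings)
print(round(f_stat, 2))
```

A large F relative to the F(k-1, n-k) critical value would mark the barrier's rating as differing significantly across roles, supporting its ranking within a KM pillar.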

Keywords: engineering management, knowledge barriers, knowledge management, knowledge sharing

Procedia PDF Downloads 245
126 A Study of Non-Coplanar Imaging Technique in INER Prototype Tomosynthesis System

Authors: Chia-Yu Lin, Yu-Hsiang Shen, Cing-Ciao Ke, Chia-Hao Chang, Fan-Pin Tseng, Yu-Ching Ni, Sheng-Pin Tseng

Abstract:

Tomosynthesis is an imaging system that generates a 3D image by scanning over a limited angular range. It can provide more depth information than a traditional 2D X-ray single projection, and the radiation dose in tomosynthesis is less than in computed tomography (CT). Because of the limited angular range of scanning, many image properties depend on the scanning direction. Therefore, a non-coplanar imaging technique was developed to improve image quality in traditional tomosynthesis. The purpose of this study was to establish the non-coplanar imaging technique for the tomosynthesis system and to evaluate this technique by the reconstructed image. The INER prototype tomosynthesis system contains an X-ray tube, a flat panel detector, and a motion machine. This system can move the X-ray tube in multiple directions during the acquisition. In this study, we investigated three different imaging techniques: 2D X-ray single projection, traditional tomosynthesis, and non-coplanar tomosynthesis. An anthropomorphic chest phantom was used to evaluate the image quality. It contained three lesions of different sizes (3 mm, 5 mm, and 8 mm diameter). The traditional tomosynthesis acquired 61 projections over a 30-degree angular range in one scanning direction. The non-coplanar tomosynthesis acquired 62 projections over a 30-degree angular range in two scanning directions. A 3D image was reconstructed by an iterative image reconstruction algorithm (maximum-likelihood expectation maximization, ML-EM). Our qualitative method was to evaluate artifacts in the tomosynthesis reconstructed image. The quantitative method was to calculate a peak-to-valley ratio (PVR), the intensity ratio of the lesion to the background. We used PVRs to evaluate the contrast of lesions. The qualitative results showed that in the reconstructed image of the non-coplanar scanning, anatomic structures of the chest and the lesions could be identified clearly, and no significant scanning-direction-dependent artifacts were discovered.
In the 2D X-ray single projection, anatomic structures overlapped and lesions could not be discovered. In the traditional tomosynthesis image, anatomic structures and lesions could be identified clearly, but there were many scanning-direction-dependent artifacts. The quantitative results showed no significant differences in PVR between non-coplanar and traditional tomosynthesis. The PVRs of the non-coplanar technique were slightly higher than those of the traditional technique for the 5 mm and 8 mm lesions. In non-coplanar tomosynthesis, scanning-direction-dependent artifacts could be reduced without decreasing the PVRs of lesions. The reconstructed image was more isotropically uniform in non-coplanar than in traditional tomosynthesis. In the future, scan strategy and scan time will be the challenges of the non-coplanar imaging technique.
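The PVR metric used above is the ratio of the mean reconstructed intensity in a lesion region of interest to that of its local background. A minimal sketch with hypothetical voxel values (not data from the study):

```python
def roi_mean(pixels):
    """Mean reconstructed intensity over a region of interest."""
    return sum(pixels) / len(pixels)

def peak_to_valley_ratio(lesion_pixels, background_pixels):
    """PVR: lesion intensity relative to background; higher means better contrast."""
    return roi_mean(lesion_pixels) / roi_mean(background_pixels)

lesion_roi = [180, 175, 190, 185]       # hypothetical lesion voxel intensities
background_roi = [100, 95, 105, 100]    # hypothetical background voxel intensities
pvr = peak_to_valley_ratio(lesion_roi, background_roi)
print(pvr)
```

A PVR of 1.0 would mean the lesion is indistinguishable from background; comparing PVRs across reconstructions quantifies the contrast preserved by each scanning technique.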

Keywords: image reconstruction, non-coplanar imaging technique, tomosynthesis, X-ray imaging

Procedia PDF Downloads 338
125 Empirical Studies of Indigenous Career Choice in Taiwan

Authors: Zichun Chu

Abstract:

The issue of tribal poverty has always attracted attention. Due to social and economic difficulties, indigenous people's personal development and tribal development have been greatly restricted. Past studies have pointed out that poverty may come from a lack of education. The United Nations Sustainable Development Goals (SDGs) also state that if we are to solve the poverty problem, providing education widely is an important key. According to the theory of intellectual capital adaptation, “being capable” and “willing to do” are the keys to development. Therefore, we can say that the "ability" and "will" of tribal residents regarding their tribal development are the core concerns of tribal development. This research was designed to investigate the career choice development model of indigenous tribe people by investigating the current status of the human capital, social capital, and cultural capital of tribal residents. This study collected 327 questionnaires (70% of total households) from the Truku tribe to answer the research question: Did education help them in job-choosing decisions, from the aspects of human capital, social capital, and cultural capital, given their tribal status? This project highlighted the ‘single tribal research approach’ to gain an in-depth understanding of the human capital formed under the unique culture of the tribe (the Truku tribe). The results show that the education level of most research participants was high school, and very few high school graduates chose to further their education to the college level; due to the lack of education of their parents, their social capital offered limited support in job choice, and most of them work in labor and service industries; however, their cultural capital was comparatively rich for work, and the sharing culture of Taiwanese indigenous people made their work status stable. The results suggest that we should place more emphasis on the development of vocational education based on the tribe’s location and resources.
The self-advocacy of indigenous people should be developed so that they gain more power in making career decisions. This research project is part of a pilot project called “INDIGENOUS PEOPLES, POVERTY, AND DEVELOPMENT,” sponsored by the National Science and Technology Council of Taiwan. If this paper is accepted for presentation at the 2023 ICIP, it would be ideal if a panel were formed for me and the other co-researchers (Chuanju Cheng, Chih-Yuan Weng, and YiXuan Chen), so that the audience can get a full picture of this pilot project.

Keywords: career choices, career model, indigenous career development, indigenous education, tribe

Procedia PDF Downloads 55
124 Evaluation of Triage Performance: Nurse Practice and Problem Classifications

Authors: Atefeh Abdollahi, Maryam Bahreini, Babak Choobi Anzali, Fatemeh Rasooli

Abstract:

Introduction: Triage is a central part of the organization of care in the emergency department (ED). It is used to describe the sorting of patients by treatment priority in the ED. The accurate triage of injured patients has reduced fatalities and improved resource usage. Besides, the nurses’ knowledge and skill are important factors in triage decision-making. The ability to define an appropriate triage level and the need for intervention is crucial to guiding safe and effective emergency care. Methods: This is a prospective cross-sectional study designed for emergency nurses working in four public university hospitals. Five triage workshops were conducted every three months for emergency nurses, based on a standard Emergency Severity Index (ESI) version 4 triage slide set approved by the Iranian Ministry of Health. The items most influential on triage performance were discussed through brainstorming in the workshops and were then peer-reviewed by an expert panel of five emergency physicians and two head registered nurses. The factors that might distract nurses’ attention from proper decisions included patients’ past medical diseases, the inherent pitfalls of triage, and system failure. After permission had been obtained, emergency nurses participated in the study and were given the structured questionnaire. Data were analysed with SPSS 21.0. Results: 92 emergency nurses enrolled in the study. 30% of nurses reported a past history of chronic disease as the most influential confounding factor in ascertaining the triage level; other important factors were a history of prior admission, a past history of myocardial infarction, and heart failure, at 20, 17, and 11%, respectively. Regarding difficulties in triage practice, 54.3% reported that discussion with patients and family members was difficult, and 8.7% declared that it is hard to stay in a single triage room the whole day.
Among the participants, 45.7 and 26.1% evaluated the triage workshops as moderately and highly effective, respectively. 56.5% reported overcrowding as the most important system-based difficulty. Nurses mainly had difficulty differentiating between triage levels 2 and 3 of the ESI system. No significant correlation was found between nurses’ triage work record and either uncertainty in determining the triage level or the reported difficulties. Conclusion: The work record of nurses seemed to have little effect on the triage problems and issues. To correct the deficits, training workshops should be carried out, followed by continuous refresher training and supportive supervision.

Keywords: assessment, education, nurse, triage

Procedia PDF Downloads 202
123 Processes and Applications of Casting Simulation and Its Software

Authors: Surinder Pal, Ajay Gupta, Johny Khajuria

Abstract:

Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity, and hard spots; and optimize the casting design to achieve the desired quality with high yield. The flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on the process of modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is used widely to design equipment so that the final product will be as close to design specs as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Each casting simulation package has its own strengths; MAGMA, for example, is best suited for crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis.
IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient process solution and an advanced tool, the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.

Keywords: casting simulation software, simulation techniques, casting simulation, processes

Procedia PDF Downloads 449
122 Prioritizing Temporary Shelter Areas for Disaster Affected People Using Hybrid Decision Support Model

Authors: Ashish Trivedi, Amol Singh

Abstract:

In recent years, the magnitude and frequency of disasters have increased at an alarming rate. Every year, more than 400 natural disasters affect the global population. A large-scale disaster leads to the destruction of or damage to houses, thereby rendering a notable number of residents homeless. Since the humanitarian response and recovery process takes considerable time, temporary establishments are arranged in order to provide shelter to the affected population. These shelter areas are vital for effective humanitarian relief; therefore, they must be strategically planned. Choosing the locations of temporary shelter areas for accommodating homeless people is critical to the quality of humanitarian assistance provided after a large-scale emergency. There has been extensive research on the facility location problem, both in theory and in application. In order to deliver sufficient relief aid within a relatively short timeframe, humanitarian relief organisations pre-position warehouses at strategic locations. However, such approaches have received limited attention from the perspective of providing shelters to disaster-affected people. In the present research work, this aspect of humanitarian logistics is considered. The present work proposes a hybrid decision support model to determine the relative preference of potential shelter locations by assessing them against key subjective criteria. Initially, the factors that are kept in mind while locating potential areas for establishing temporary shelters are identified by reviewing the extant literature and through consultation with a panel of disaster management experts. In order to determine the relative importance of individual criteria while taking into account the subjectivity of judgements, a hybrid approach of fuzzy sets and the Analytic Hierarchy Process (AHP) was adopted.
Further, Technique for order preference by similarity to ideal solution (TOPSIS) was applied on an illustrative data set to evaluate potential locations for establishing temporary shelter areas for homeless people in a disaster scenario. The contribution of this work is to propose a range of possible shelter locations for a humanitarian relief organization, using a robust multi criteria decision support framework.
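The TOPSIS ranking step can be sketched as follows, here with all criteria treated as benefit criteria; the candidate sites, criteria, and weights below are hypothetical illustrations (in the paper, the weights would come from the fuzzy AHP stage):

```python
import math

def topsis(matrix, weights):
    """TOPSIS closeness scores; all criteria treated as benefit criteria."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt))) for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)] for i in range(n_alt)]
    # Ideal (best) and anti-ideal (worst) points per criterion
    ideal = [max(v[i][j] for i in range(n_alt)) for j in range(n_crit)]
    anti = [min(v[i][j] for i in range(n_alt)) for j in range(n_crit)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness to the ideal solution
    return scores

# Hypothetical shelter sites scored on three benefit criteria
# (accessibility, capacity, safety), with fuzzy-AHP-style weights
sites = [[7, 9, 6], [8, 6, 8], [6, 8, 7]]
weights = [0.5, 0.3, 0.2]
scores = topsis(sites, weights)
best = max(range(len(sites)), key=lambda i: scores[i])
print(best, [round(s, 3) for s in scores])
```

The site with the highest closeness score is the preferred shelter location; the full ranking supports offering a relief organization a range of fallback options.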

Keywords: AHP, disaster preparedness, fuzzy set theory, humanitarian logistics, TOPSIS, temporary shelters

Procedia PDF Downloads 171
121 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time, researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curricula. In the mid-1950s, this type of research was cut short due to the work of psychiatrist Frederic Wertham, whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham’s allegations, the comics medium has had a hard time finding its way back to education. Now, over fifty years later, the definition of literacy is in mid-transition, as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics have found a place in the field of literacy education research as the focus shifts from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium that is being examined. The different conventions that the medium utilizes are also discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw on real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored in parallel with the core principles of media literacy education.
Each principle is explained and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud’s text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics has positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphic novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 264
120 Tiebout and Crime: How Crime Affects the Income Tax Capacity

Authors: Nik Smits, Stijn Goeminne

Abstract:

Despite the extensive literature on the relation between crime and migration, not much is known about how crime affects the tax capacity of local communities. This paper empirically investigates whether the Flemish local income tax base yield is sensitive to changes in the local crime level. The underlying assumptions are threefold. In a Tiebout world, rational voters hold the local government accountable for the safety of its citizens and move out when the local level of security deviates too far from what they want it to be (first assumption). If migration is due to crime, then the wealthier citizens are expected to move first (second assumption): looking for a place elsewhere implies transaction costs, which wealthier citizens are better able to pay. As a consequence, the average income per capita, and so the income distribution, will be affected, which in turn will influence the local income tax base yield (third assumption). The decreasing average income per capita, if not compensated by increasing earnings of the citizens who stay or of new citizens entering the locality, must result in a decreasing local income tax base yield. In the absence of compensation from higher-level governments, decreasing local tax revenues could prove disastrous for a crime-ridden municipality. When communities do not succeed in forcing back the number of offences, this can be the onset of a cumulative process of urban deterioration. A spatial panel data model containing several proxies for the local level of crime in 306 Flemish municipalities, covering the period 2000-2014, is used to test the relation between crime and the local income tax base yield. In addition to this direct relation, the underlying assumptions are investigated as well. Preliminary results show a modest but positive relation between local violent crime rates and the efflux of citizens, persistent up to a two-year lag. 
This positive effect is dampened by increasing crime rates in neighboring municipalities. Changes in violent crime and, to a lesser extent, thefts and extortions reduce the influx of citizens with a one-year lag. Again, this effect is diminished by external effects from neighboring municipalities, meaning that increasing crime rates there (especially violent crime) have a positive effect on the local influx of citizens. Crime also has a depressing effect on the average income per capita within a municipality, whereas increasing crime rates in neighboring municipalities increase it. Notwithstanding the previous results, crime does not seem to significantly affect the local tax base yield. The results suggest that the depressing effect of crime on the income base is compensated by a limited but wealthier influx of new citizens.
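The abstract does not spell out the estimator, but the set-up it describes can be sketched in miniature. In the toy example below, the municipality count, ring-shaped adjacency matrix, and coefficient values are all invented for illustration: a simulated tax-base yield is regressed on lagged own crime and a spatial lag of neighboring municipalities' crime after a within (fixed-effects) transformation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 15              # municipalities, years (illustrative sizes only)

# Row-normalised contiguity matrix for a ring of neighbours, a stand-in
# for the real Flemish adjacency structure.
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = 0.5
    W[i, (i + 1) % N] = 0.5

crime = rng.normal(size=(T, N))     # standardised local crime rate
alpha = rng.normal(size=N)          # municipality fixed effects
beta_own, beta_nbr = -0.6, 0.3      # hypothetical true effects

# Tax-base yield depends on last year's own crime and neighbours' crime.
y = np.empty((T - 1, N))
for t in range(1, T):
    y[t - 1] = (alpha + beta_own * crime[t - 1]
                + beta_nbr * (W @ crime[t - 1])
                + 0.1 * rng.normal(size=N))

# Within (fixed-effects) transformation: subtract each municipality's mean.
X1 = crime[:-1]                     # own crime, lagged one year
X2 = (W @ crime[:-1].T).T           # spatial lag of neighbours' crime

def demean(a):
    return a - a.mean(axis=0)

Y = demean(y).ravel()
X = np.column_stack([demean(X1).ravel(), demean(X2).ravel()])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(coef)   # roughly [-0.6, 0.3], recovering the assumed effects
```

The within transformation absorbs the municipality fixed effects, so ordinary least squares on the demeaned data recovers both the own-crime and the neighbor-spillover coefficients.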

Keywords: crime, local taxes, migration, Tiebout mobility

Procedia PDF Downloads 288
119 Engineering Topology of Ecological Model for Orientation Impact of Sustainable Urban Environments: The Spatial-Economic Modeling

Authors: Moustafa Osman Mohammed

Abstract:

The modeling of a spatial-economic database is crucial in relating economic network structure to social development. Sustainability within the spatial-economic model gives attention to green businesses that comply with Earth’s systems. The natural exchange patterns of ecosystems have consistent and periodic cycles to preserve energy and materials flow in systems ecology. When network topology influences formal and informal communication to function in systems ecology, ecosystems are postulated to balance the basic level of spatial sustainability outcomes (i.e., project compatibility success). These instrumentalities impact various aspects of the second level of spatial sustainability outcomes (i.e., participant social security satisfaction). The sustainability outcomes are modeled as a composite structure based on a network analysis model to calculate the prosperity of panel databases for efficiency value from 2005 to 2025. The database models a spatial structure to represent state-of-the-art value-orientation impact and the corresponding complexity of sustainability issues (e.g., build a consistent database necessary to approach spatial structure; construct the spatial-economic-ecological model; develop a set of sustainability indicators associated with the model; allow quantification of social, economic and environmental impact; use the value-orientation as a set of important sustainability policy measures), and to demonstrate spatial structure reliability. The structure of the spatial-ecological model is established for management schemes from the perspective of pollutants from multiple sources through input–output criteria. These criteria evaluate the spillover effect in order to conduct Monte Carlo simulations and sensitivity analysis in a unique spatial structure. The balance within “equilibrium patterns,” such as collective biosphere features, has a composite index of many distributed feedback flows. 
These have a dynamic structure related to physical and chemical properties that gradually extends into incremental patterns. While these spatial structures argue for ecological modeling of resource savings, static loads are not decisive from an artistic/architectural perspective. The model attempts to unify analytic and analogical spatial structure for the development of urban environments in a relational database setting, using optimization software to integrate the spatial structure, where the process is based on the engineering topology of systems ecology.
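The input–output spillover criteria with Monte Carlo sensitivity analysis mentioned in the abstract can be illustrated with a toy Leontief model. The sector count, technical coefficients, final demand, and ±10 % perturbation range below are invented for the sketch and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 3-sector input-output table (technical coefficients A)
# and final demand d.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
d = np.array([100.0, 80.0, 120.0])
I = np.eye(3)

x_base = np.linalg.solve(I - A, d)   # Leontief output levels x = (I - A)^-1 d

# Monte Carlo sensitivity: perturb the coefficients by +/-10 % and observe
# the resulting spread (spillover) in total output.
totals = []
for _ in range(2000):
    A_mc = A * rng.uniform(0.9, 1.1, size=A.shape)
    totals.append(np.linalg.solve(I - A_mc, d).sum())
totals = np.array(totals)

print(x_base.sum(), totals.std())    # baseline total output and its MC spread
```

The standard deviation of the simulated totals serves as a simple sensitivity index: the larger it is, the more strongly spillovers through the input–output structure amplify uncertainty in the coefficients.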

Keywords: ecological modeling, spatial structure, orientation impact, composite index, industrial ecology

Procedia PDF Downloads 22
118 DC Bus Voltage Ripple Control of Photovoltaic Inverter in Low Voltage Ride-Through Operation

Authors: Afshin Kadri

Abstract:

Using Renewable Energy Resources (RES) as a type of DG unit is becoming widespread in distribution systems. The connection of these generation units to existing AC distribution systems changes the structure and some of the operational aspects of these grids. Most RES require power-electronics-based interfaces for connection to AC systems, and these interfaces consist of at least one DC/AC conversion unit. Nowadays, grid-connected inverters must be able to support the grid under voltage-sag conditions. Two curves define this requirement: the magnitude of the reactive component of the current as a function of the voltage drop, and the minimum time the inverter must remain connected to the grid. This feature is named low voltage ride-through (LVRT). Implementing this feature causes problems in the operation of the inverter: among them are an increased amplitude of the high-frequency components of the injected current and, in inverters connected to photovoltaic panels, operation away from the maximum power point. The important phenomenon in these conditions is ripple in the DC bus voltage, which affects the operation of the inverter both directly and indirectly. Losses in the DC bus capacitors, which are electrolytic, raise their temperature and shorten their lifespan. In addition, if the inverter is connected directly to the photovoltaic panels and performs maximum power point tracking, this ripple causes oscillations around the operating point and reduces the generated energy. The traditional method of eliminating this ripple is a bidirectional converter on the DC bus, which works as a buck-boost converter and transfers the ripple to its own DC bus. Although it eliminates the ripple on the main DC bus, this method does not solve the reliability problem because it still uses an electrolytic capacitor on its own DC bus. 
In this work, a control method is proposed that uses the bidirectional converter as the fourth leg of the inverter and eliminates the DC bus ripple by injecting unbalanced currents into the grid. Moreover, the proposed method works on the basis of constant power control: in addition to supporting the amplitude of the grid voltage, it stabilizes its frequency by injecting active power. The proposed method can also eliminate the DC bus ripple during deep voltage drops, which would otherwise raise the amplitude of the reference current above the nominal current of the inverter. In these conditions, the amplitude of the injected current for the faulty phases is kept at the nominal value, and its phase, together with the phases and amplitudes of the other phases, is adjusted so that the DC bus ripple is eliminated, although the generated power decreases.
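The constant-power idea behind ripple elimination can be illustrated with the textbook instantaneous constant-power current shaping, i_k = P·v_k / (v_a² + v_b² + v_c²). This is a generic strategy used here purely as a sketch, with an invented sag profile and set-point, not necessarily the authors' exact controller.

```python
import numpy as np

f, P = 50.0, 10e3                 # grid frequency [Hz], power set-point [W]
w = 2 * np.pi * f
t = np.linspace(0, 0.04, 2000)    # two fundamental cycles

# Unbalanced sag: phase a drops to 60 % while b and c stay nominal.
Vm = np.array([0.6, 1.0, 1.0]) * 325.0
phases = np.array([0.0, -2 * np.pi / 3, 2 * np.pi / 3])
v = Vm[:, None] * np.cos(w * t[None, :] + phases[:, None])   # (3, samples)

# Constant-power current references: i_k = P * v_k / sum(v_j^2).
norm2 = (v ** 2).sum(axis=0)      # never zero for this sag profile
i = P * v / norm2

p = (v * i).sum(axis=0)           # instantaneous three-phase power
print(p.max() - p.min())          # ~0: no double-frequency ripple remains
```

Because p(t) = Σ v_k·i_k = P by construction, no double-frequency power component is left to drive DC bus voltage ripple; the cost, as the abstract notes, is unbalanced (and in the sagged phase, larger) phase currents, which is why the proposed method additionally clamps the faulty-phase current at its nominal value.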

Keywords: renewable energy resources, voltage drop value, DC bus ripples, bidirectional converter

Procedia PDF Downloads 45
117 Dividend Policy in Family Controlling Firms from a Governance Perspective: Empirical Evidence in Thailand

Authors: Tanapond S.

Abstract:

Typically, controlling firms are family firms, which are widespread and important for economic growth, particularly in the Asia-Pacific region. The unique characteristics of the controlling families tend to play an important role in determining corporate policies such as dividend policy. Given the complexity of the family business phenomenon, the empirical evidence has been unclear on how the families behind business groups influence dividend policy in Asian markets, where cross-shareholdings and pyramidal structures are prevalent. Dividend policy, as an important determinant of firm value, can also be used to examine the effect of the controlling families behind business groups on strategic decision-making from a governance perspective and in terms of agency problems. The purpose of this paper is to investigate the impact of ownership structure and concentration, which are influential internal corporate governance mechanisms in family firms, on dividend decision-making. Using panel data and constructing a unique dataset of family ownership and control through hand-collected information on the nonfinancial companies listed on the Stock Exchange of Thailand (SET) between 2000 and 2015, the study finds that family firms with large stakes distribute higher dividends than family firms with small stakes. Family ownership can mitigate the agency problems and the expropriation of minority investors in family firms. To provide insight into the distinction between ownership rights and control rights, this study examines specific firm characteristics, including the degree of concentration of controlling shareholders, by classifying family ownership into different categories. The results show that controlling families with a large deviation between voting rights and cash flow rights have more power and pay lower dividends. This situation worsens when the second blockholders are also families. 
To the best of the researcher's knowledge, this study is the first to examine the association between family firms’ characteristics and dividend policy from a corporate governance perspective in Thailand, an environment with weak investor protection and high ownership concentration. This research also underscores the importance of family control, especially in a context in which family business groups and pyramidal structures are prevalent. As a result, academics and policy makers can develop markets and corporate policies that mitigate agency problems.
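The deviation between voting rights and cash-flow rights in a pyramid is conventionally quantified with the product / weakest-link rule common in this literature (cash-flow rights as the product of stakes along the control chain, control rights as the smallest stake). The helper below is an illustrative sketch under that convention, not necessarily the paper's exact measure.

```python
def pyramid_rights(chain):
    """chain: ownership stakes along a control chain,
    e.g. family -> holding company -> listed firm.

    Returns (cash_flow_rights, control_rights, wedge) under the common
    product / weakest-link convention.
    """
    cash_flow = 1.0
    for stake in chain:
        cash_flow *= stake        # dilution compounds down the chain
    control = min(chain)          # weakest link along the chain
    return cash_flow, control, control - cash_flow

# A family owns 60 % of a holding that owns 30 % of the listed firm:
cf, ctrl, wedge = pyramid_rights([0.6, 0.3])
print(cf, ctrl, wedge)   # cash flow ~0.18, control 0.3, wedge ~0.12
```

A larger wedge (control rights minus cash-flow rights) proxies for the controlling family's power to extract private benefits, which the abstract links to lower dividend payments.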

Keywords: agency theory, dividend policy, family control, Thailand

Procedia PDF Downloads 250
116 A Development of Science Instructional Model Based on STEM Education Approach to Enhance Scientific Mind and Problem Solving Skills for Primary Students

Authors: Prasita Sooksamran, Wareerat Kaewurai

Abstract:

STEM is an integrated teaching approach promoted by the Ministry of Education in Thailand. STEM Education is an integrated approach to teaching Science, Technology, Engineering, and Mathematics. It has been questioned by Thai teachers on the grounds of how to integrate STEM into the classroom. Therefore, the main objective of this study is to develop a science instructional model based on the STEM approach to enhance scientific mind and problem-solving skills for primary students. This study is participatory action research and followed two steps: 1) develop the model, and 2) seek the advice of experts regarding the teaching model. Developing the instructional model began with the collection and synthesis of information from relevant documents, related research and other sources in order to create a prototype instructional model, followed by the examination of the validity and relevance of the instructional model by a panel of nine experts. The findings were as follows: 1. The developed instructional model comprised principles, an objective, content, operational procedures and learning evaluation. There were five principles: 1) learning based on the natural curiosity of primary school level children, leading to knowledge inquiry, understanding and knowledge construction; 2) learning based on the interrelation between people and environment; 3) learning based on concrete learning experiences, exploration and the seeking of knowledge; 4) learning based on the self-construction of knowledge, creativity and innovation; and 5) relating findings to real life and the solving of real-life problems. The objective of this instructional model is to enhance scientific mind and problem-solving skills. Children are evaluated according to their achievements. Lesson content is based on science as a core subject, integrated with technology and mathematics at the grade 6 level according to the Basic Education Core Curriculum 2008 guidelines. 
The operational procedures consisted of 6 steps: 1) Curiosity, 2) Collection of data, 3) Collaborative planning, 4) Creativity and innovation, 5) Criticism, and 6) Communication and service. The learning evaluation is an authentic assessment based on continuous evaluation of all the material taught. 2. The experts agreed that the science instructional model based on the STEM Education approach had an excellent level of validity and relevance (mean = 4.67, S.D. = 0.50).

Keywords: instructional model, STEM education, scientific mind, problem solving

Procedia PDF Downloads 166
115 Interpreter Scholarship Program That Improves Language Services in New South Wales: A Participatory Action Research Approach

Authors: Carly Copolov, Rema Nazha, Sahba C. Delshad, George Bisas

Abstract:

In New South Wales (NSW), Australia, we speak more than 275 languages and dialects. Interpreters play an indispensable role in our multicultural society by ensuring the people of NSW all enjoy the same opportunities. The NSW Government offers scholarships to enable people who speak in-demand and high-priority languages to become eligible to practice as interpreters. The NSW Interpreter Scholarship Program was launched in January 2019, targeting priority languages from new and emerging, as well as existing, language communities. The program offers fully-funded scholarships to study at Technical and Further Education (TAFE), receive National Accreditation Authority for Translators and Interpreters (NAATI) certification, and be mentored and gain employment with the interpreter panel of Multicultural NSW. A Participatory Action Research approach was employed to challenge the current system for people to become practicing interpreters in NSW. There were over 800 metro Sydney applications and close to 200 regional applications. Three courses were run through TAFE NSW (2 in metro Sydney and 1 in regional NSW), and thirty-nine students graduated from the program in 2019. The first metro Sydney location had 18 graduates complete the course in Assyrian, Burmese, Chaldean, Kurdish-Kurmanji, Nepali, and Tibetan. The second metro Sydney location had 9 graduates complete the course in Tongan, Kirundi, Mongolian and Italian. The regional location had 12 graduates who completed the course in languages of new and emerging communities such as Kurdish-Kurmanji, Burmese, Zomi Chin, Hakha Chin, and Tigrinya. The findings showed that students were very positive about the program: the large majority said they were satisfied with the course content, felt prepared for the NAATI test at the conclusion of the course, and would definitely recommend the program to their friends. Also, 18 students from the 2019 cohort signed up to receive further mentoring by experienced interpreters. 
In 2020, it is anticipated that 3 courses will be run through TAFE NSW (2 in regional NSW and 1 in metro Sydney) to reflect the needs of new and emerging language communities settling in regional areas. In conclusion, it has been demonstrated that the NSW Interpreter Scholarship Program improves the supply, quality, and use of language services in NSW, Australia, so that people who speak in-demand and high-priority languages are ensured better access to crucial government services.

Keywords: interpreting, emerging communities, scholarship program, Sydney

Procedia PDF Downloads 112