Search results for: coarse rubber aggregate
124 The Comparison Study of Human Microbiome in Chronic Rhinosinusitis between Adults and Children
Authors: Il Ho Park, Joong Seob Lee, Sung Hun Kang, Jae-Min Shin, Il Seok Park, Seok Min Hong, Seok Jin Hong
Abstract:
Introduction: The human microbiota is the aggregate of microorganisms, and the bacterial microbiome of the human digestive tract contributes to both health and disease. In health, bacteria are key components in the development of mucosal barrier function and in innate and adaptive immune responses, and they also work to suppress the establishment of pathogens. In the human upper airway, the sinonasal microbiota might play an important role in chronic rhinosinusitis (CRS). The purpose of this study is to investigate the human upper airway microbiome in CRS patients and to compare the sinonasal microbiome of adults with that of children. Materials and methods: A total of 19 samples from 19 patients (Group 1: 9 children with CRS, aged 5 to 14 years, versus Group 2: 10 adults with CRS, aged 21 to 59 years) were examined. Swabs were collected from the middle meatus and/or anterior ethmoid region under general anesthesia during endoscopic sinus surgery or tonsillectomy. After DNA extraction from the swab samples, we analysed the bacterial microbiome consortia using a 16S rRNA gene sequencing approach (the Illumina MiSeq platform). Results: In this study, the relative abundances of six bacterial phyla and of numerous genera and species were found in substantial amounts in the individual sinus swab samples, including Corynebacterium, Haemophilus, Moraxella, and Streptococcus species. Anaerobes such as Fusobacterium and Bacteroides were abundantly present in the children's group, while Bacteroides and Propionibacterium were present in the adults' group. At the genus level, Haemophilus was the most common CRS microbiome member in children and Corynebacterium the most common in adults. Conclusions: Our results show the diversity of the human upper airway microbiome, and the findings suggest that CRS is a polymicrobial infection. Corynebacterium and Haemophilus may live as commensals on the mucosal surfaces of the sinuses in the upper respiratory tract.
Further study is needed to analyse microbiome-human interactions in the upper airway and CRS. Keywords: microbiome, upper airway, chronic rhinosinusitis, adults and children
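As a rough illustration of the relative-abundance comparison described above, the sketch below converts per-genus 16S read counts for one hypothetical swab sample into fractions and picks the dominant genus. The counts are made up for illustration; they are not the study's data.

```python
def relative_abundance(counts):
    """Convert raw read counts per genus into relative abundances (fractions summing to 1)."""
    total = sum(counts.values())
    return {genus: n / total for genus, n in counts.items()}

# Hypothetical sample: genus -> 16S read count (illustrative numbers only)
sample = {"Haemophilus": 4200, "Corynebacterium": 2100,
          "Moraxella": 1400, "Streptococcus": 700}

abundances = relative_abundance(sample)
dominant = max(abundances, key=abundances.get)  # most abundant genus in this sample
```

In the study's terms, repeating this per swab and comparing the dominant genera across the two groups is what distinguishes the Haemophilus-led children's profile from the Corynebacterium-led adult profile.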
Procedia PDF Downloads 126
123 When the Rubber Hits the Road: The Enactment of Well-Intentioned Language Policy in Digital vs. In Situ Spaces on Washington, DC Public Transportation
Authors: Austin Vander Wel, Katherin Vargas Henao
Abstract:
Washington, DC, is a city in which Spanish, along with several other minority languages, is prevalent not only among tourists but also those living within city limits. In response to this linguistic diversity and DC’s adoption of the Language Access Act in 2004, the Washington Metropolitan Area Transit Authority (WMATA) committed to addressing the need for equal linguistic representation and established a five-step plan to provide the best multilingual information possible for public transportation users. The current study, however, strongly suggests that this de jure policy does not align with the reality of Spanish’s representation on DC public transportation–although perhaps doing so in an unexpected way. In order to investigate Spanish’s de facto representation and how it contrasts with de jure policy, this study implements a linguistic landscapes methodology that takes critical language-policy as its theoretical framework (Tollefson, 2005). Specifically concerning de facto representation, it focuses on the discrepancies between digital spaces and the actual physical spaces through which users travel. These digital vs. in situ conditions are further analyzed by separately addressing aural and visual modalities. In digital spaces, data was collected from WMATA’s website (visual) and their bilingual hotline (aural). For in situ spaces, both bus and metro areas of DC public transportation were explored, with signs comprising the visual modality and recordings, driver announcements, and interactions with metro kiosk workers comprising the aural modality. While digital spaces were considered to successfully fulfill WMATA’s commitment to representing Spanish as outlined in the de jure policy, physical spaces show a large discrepancy between what is said and what is done, particularly regarding the bus system, in addition to the aural modality overall. 
These discrepancies in in situ spaces place Spanish speakers at a clear disadvantage, demanding additional resources and knowledge on the part of residents with limited or no English proficiency in order to have equal access to this public good. Based on our critical language-policy analysis, while Spanish is represented as a right in the de jure policy, its implementation in situ clearly portrays Spanish as a problem, since those seeking bilingual information cannot expect it to be present when and where they need it most (Ruíz, 1984; Tollefson, 2005). This study concludes with practical, data-based steps to improve the current situation facing DC's public transportation context and serves as a model for responding to inadequate enactment of de jure policy in other language policy settings. Keywords: urban landscape, language access, critical language-policy, Spanish, public transportation
Procedia PDF Downloads 72
122 Measurement of in-situ Horizontal Root Tensile Strength of Herbaceous Vegetation for Improved Evaluation of Slope Stability in the Alps
Authors: Michael T. Lobmann, Camilla Wellstein, Stefan Zerbe
Abstract:
Vegetation plays an important role for the stabilization of slopes against erosion processes, such as shallow erosion and landslides. Plant roots reinforce the soil, increase soil cohesion and often cross possible shear planes. Hence, plant roots reduce the risk of slope failure. Generally, shrub and tree roots penetrate deeper into the soil vertically, while roots of forbs and grasses are concentrated horizontally in the topsoil and organic layer. Therefore, shrubs and trees have a higher potential for stabilization of slopes with deep soil layers than forbs and grasses. Consequently, research mainly focused on the vertical root effects of shrubs and trees. Nevertheless, a better understanding of the stabilizing effects of grasses and forbs is needed for better evaluation of the stability of natural and artificial slopes with herbaceous vegetation. Despite the importance of vertical root effects, field observations indicate that horizontal root effects also play an important role for slope stabilization. Not only forbs and grasses, but also some shrubs and trees form tight horizontal networks of fine and coarse roots and rhizomes in the topsoil. These root networks increase soil cohesion and horizontal tensile strength. Available methods for physical measurements, such as shear-box tests, pullout tests and singular root tensile strength measurement can only provide a detailed picture of vertical effects of roots on slope stabilization. However, the assessment of horizontal root effects is largely limited to computer modeling. Here, a method for measurement of in-situ cumulative horizontal root tensile strength is presented. A traction machine was developed that allows fixation of rectangular grass sods (max. 30x60cm) on the short ends with a 30x30cm measurement zone in the middle. On two alpine grass slopes in South Tyrol (northern Italy), 30x60cm grass sods were cut out (max. depth 20cm). 
Grass sods were pulled apart, measuring the horizontal tensile strength over the 30cm width over time. The horizontal tensile strength of the sods was measured and compared for different soil depths, hydrological conditions, and root physiological properties. The results improve our understanding of horizontal root effects on slope stabilization and can be used for improved evaluation of grass slope stability. Keywords: grassland, horizontal root effect, landslide, mountain, pasture, shallow erosion
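A minimal sketch of how a traction-machine force record might be reduced to a cumulative horizontal tensile strength per unit width, assuming the 30x30cm measurement zone described above (0.30 m loaded width). The force readings and the peak-force definition are assumptions for illustration, not the authors' procedure.

```python
def peak_tensile_strength(forces_n, width_m=0.30):
    """Peak horizontal tensile strength per unit width (N/m), from a force series in newtons."""
    return max(forces_n) / width_m

# Made-up force readings (N) recorded while pulling the sod apart
forces = [0.0, 150.0, 420.0, 610.0, 580.0, 300.0]

strength = peak_tensile_strength(forces)  # N per metre of sod width
```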
Procedia PDF Downloads 166
121 Detection of Phoneme [S] Mispronunciation for Sigmatism Diagnosis in Adults
Authors: Michal Krecichwost, Zauzanna Miodonska, Pawel Badura
Abstract:
The diagnosis of sigmatism is mostly based on the observation of articulatory organs. It is, however, not always possible to precisely observe the vocal apparatus, in particular in the oral cavity of the patient. Speech processing can help to objectify the therapy and simplify the verification of its progress. In the described study, a methodology for classification of the incorrectly pronounced phoneme [s] is proposed. The recordings come from adults. They were registered with a speech recorder at a sampling rate of 44.1 kHz and a resolution of 16 bits. A database of pathological and normative speech was collected for the study, including reference assessments provided by speech therapy experts. Ten adult subjects were asked to simulate a certain type of sigmatism under speech therapy expert supervision. In the recordings, the analyzed phone [s] was surrounded by vowels, viz: ASA, ESE, ISI, OSO, USU, YSY. Thirteen MFCCs (mel-frequency cepstral coefficients) and an RMS (root mean square) value are calculated within each frame belonging to the analyzed phoneme. Additionally, 3 fricative formants, along with their corresponding amplitudes, are determined for the entire segment. In order to aggregate the information within the segment, the average value of each MFCC coefficient is calculated. All features of other types are aggregated by means of their 75th percentile. The proposed method of feature aggregation reduces the size of the feature vector used in classification. A binary SVM (support vector machine) classifier is employed at the phoneme recognition stage. The first group consists of pathological phones, the other of normative ones. The proposed feature vector yields classification sensitivity and specificity measures above the 90% level in the case of individual logatomes. The employment of fricative formant-based information improves the sole-MFCC classification results by an average of 5 percentage points.
The study shows that the employment of specific parameters for the selected phones improves the efficiency of pathology detection compared to traditional methods of speech signal parameterization. Keywords: computer-aided pronunciation evaluation, sibilants, sigmatism diagnosis, speech processing
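The per-segment feature aggregation described above (mean of each MFCC, 75th percentile of the remaining features) can be sketched as follows. The array shapes and synthetic values are assumptions for illustration; this is not the authors' code.

```python
import numpy as np

def aggregate_segment(mfcc_frames, other_frames):
    """Collapse per-frame features for one phoneme segment into one fixed-length vector.

    mfcc_frames:  (n_frames, 13) array of MFCCs -> averaged across frames.
    other_frames: (n_frames, k) array of other features (e.g. RMS, formant
                  amplitudes) -> summarized by their 75th percentile.
    """
    mfcc_part = mfcc_frames.mean(axis=0)                  # 13 values
    other_part = np.percentile(other_frames, 75, axis=0)  # k values
    return np.concatenate([mfcc_part, other_part])

# Synthetic stand-in for one [s] segment: 40 frames of 13 MFCCs + 4 other features
rng = np.random.default_rng(0)
mfcc = rng.normal(size=(40, 13))
other = rng.normal(size=(40, 4))

features = aggregate_segment(mfcc, other)  # 17-dimensional vector per segment
```

A vector like this, computed per segment, is what a binary SVM (e.g. `sklearn.svm.SVC`) would then classify as pathological vs. normative.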
Procedia PDF Downloads 283
120 Characterization of Aerosol Particles in Ilorin, Nigeria: Ground-Based Measurement Approach
Authors: Razaq A. Olaitan, Ayansina Ayanlade
Abstract:
Understanding aerosol properties is the main goal of global research aimed at lowering the uncertainty that trends and magnitudes of aerosol particles contribute to climate change. To identify aerosol particle types, optical properties, and the relationship between aerosol properties and particle concentration between 2019 and 2021, a study conducted in Ilorin, Nigeria, examined data from a ground-based sun/sky scanning radiometer of the Aerosol Robotic Network (AERONET). The AERONET version 2 algorithm was utilized to retrieve monthly data on aerosol optical depth and Angstrom exponent. The version 3 algorithm, an almucantar level 2 inversion, was employed to retrieve daily data on single scattering albedo and aerosol size distribution. Excel 2016 was used to compute the data's monthly, seasonal, and annual mean averages. The distribution of different aerosol types was analyzed using scatterplots, and the optical properties of the aerosols were investigated using pertinent mathematical theorems. Correlation statistics were employed to understand the relationships between particle concentration and properties. Based on the premise that aerosol characteristics must remain consistent in both magnitude and trend across time and space, the study's findings indicate that the aerosol types identified between 2019 and 2021 are as follows: 29.22% urban industrial (UI), 37.08% desert (D), 10.67% biomass burning (BB), and 23.03% urban mix (UM). Convective wind systems, which frequently carry particles over long distances in the atmosphere, were responsible for the peak columnar aerosol loadings, observed in August of each study year. The study has shown that while coarse-mode particles dominate, fine particles are increasing in seasonal and annual trends. These trends are linked to biomass burning and human activities in the city.
The study found that the majority of particles are highly absorbing black carbon, with the fine mode having a volume median radius of 0.08 to 0.12 micrometres. The investigation also revealed a positive coefficient of correlation (r = 0.57) between changes in aerosol particle concentration and changes in aerosol properties. Human activity is rapidly increasing in Ilorin, causing changes in aerosol properties and indicating potential health risks from climate change and human influence on geological and environmental systems. Keywords: aerosol loading, aerosol types, health risks, optical properties
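The fine/coarse-mode distinction above is typically drawn from the Angstrom exponent, computed from aerosol optical depth (AOD) at two wavelengths. A small sketch with illustrative values (not the study's AERONET retrievals); the 0.6 cutoff is an assumed rule of thumb, not the paper's classification scheme:

```python
import math

def angstrom_exponent(aod1, aod2, wl1_nm, wl2_nm):
    """Angstrom exponent alpha from AOD measured at two wavelengths (nm):
    alpha = -ln(AOD1/AOD2) / ln(wl1/wl2)."""
    return -math.log(aod1 / aod2) / math.log(wl1_nm / wl2_nm)

# Illustrative values: AOD(440 nm) = 0.80, AOD(870 nm) = 0.40
alpha = angstrom_exponent(0.80, 0.40, 440.0, 870.0)

# Low alpha (near 0) suggests coarse, dust-like particles;
# high alpha (toward 2) suggests fine-mode (e.g. smoke) particles.
coarse_dominated = alpha < 0.6
```

Scatterplots of AOD against this exponent are what allow the desert / biomass-burning / urban classes to be separated.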
Procedia PDF Downloads 62
119 Frailty Patterns in the US and Implications for Long-Term Care
Authors: Joelle Fong
Abstract:
Older persons are at greatest risk of becoming frail. As survival to age 80 and beyond continues to increase, the health and frailty of older Americans have garnered much recent attention among policy makers and healthcare administrators. This paper examines patterns in old-age frailty within a multistate actuarial model that characterizes the stochastic process of biological ageing. Using aggregate population-level U.S. mortality data, we implement a stochastic ageing model to examine cohort trends and gender differences in frailty distributions for older Americans born 1865-1894. The stochastic ageing model, which draws from the fields of actuarial science and gerontology, is well established in the literature. The implications for public health insurance programs are also discussed. Our results suggest that, on average, women tend to be frailer than men at older ages, and they reveal useful insights about the magnitude of the male-female differential at critical age points. Specifically, we note that the frailty statuses of males and females are actually quite comparable from ages 65 to 80. Beyond age 80, however, frailty levels start to diverge considerably, implying that women move more quickly into worse states of health than men. Tracking average frailty by gender over 30 successive birth cohorts, we also find that frailty levels for both genders follow a distinct peak-and-trough pattern. For instance, frailty among 85-year-old American survivors increased in the years 1954-1963, decreased in 1964-1971, and again started to increase in 1972-1979. A number of factors may have accounted for these cohort differences, including differences in cohort life histories, disease prevalence, lifestyle and behavior, differential access to medical advances, and changes in environmental risk factors over time.
We conclude with a discussion of the implications of our findings for spending on long-term care programs within the broader health insurance system. Keywords: actuarial modeling, cohort analysis, frail elderly, health
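The multistate model described above can be caricatured as a discrete-time Markov chain over frailty states. The states, transition probabilities, and horizon below are entirely made up for illustration; the paper's model is fitted to U.S. mortality data and is not this simple.

```python
# Toy frailty states and a one-year transition matrix (rows sum to 1).
states = ["robust", "frail", "dead"]
P = [
    [0.85, 0.12, 0.03],  # from robust
    [0.00, 0.80, 0.20],  # from frail (no recovery, for simplicity)
    [0.00, 0.00, 1.00],  # dead is absorbing
]

def step(dist, P):
    """Propagate a state distribution through one year of ageing."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]   # cohort assumed robust at age 65
for _ in range(15):      # evolve the distribution to age 80
    dist = step(dist, P)
```

Tracking such a distribution by gender and birth cohort is what yields the frailty comparisons reported in the abstract.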
Procedia PDF Downloads 244
118 Regional Review of Outcome of Cervical Smears Reported with Cytological Features of Non Cervical Glandular Neoplasia
Authors: Uma Krishnamoorthy, Vivienne Beavers, Janet Marshall
Abstract:
Introduction: Cervical cytology showing features that raise the suspicion of non-cervical glandular neoplasia is reported as code 0 under the United Kingdom National Health Service Cervical Screening Programme (NHSCSP). As the suspicion concerns non-cervical neoplasia, the smear is reported as normal and the patient is informed that the cervical screening result is normal. The GP receives a copy of the results, which states, in small font within the text of the report, that further referral is indicated. Background: Several incidents of delayed diagnosis of endometrial cancer in Lancashire prompted this Northwest regional review, to enable an understanding of the underlying pathology outcomes of code zero smears, to raise awareness, and to review whether further action on the wording of smear results was indicated to prevent such delay. Methodology: All smears reported at the Manchester cytology centre, which processes cytology for the Lancashire population, from March 2013 to March 2014 were reviewed, and the histological diagnosis outcomes of women whose smears were reported as code zero were reviewed retrospectively. Results: The total number of smears reported by the cytology centre during this period was approximately 109,400. Of these, 49 reports were issued with result code 0. Among three-quarters (37) of the women with code zero smears (N=49), evidence of underlying pathology of non-cervical origin was confirmed. Of these, 73% (36) were due to endometrial pathology: 49% (24) endometrial carcinoma, 12% (6) polyp, 4% (2) atypical endometrial hyperplasia, 6% (3) endometrial hyperplasia without atypia, and 2% (1 case) adenomyosis; a further 2% (1 case) were due to ovarian adenocarcinoma. Conclusion: This review demonstrated that more than half (51%) of women with a code 0 smear report were diagnosed with an underlying carcinoma, and 75% had a confirmed underlying pathology contributing to the code 0 smear findings.
Recommendations and Action Plan: A local rapid-access referral and management pathway for this group of women was implemented in our unit as a result. The findings and pathway were shared with the other regional units served by the cytology centre through the Pan-Lancashire cervical screening board and through the cytology centre. Locally, the smear report wording was updated to include a rubber stamp/print in red bold letters stating that "URGENT REFERRAL TO GYNAECOLOGY IS INDICATED". Findings were also shared through the Pan-Lancashire board with the national cervical screening programme board, and revisions to the wording of code zero smear reports to highlight the need for urgent referral have now been agreed at the national level for implementation. Keywords: code zero smears, endometrial cancer, non cervical glandular neoplasia, ovarian cancer
Procedia PDF Downloads 297
117 An Examination of Factors Leading to Knowledge-Sharing Behavior of Sri Lankan Bankers
Authors: Eranga N. Somaratna, Pradeep Dharmadasa
Abstract:
In the current competitive environment, the factors leading to organizational success are not limited to investments of capital, labor, and raw material, but include the ability to innovate knowledge across all the members of an organization. However, knowledge on its own cannot provide organizations with its promised benefits unless it is shared, and organizations increasingly experience unsuccessful knowledge-sharing efforts. Against this backdrop, and given the dearth of research in this area in the South Asian context, the study set out to develop an understanding of the factors that influence knowledge-sharing behavior within an organizational framework, using widely accepted social psychology theories. The purpose of the article is to discover the determinants of knowledge-sharing intention and actual knowledge-sharing behavior of bank employees in Sri Lanka using an aggregate model. Knowledge-sharing intentions are widely discussed in the literature through the application of Ajzen's Theory of Planned Behavior (TPB) and Social Capital Theory (SCT) separately. Both theories are rich enough to explain the knowledge-sharing intention of workers, but each has limitations. The study therefore combines the TPB with SCT in developing its conceptual model. Data were collected through a self-administered paper-based questionnaire from 199 bank managers in 6 public and private banks of Sri Lanka, and the suggested research model was analyzed using Structural Equation Modelling (SEM). The study supported six of the nine hypotheses: Attitudes toward Knowledge-Sharing Behavior, Perceived Behavioral Control, Trust, Anticipated Reciprocal Relationships, and Actual Knowledge-Sharing Behavior were supported, while Organizational Climate, Sense of Self-Worth, and Anticipated Extrinsic Rewards were not, in determining knowledge-sharing intentions.
Furthermore, the study investigated the effect of bankers' demographic factors (age, gender, position, education, and experience) on actual knowledge-sharing behavior. However, the findings should be confirmed using a larger sample, as well as through cross-sectional studies. The results highlight the need for theoreticians to combine TPB and SCT in understanding knowledge workers' intentions and actual behavior, and for practitioners to focus on the perceptions and needs of the individual knowledge worker and on cultivating a culture of knowledge sharing in the organization for mutual benefit. Keywords: banks, employee behavior, knowledge management, knowledge sharing
Procedia PDF Downloads 132
116 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes almost as much time to data quality processes, while a data project without data quality awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because expectations differ according to the purpose of each data project. This is especially true for big data projects, which may involve many datasets and stakeholders and take a long time to discuss and define quality expectations and measurements. Therefore, this study aimed to develop meaningful indicators that describe the overall data quality of each dataset, enabling quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we found indicators that can directly measure the data within datasets. Third, each indicator was aggregated into a dimension using factor analysis. Next, the indicators and dimensions were weighted by the effort required for the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements comprised ten useful indicators; (2) for the data quality dimensions based on statistical characteristics, we found that the ten indicators can be reduced to 4 dimensions.
(3) For the developed composite indicator, we found that the SDQI can describe the overall quality of each dataset and can separate datasets into 3 levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. We can use the SDQI to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with the Agile method, by using the SDQI for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in the next sprint. Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
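The final aggregation step (weighted dimensions rolled up into one index with three quality levels) can be sketched as below. The dimension names, weights, and level cutoffs are assumptions for illustration; the study derives its dimensions and weights from factor analysis and preparation effort, not from these values.

```python
def composite_index(dim_scores, dim_weights):
    """Weighted average of dimension scores (each assumed normalized to [0, 1])."""
    total_w = sum(dim_weights.values())
    return sum(dim_scores[d] * dim_weights[d] for d in dim_scores) / total_w

def quality_level(score):
    """Map a composite score onto three levels (cutoffs are illustrative)."""
    if score >= 0.8:
        return "Good Quality"
    if score >= 0.5:
        return "Acceptable Quality"
    return "Poor Quality"

# Hypothetical dimension scores for one dataset, and effort-based weights
dims = {"completeness": 0.9, "validity": 0.7, "consistency": 0.8, "uniqueness": 0.6}
weights = {"completeness": 2.0, "validity": 1.0, "consistency": 1.0, "uniqueness": 1.0}

sdqi = composite_index(dims, weights)
level = quality_level(sdqi)
```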
Procedia PDF Downloads 139
115 Experimental Study of Vibration Isolators Made of Expanded Cork Agglomerate
Authors: S. Dias, A. Tadeu, J. Antonio, F. Pedro, C. Serra
Abstract:
The goal of the present work is to experimentally evaluate the feasibility of using vibration isolators made of expanded cork agglomerate. Even though this material, also known as insulation cork board (ICB), has mainly been studied for thermal and acoustic insulation purposes, it has strong potential for use in vibration isolation. However, the adequate design of expanded cork block vibration isolators will depend on several factors, such as excitation frequency, static load conditions, and the intrinsic dynamic behavior of the material. In this study, transmissibility tests for different static and dynamic loading conditions were performed in order to characterize the material. Since the material's physical properties (density and thickness) can influence the vibro-isolation performance of the blocks, this study covered four mass density ranges and four block thicknesses. A total of 72 expanded cork agglomerate specimens were tested. The test apparatus comprises a vibration exciter connected to an excitation mass that holds the test specimen. The test specimens under characterization were loaded successively with steel plates in order to obtain results for different masses. An accelerometer was placed at the top of these masses and at the base of the excitation mass. The test was performed over a defined frequency range, and the amplitude registered by the accelerometers was recorded in the time domain. To each of the signals (signal 1, the vibration of the excitation mass; signal 2, the vibration of the loading mass) a fast Fourier transform (FFT) was applied in order to obtain the frequency domain response. For each of the frequency domain signals, the maximum amplitude reached was registered. The ratio between the amplitude (acceleration) of signal 2 and the amplitude of signal 1 allows the calculation of the transmissibility at each frequency. Repeating this procedure allowed us to plot a transmissibility curve over a certain frequency range.
A number of transmissibility experiments were performed to assess the influence of changing the mass density and thickness of the expanded cork blocks and of the experimental conditions (static load and frequency of excitation). The experimental transmissibility tests performed in this study showed that expanded cork agglomerate blocks are a good option for mitigating vibrations. It was concluded that specimens with lower mass density and larger thickness lead to better performance, with higher vibration isolation and a larger range of isolated frequencies. In conclusion, the study of the performance of expanded cork agglomerate blocks presented herein will allow for a more efficient application of expanded cork vibration isolators. This is particularly relevant since this material is a more sustainable alternative to other commonly used, non-environmentally friendly products, such as rubber. Keywords: expanded cork agglomerate, insulation cork board, transmissibility tests, sustainable materials, vibration isolators
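The FFT-based transmissibility calculation described above can be sketched with synthetic signals (not the authors' measurements): two sinusoids stand in for the excitation-mass and loading-mass accelerations, and the transmissibility is the ratio of their peak spectral amplitudes. The sampling rate, excitation frequency, and amplitudes are assumed values.

```python
import numpy as np

fs = 1000.0                    # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
f_exc = 50.0                   # excitation frequency (Hz), assumed

sig1 = 1.0 * np.sin(2 * np.pi * f_exc * t)  # excitation-mass acceleration
sig2 = 0.4 * np.sin(2 * np.pi * f_exc * t)  # loading-mass acceleration (attenuated)

def peak_amplitude(signal, fs):
    """Maximum single-sided FFT amplitude and the frequency at which it occurs."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    i = spec.argmax()
    return spec[i], freqs[i]

a1, f1 = peak_amplitude(sig1, fs)
a2, f2 = peak_amplitude(sig2, fs)
transmissibility = a2 / a1     # < 1 means the specimen isolates vibration
```

Sweeping `f_exc` and repeating this ratio is what produces the transmissibility curve discussed in the abstract.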
Procedia PDF Downloads 332
114 Bank Liquidity Creation in a Dual Banking System: An Empirical Investigation
Authors: Lianne M. Q. Lee, Mohammed Sharaf Shaiban
Abstract:
The importance of bank liquidity management took center stage as policy makers promoted a more resilient global banking system after the market turmoil of 2007. The growing recognition of Islamic banks' function of intermediating funds in the economy warrants investigation of their balance sheet structure, which is distinct from that of their conventional counterparts. Given that asymmetric risk transformation is inevitable, Islamic banks need to identify the liquidity risk within their distinctive balance sheet structure. Thus, there is a strong need to quantify and assess the liquidity position to ensure the proper functioning of a financial institution. It is vital to measure bank liquidity because liquid banks face less liquidity risk. We examine this issue by using two alternative quantitative measures of liquidity creation, "cat fat" and "cat nonfat", constructed by Berger and Bouwman (2009). "Cat fat" measures all on-balance-sheet items plus off-balance-sheet items, whilst the latter measures only on-balance-sheet items. Liquidity creation is measured over the period 2007-2014 in 14 countries where Islamic and conventional commercial banks coexist, and also separately by bank size class, as empirical studies have shown that liquidity creation varies by bank size. An interesting and important finding is that all size classes of Islamic banks have, on average, increased their creation of aggregate liquidity in real dollar terms over the years for both liquidity creation measures, especially large banks, indicating that Islamic banks actually generate more liquidity for the economy than their conventional counterparts, including from off-balance-sheet items. The liquidity creation from off-balance-sheet items by conventional banks may have been affected by the global financial crisis, when derivatives markets were severely hit.
The results also suggest that Islamic banks have higher volumes of assets and deposits, and that borrowing and bond issuance are lower in Islamic banks than in conventional banks because most such products are interest-based. As Islamic banks appear to create more liquidity than conventional banks under both measures, this indicates that the development of Islamic banking has been significant over the decades since its inception. This finding is encouraging as, despite Islamic banking's overall size, it represents growth opportunities for these countries. Keywords: financial institution, liquidity creation, liquidity risk, policy and regulation
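A sketch of the Berger-Bouwman weighting behind the two measures, as I understand it: illiquid assets and liquid liabilities create liquidity (weight +0.5), liquid assets and illiquid liabilities/equity destroy it (-0.5), semiliquid items are neutral (0), and "cat fat" additionally weights illiquid off-balance-sheet items at +0.5. The item classification and the dollar figures below are illustrative assumptions, not data from the study.

```python
# Weights per balance-sheet category (Berger & Bouwman 2009 scheme, as summarized above)
WEIGHTS = {
    "illiquid_assets": 0.5, "semiliquid_assets": 0.0, "liquid_assets": -0.5,
    "liquid_liabilities": 0.5, "semiliquid_liabilities": 0.0,
    "illiquid_liabilities_equity": -0.5,
    "illiquid_offbalance": 0.5,  # only counted under "cat fat"
}

def liquidity_creation(bank, include_offbalance=True):
    """Dollar liquidity creation: 'cat fat' if include_offbalance, else 'cat nonfat'."""
    lc = sum(WEIGHTS[k] * v for k, v in bank.items() if k != "illiquid_offbalance")
    if include_offbalance:
        lc += WEIGHTS["illiquid_offbalance"] * bank.get("illiquid_offbalance", 0.0)
    return lc

# Made-up bank, figures in $ millions
bank = {"illiquid_assets": 600, "semiliquid_assets": 200, "liquid_assets": 200,
        "liquid_liabilities": 500, "semiliquid_liabilities": 300,
        "illiquid_liabilities_equity": 200, "illiquid_offbalance": 100}

cat_fat = liquidity_creation(bank, include_offbalance=True)
cat_nonfat = liquidity_creation(bank, include_offbalance=False)
```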
Procedia PDF Downloads 349
113 Is Electricity Consumption Stationary in Turkey?
Authors: Eyup Dogan
Abstract:
The number of research articles analyzing the integration properties of energy variables has increased rapidly in the energy literature for about a decade. The stochastic behaviors of energy variables are worth knowing for several reasons. For instance, national policies to conserve or promote energy consumption, which should be taken as shocks to energy consumption, will have only transitory effects if energy consumption is found to be stationary in a country. Furthermore, it is also important to know the order of integration in order to employ an appropriate econometric model. Despite being an important subject for applied energy (economics) and having a huge volume of studies, several known limitations still exist in the literature. For example, many of the studies use aggregate energy consumption and national-level data. In addition, a large part of the literature consists of either multi-country studies or studies focusing solely on the U.S. This is the first study in the literature to consider a form of energy consumption by sector at the sub-national level. This research study aims at investigating the unit root properties of electricity consumption for 12 regions of Turkey by four sectors, in addition to total electricity consumption, for the purpose of filling the mentioned gaps in the literature. In this regard, we analyze the stationarity properties of 60 cases. Because the use of multiple unit root tests makes the results robust and consistent, we apply the Dickey-Fuller unit root test based on Generalized Least Squares regression (DFGLS), the Phillips-Perron unit root test (PP), and the Zivot-Andrews unit root test with one endogenous structural break (ZA). The main finding of this study is that electricity consumption is trend stationary in 7 cases according to DFGLS and PP, whereas it is a stationary process in 12 cases when we take structural change into account by applying ZA.
Thus, shocks to electricity consumption have transitory effects in those cases; namely, agriculture in region 1, region 4, and region 7; industrial in region 5, region 8, region 9, region 10, and region 11; business in region 4, region 7, and region 9; and total electricity consumption in region 11. Regarding policy implications, policies to decrease or stimulate the use of electricity have a long-run impact on electricity consumption in 80% of cases in Turkey, given that 48 cases are non-stationary processes. On the other hand, the past behavior of electricity consumption can be used to predict its future behavior in only 12 cases. Keywords: unit root, electricity consumption, sectoral data, subnational data
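As a minimal illustration of the stationary-vs-unit-root logic above (not the study's DFGLS, PP, or ZA procedures), the sketch below runs the basic Dickey-Fuller regression dy_t = a + b*y_{t-1} + e_t on a synthetic stationary AR(1) series; a t-statistic on b well below the Dickey-Fuller critical values points toward stationarity, i.e. transitory shocks.

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic on the lagged level in the regression dy_t = a + b*y_{t-1} + e_t."""
    dy = np.diff(y)
    ylag = y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)           # coefficient covariance
    return beta[1] / np.sqrt(cov[1, 1])

# Synthetic stationary AR(1) series (phi = 0.5) standing in for a "transitory shocks" case
rng = np.random.default_rng(42)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + rng.normal()

t_stat = dickey_fuller_t(y)  # strongly negative here, consistent with stationarity
```

Note that the test's null distribution is non-standard (Dickey-Fuller, not Student's t), which is why the paper relies on tabulated critical values for DFGLS, PP, and ZA.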
Procedia PDF Downloads 410
112 Income and Factor Analysis of Small Scale Broiler Production in Imo State, Nigeria
Authors: Ubon Asuquo Essien, Okwudili Bismark Ibeagwa, Daberechi Peace Ubabuko
Abstract:
The broiler poultry subsector is dominated by small-scale production with low aggregate output. The high cost of inputs currently experienced in Nigeria tends to aggravate the situation; hence many broiler farmers struggle to break even. This study was designed to examine income and input factors in small-scale deep-litter broiler production in Imo State, Nigeria. Specifically, the study examined the socio-economic characteristics of small-scale poultry farmers producing broilers on deep litter; estimated the costs and returns of broiler production in the area; analyzed input factors in broiler production; and examined the marketability age and profitability of the enterprise. A multi-stage sampling technique was adopted to select 60 small-scale broiler farmers using the deep-litter system from 6 communities, with data collected through a structured questionnaire. The socioeconomic characteristics of the broiler farmers and the profitability and marketability age of the birds were described using descriptive statistical tools such as frequencies, means, and percentages. Gross margin analysis was used to analyze the costs of and returns to broiler production, while a Cobb-Douglas production function was employed to analyze input factors. The results revealed that the costs of feed (P<0.1), deep-litter material (P<0.05), and medication (P<0.05) had a significant positive relationship with the gross return of broiler farmers in the study area, while the costs of labour, fuel, and day-old chicks were not significant. Furthermore, the gross profit margin was 80.7% for farmers who market their broilers in the 8th week of rearing, and 78.7% and 60.8% for farmers who market in the 10th and 12th weeks of rearing, respectively. The business is therefore profitable, but to varying degrees.
Government and development partners should make deliberate efforts to curb the current rise in the prices of poultry feed, drugs, and the timber materials used as bedding, so as to widen the profit margin and encourage more farmers to enter the business. The farmers equally need more technical assistance from extension agents with regard to timely and profitable marketing.
Keywords: broilers, factor analysis, income, small scale
Procedia PDF Downloads 80
111 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model
Authors: Mohammad Zamani, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at times of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. The two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. Three types of computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. The k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used to simulate the flow. In addition, two wall treatments, the standard and the non-equilibrium wall function, were investigated to find the best wall function. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function. Thus, the standard k-ε model with the standard wall function was preferred for the remaining simulations. The comparison criterion in this study is the trajectory profile of the water jet.
The results show that the fine computational grid, a velocity-inlet condition at the flow inlet boundary, and a pressure-outlet condition at the boundaries in contact with air provide the best possible results. The standard wall function is chosen for the wall treatment, and the standard k-ε turbulence model agrees most consistently with the experimental results. As the jet approaches the end of the basin, the differences between the numerical and experimental results increase. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results in the upper and lower nappe profiles. For the water level over the crest and the discharge, the numerical results are in good agreement with the experimental ones at low water levels, but the difference between the numerical and experimental discharge grows as the water level increases. For the flow coefficient, the difference between the numerical and experimental results increases as the P/R ratio decreases.
Keywords: circular vertical, spillway, numerical model, boundary conditions
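For the free (weir) flow regime, the crest discharge of a morning-glory spillway is conventionally written as Q = C0·(2πR)·H^1.5, with C0 an empirical flow coefficient. A small sketch of backing C0 out of a measured discharge follows; this is the standard textbook relation, not necessarily the exact formulation used in the paper, and the numbers are purely illustrative.

```python
import math

def crest_discharge(c0, radius, head):
    """Free (weir) flow over a circular morning-glory crest:
    Q = C0 * (2*pi*R) * H**1.5. Empirically, C0 drops as H/R grows
    and the inlet begins to submerge."""
    return c0 * (2 * math.pi * radius) * head ** 1.5

def flow_coefficient(q, radius, head):
    """Back C0 out of a measured discharge, as one does when comparing
    numerical and experimental flow coefficients."""
    return q / ((2 * math.pi * radius) * head ** 1.5)

q = crest_discharge(2.0, 1.5, 0.3)   # illustrative C0, R (m), H (m)
```

Comparing the coefficient recovered from simulated discharges against the one recovered from measured discharges, at matched R and H, is one way to quantify the P/R-dependent discrepancy the abstract reports.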
Procedia PDF Downloads 86
110 Nuclear Resistance Movements: Case Study of India
Authors: Shivani Yadav
Abstract:
The paper illustrates the dynamics of nuclear resistance movements in India and how people's power rises in response to the subversion of justice and the suppression of human rights. The need to democratize nuclear policy runs implicit through the demands of the people protesting against nuclear programmes. The paper analyses the rationale behind developing nuclear energy according to the mainstream development model adopted by the state, and discusses whether the prevalent nuclear discourse includes people's ambitions and addresses local concerns. The nuclear movements across India comprise two types of actors: the local population and the urban interlocutors. The first, the local population, comprises the people residing in the vicinity of a nuclear site who are affected by its construction, presence, and operation. They have immediate concerns about nuclear energy projects but also take an ideological stand against producing nuclear energy. The other type of actor, the urban interlocutors, are the intellectuals and nuclear activists who have a principled stand against nuclear energy and help to aggregate the aims and goals of the movement on various platforms. The paper focuses on the nuclear resistance movements at five sites in India: Koodankulam (Tamil Nadu), Jaitapur (Maharashtra), Haripur (West Bengal), Mithivirdi (Gujarat), and Gorakhpur (Haryana). The origin, development, role of major actors, and mass media coverage of each of these movements are discussed in depth. Major observations from the Indian case include: first, nuclear policy discussions in India are confined to elite circles; second, concepts like national security and national interest are used to suppress dissent against mainstream policies; and third, India's energy policies focus on economic concerns while ignoring the human implications of such policies.
In conclusion, the paper observes that the anti-nuclear movements question not just the feasibility of nuclear power but also its exclusionary nature when it comes to people's participation in policy making, its endangering of the ecology, violations of human rights, etc. The character of these protests is non-violent, with an aim to produce more inclusive policy debates and democratic dialogues.
Keywords: anti-nuclear movements, Koodankulam nuclear power plant, non-violent resistance, nuclear resistance movements, social movements
Procedia PDF Downloads 147
109 Open Source Cloud Managed Enterprise WiFi
Authors: James Skon, Irina Beshentseva, Michelle Polak
Abstract:
WiFi solutions come in two major classes. Small Office/Home Office (SOHO) WiFi is characterized by inexpensive WiFi routers with one or two service set identifiers (SSIDs) and a single shared passphrase. These access points provide no significant user management or monitoring, and no aggregation of monitoring and control across multiple routers. The other class is managed enterprise WiFi solutions, which involve expensive Access Points (APs) along with (also costly) local or cloud-based management components. These solutions typically provide portal-based login, per-user virtual local area networks (VLANs), and sophisticated monitoring and control across a large group of APs. The cost of deploying and managing such enterprise solutions is typically about tenfold that of inexpensive consumer APs. Low-revenue organizations, such as schools, non-profits, non-governmental organizations (NGOs), small businesses, and even homes cannot easily afford quality enterprise WiFi solutions, though they may need to provide quality WiFi access to their population. Relying on the available lower-cost WiFi solutions can significantly reduce their ability to provide reliable, secure network access. This project explored and created a new approach to providing secure managed enterprise WiFi based on low-cost hardware combined with both new and existing (but modified) open source software. The solution provides a cloud-based management interface that allows organizations to aggregate the configuration and management of small, medium, and large WiFi deployments. It utilizes a novel approach to user management, giving each user a unique passphrase. It provides unlimited SSIDs across an unlimited number of WiFi zones, and the ability to place each user (and all their devices) on their own VLAN. With proper configuration, it can even provide user-local services.
It also allows users' usage and quality of service to be monitored, and users to be added, enabled, and disabled at will. As noted above, the ultimate goal is to free organizations with limited resources from the expense of commercial enterprise WiFi, while providing them with most of the qualities of such a managed solution at a fraction of the cost.
Keywords: wifi, enterprise, cloud, managed
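A minimal sketch of the per-user passphrase idea, similar in spirit to vendor "dynamic PSK" schemes: the controller keys a table by passphrase, so the credential a client authenticates with simultaneously identifies the user and selects the VLAN, and disabling a user is a single flag flip. The table layout and names here are hypothetical, not the project's actual data model.

```python
# Hypothetical per-user credential table: passphrase -> (user, vlan, enabled).
users = {
    "k3!v9-blue-otter": ("alice", 101, True),
    "x7$q2-green-heron": ("bob", 102, False),   # disabled account
}

def authorize(passphrase):
    """Return (user, vlan) if the passphrase maps to an enabled user,
    else None. An unknown passphrase and a disabled user look the same
    to the client, which keeps account state unobservable."""
    entry = users.get(passphrase)
    if entry is None or not entry[2]:
        return None
    return entry[0], entry[1]
```

Because every device a user owns shares that user's passphrase, all of the user's devices land on the same per-user VLAN without any portal interaction.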
Procedia PDF Downloads 97
108 Using Biofunctool® Index to Assess Soil Quality after Eight Years of Conservation Agriculture in New Caledonia
Authors: Remy Kulagowski, Tobias Sturm, Audrey Leopold, Aurelie Metay, Josephine Peigne, Alexis Thoumazeau, Alain Brauman, Bruno Fogliani, Florent Tivet
Abstract:
A major challenge for agriculture is to enhance productivity while limiting the impact on the environment. Conservation agriculture (CA) is one strategy whereby both sustainability and productivity can be achieved by preserving and improving soil quality. Soils provide and regulate a large number of ecosystem services (ES), such as agricultural productivity and climate change adaptation and mitigation. The aim of this study is to assess the impacts of contrasting CA crop management on soil functions for maize (Zea mays L.) cultivation in an eight-year field experiment (2010-2018). The study included two CA practices, direct seeding into dead mulch (DM) and living mulch (LM), and conventional plough-based tillage (CT) on a fluvisol in New Caledonia (a French archipelago in the South Pacific). In 2018, the soil quality of the cropping systems was evaluated with the Biofunctool® set of indicators, which consists of twelve integrative, in-field, low-tech indicators assessing the biological, physical, and chemical properties of soils. The main soil functions were evaluated, including (i) carbon transformation, (ii) structure maintenance, and (iii) nutrient cycling, in the first ten centimeters of soil. The results showed significantly higher scores for the soil structure maintenance function (e.g., aggregate stability, water infiltration) and the carbon transformation function (e.g., soil respiration, labile carbon) under CA in DM and LM compared with CT. The carbon transformation index score was higher in DM than in LM. However, no significant effect of cropping system was observed on nutrient cycling (i.e., nitrogen and phosphorus). In conclusion, the aggregated synthetic scores of soil multi-functions evaluated with Biofunctool® demonstrate that CA cropping systems lead to better soil functioning.
Further analysis of the results together with the agronomic performance of the soil-crop systems would allow a better understanding of the links between soil functioning and the production ES of CA.
Keywords: conservation agriculture, cropping systems, ecosystem services, soil functions
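The aggregation step can be sketched as min-max normalising each indicator across treatments and then averaging within each soil-function group. This is a simplified illustration with made-up numbers; Biofunctool®'s actual scoring procedure may differ.

```python
import numpy as np

def aggregate_scores(indicators, groups):
    """Min-max normalise each indicator column across treatments (rows),
    then average the normalised columns within each soil-function group."""
    x = np.asarray(indicators, dtype=float)
    span = x.max(axis=0) - x.min(axis=0)
    norm = (x - x.min(axis=0)) / np.where(span == 0, 1, span)
    return {fn: norm[:, cols].mean(axis=1) for fn, cols in groups.items()}

# Rows: CT, DM, LM. Columns (all made-up values): soil respiration,
# labile carbon, water infiltration, aggregate stability.
data = [[1.0, 0.8, 10.0, 40.0],
        [2.5, 1.6, 25.0, 70.0],
        [2.0, 1.9, 22.0, 65.0]]
scores = aggregate_scores(data, {"carbon": [0, 1], "structure": [2, 3]})
```

With these illustrative numbers, both CA treatments outscore CT on both functions, and DM edges LM on carbon transformation, mirroring the pattern the study reports.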
Procedia PDF Downloads 156
107 Delineation of Green Infrastructure Buffer Areas with a Simulated Annealing: Consideration of Ecosystem Services Trade-Offs in the Objective Function
Authors: Andres Manuel Garcia Lamparte, Rocio Losada Iglesias, Marcos Boullón Magan, David Miranda Barros
Abstract:
The biodiversity strategy of the European Union for 2030 identifies climate change as one of the key drivers of biodiversity loss and considers green infrastructure one of the solutions to this problem. Along these lines, the European Commission has developed a green infrastructure strategy which commits member states to consider green infrastructure in their territorial planning. This green infrastructure is aimed at granting the provision of a wide range of ecosystem services to support biodiversity and human well-being by countering the effects of climate change. Yet few tools are available to delimit green infrastructure. The available ones consider the potential of the territory to provide ecosystem services. However, these methods usually aggregate several maps of ecosystem service potential without considering possible trade-offs, which can lead to excluding areas with a high potential for providing some ecosystem services but many trade-offs with others. To tackle this problem, a methodology is proposed that incorporates ecosystem service trade-offs into the objective function of a simulated annealing algorithm aimed at delimiting multifunctional green infrastructure buffer areas. To this end, the provision potential maps of the regulating ecosystem services considered are clustered into groups, so that ecosystem services that create trade-offs with one another are kept in separate groups. The normalized provision potential maps of the ecosystem services in each group are added to obtain a potential map per group, which is normalized again. The potential maps for the groups are then combined into a raster map that holds, in each cell, the highest provision potential value across groups. The combined map is then used in the objective function of the simulated annealing algorithm. The algorithm is run both using the proposed methodology and considering the ecosystem services individually.
The results are analyzed with spatial statistics and landscape metrics to check the number of ecosystem services that the delimited areas produce, as well as their regularity and compactness. The proposed methodology is observed to increase the number of ecosystem services produced by the delimited areas, improving their multifunctionality and increasing their effectiveness in preventing climate change impacts.
Keywords: ecosystem services trade-offs, green infrastructure delineation, multifunctional buffer areas, climate change
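The map-combination step feeding the annealer can be sketched as follows: normalise each ecosystem service potential map, sum within each trade-off-free group, renormalise each group sum, and keep the cell-wise maximum across groups. This is a simplified illustration; the grouping and normalisation details of the actual method may differ.

```python
import numpy as np

def combine_potentials(maps, groups):
    """Normalise each ES potential map to [0, 1], sum the maps inside each
    trade-off-free group, renormalise each group sum, then take the
    cell-wise maximum across groups (the map used in the objective)."""
    def norm(a):
        a = np.asarray(a, dtype=float)
        span = a.max() - a.min()
        return (a - a.min()) / (span if span else 1.0)
    group_maps = []
    for cols in groups:
        s = sum(norm(maps[i]) for i in cols)
        group_maps.append(norm(s))
    return np.maximum.reduce(group_maps)

# Three toy 2x2 potential maps; services 0 and 1 are trade-off-free
# (group A), service 2 stands alone (group B).
maps = [[[0, 1], [2, 3]], [[3, 0], [1, 0]], [[0, 0], [0, 4]]]
objective_map = combine_potentials(maps, groups=[[0, 1], [2]])
```

Because each group is renormalised before the maximum is taken, a cell that is excellent for one group of compatible services is not penalised for being poor in a conflicting group, which is the point of the trade-off handling.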
Procedia PDF Downloads 174
106 Municipal Asset Management Planning 2.0 – A New Framework for Policy and Program Design in Ontario
Authors: Scott R. Butler
Abstract:
Ontario, Canada's largest province, is in the midst of an interesting experiment in mandated asset management planning for local governments. At the beginning of 2021, Ontario's 444 municipalities were responsible for managing 302,864 lane kilometres of roads with a replacement cost of $97.545 billion CDN. Roadways are by far the most complex, expensive, and extensive assets that a municipality is responsible for overseeing. Since adopting Ontario Regulation 588/17: Asset Management Planning for Municipal Infrastructure in 2017, the provincial government has established prescriptions for local road authorities regarding asset categories and the levels of service being provided. The regulation further stipulates that asset data such as extent, condition, and life cycle costing are to be captured in a manner compliant with qualitative descriptions and technical metrics. The Ontario Good Roads Association undertook an exercise to aggregate the road-related data contained in the 444 asset management plans that municipalities have filed with the provincial government. This analysis concluded that Ontario municipal roadways collectively carry $34.7 billion CDN in deferred maintenance. The ill state of repair of Ontario's municipal roads has lasting implications for the province's economic competitiveness and has garnered considerable political attention. Municipal efforts to address the maintenance backlog are stymied by the extremely limited fiscal parameters within which municipalities must operate in Ontario. Further exacerbating the problem are provincially designed programs that are ineffective, administratively burdensome, and not necessarily aligned with local priorities or strategies.
This paper addresses how municipal asset management plans, and more specifically the data contained in these plans, can be used to design innovative policy frameworks, flexible funding programs, and new levels of service that respond to these funding challenges, as well as to emerging issues such as local economic development and climate change. Fully unlocking the potential of Ontario Regulation 588/17 will require a resolute commitment to data standardization and horizontal collaboration between municipalities within regions.
Keywords: transportation, municipal asset management, subnational policy design, subnational funding program design
Procedia PDF Downloads 94
105 Assessment of the Implications of Regional Transport and Local Emission Sources for Mitigating Particulate Matter in Thailand
Authors: Ruchirek Ratchaburi, W. Kevin Hicks, Christopher S. Malley, Lisa D. Emberson
Abstract:
Air pollution problems in Thailand have improved over the last few decades, but in some areas concentrations of coarse particulate matter (PM₁₀) remain above health and regulatory guidelines. It is therefore useful to investigate how PM₁₀ varies across Thailand, what conditions cause this variation, and how PM₁₀ concentrations could be reduced. This research uses data collected by the Thailand Pollution Control Department (PCD) between 2011 and 2015 at 17 monitoring sites located across 12 provinces to assess PM₁₀ concentrations and the conditions that lead to different levels of pollution. This is achieved by exploring air mass pathways using trajectory analysis, in conjunction with the monitoring data, to understand the contributions of different months, hours of the day, and source regions to annual PM₁₀ concentrations in Thailand. A focus is placed on locations that exceed the national standard for the protection of human health. The analysis shows how this approach can be used to explore the influence of biomass burning on annual average PM₁₀ concentrations and the difference in air pollution conditions between Northern and Southern Thailand. The results demonstrate the substantial contribution that open biomass burning from agriculture and forest fires in Thailand and neighboring countries makes to annual average PM₁₀ concentrations. The analysis of PM₁₀ measurements at monitoring sites in Northern Thailand shows that, in general, high concentrations tend to occur in March, and that these particularly high monthly concentrations make a substantial contribution to the overall annual average. In 2011, a >75% reduction in the extent of biomass burning in Northern Thailand and in neighboring countries resulted in a substantial reduction not only in the magnitude and frequency of peak PM₁₀ concentrations but also in annual average PM₁₀ concentrations at sites across Northern Thailand.
In Southern Thailand, the annual average PM₁₀ concentrations for individual years between 2011 and 2015 did not exceed the human health standard at any site. The highest peak concentrations in Southern Thailand were much lower than in Northern Thailand at all sites. The peaks in Southern Thailand generally occurred between June and October and were associated with air mass back trajectories that spent a substantial proportion of time over the sea, Indonesia, Malaysia, and Thailand prior to arriving at the monitoring sites. The results show that reducing emissions from biomass burning and forest fires requires action on national and international scales, in both Thailand and neighboring countries; such action could help ensure compliance with Thailand's air quality standards.
Keywords: annual average concentration, long-range transport, open biomass burning, particulate matter
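The leverage a single burning-season peak has on the annual average can be illustrated with made-up monthly means; the numbers below are hypothetical, not the PCD measurements.

```python
import numpy as np

# Hypothetical monthly mean PM10 (ug/m3) at a northern site: a March
# burning peak dominates an otherwise moderate year.
monthly = np.array([45, 60, 160, 90, 40, 30, 25, 25, 30, 35, 40, 45],
                   dtype=float)
annual_mean = monthly.mean()
contribution = monthly / monthly.sum() * 100   # % share of each month

# Effect of a 75% cut in the March excess over an assumed 40 ug/m3
# non-burning baseline (both figures illustrative):
baseline = 40.0
reduced = monthly.copy()
reduced[2] = baseline + 0.25 * (monthly[2] - baseline)
annual_mean_reduced = reduced.mean()
```

In this toy year, March alone supplies roughly a quarter of the annual total, so trimming the burning peak pulls the annual average down noticeably, which is the mechanism behind the 2011 result described above.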
Procedia PDF Downloads 182
104 Measuring Corruption from Public Justifications: Insights from the Brazilian Anti-Corruption Agency
Authors: Ana Luiza Aranha
Abstract:
This paper contributes to discussions that treat corruption as a challenge to the establishment of more democratically inclusive societies in Latin America. It argues for an intrinsic connection between democratic principles and corruption control: just forms of democratic life are only possible if accountability institutions are able to control corruption, and thereby control the political exclusions it brings. Departing from a non-trivial approach to corruption, and recognizing a gap in democratic theory's treatment of the phenomenon, corruption is understood as the breakdown of the democratic inclusive rule, whereby political decisions are made (and actions taken) in spite of those potentially affected by them. Based on this idea, the paper proposes a new way of measuring corruption, moving away from the usual aggregate measures, such as the Corruption Perceptions Index, and from case studies of corruption scandals. The main argument is that corruption is intrinsically connected with the ability to be accountable and to provide public justification for one's political conduct. The point advocated is that corruption involves a dimension of political exclusion: it generates a private benefit which is, from a democratic point of view, illegitimate, since it benefits some at the expense of the decisions made by the political community. Corruption is thus a form of exclusion based on deception and opacity; for corruption, there is no plausible justification. Empirically, the paper uses the audit reports produced by the Brazilian anti-corruption agency (the CGU, Office of the Comptroller General) in its Inspections from Public Lotteries Program to exemplify how this definition can be used to separate corruption cases from mismanagement irregularities.
On one side there is poor management and inefficiency; on the other, corruption, defined by the implausibility of public justification, because the public officials would have to publicize illegitimate privileges and undue advantages. CGU reports provide the justifications given by public officials for the irregularities found, as well as the control agency's acceptance or rejection of those justifications. The analysis of this dialogue between public officials and control agents makes it possible to divide the irregularities into those that can be publicly justified and those that cannot. By holding public officials accountable for their actions and making them responsible for the exclusions they may cause (such as corruption), accountability institutions fulfil an important role in reinforcing and empowering democracy and its basic inclusive condition.
Keywords: accountability, Brazil, corruption, democracy
Procedia PDF Downloads 259
103 The Effects of Stokes' Drag, Electrostatic Force and Charge on Penetration of Nanoparticles through N95 Respirators
Authors: Jacob Schwartz, Maxim Durach, Aniruddha Mitra, Abbas Rashidi, Glen Sage, Atin Adhikari
Abstract:
NIOSH (National Institute for Occupational Safety and Health) approved N95 respirators are commonly used by workers on construction sites, where large amounts of dust, both electrostatically charged and uncharged, are produced by sawing, grinding, blasting, welding, etc. A significant portion of the airborne particles on construction sites can be nanoparticles, created alongside coarse particles. Penetration of particles through the masks may differ depending on the size and charge of the individual particle. In field experiments relevant to this study, we found that nanoparticles in the middle of the size range penetrate more frequently than smaller or larger nanoparticles. For example, penetration percentages of 11.5 – 27.4 nm nanoparticles into a sealed N95 respirator on a manikin head ranged from 0.59 to 6.59%, whereas those of 36.5 – 86.6 nm nanoparticles ranged from 7.34 to 16.04%. The possible causes of this increased penetration of mid-size nanoparticles through mask filters have not yet been explored. The objective of this study is to identify the causes of this unusual behavior of mid-size nanoparticles. We have considered such physical factors as the Boltzmann distribution of the particles in thermal equilibrium with the air, the kinetic energy of the particles at impact on the mask, Stokes' drag force, and the electrostatic forces in the mask stopping the particles. When the particles collide with the mask, only those with enough kinetic energy to overcome the energy lost to the electrostatic forces and Stokes' drag in the mask can pass through.
To understand this process, the following assumptions were made: (1) the effect of Stokes' drag depends on the particle's velocity at entry into the mask; (2) the electrostatic force is proportional to the charge on the particle, which in turn is proportional to the particle's surface area; (3) the general dependence on electrostatic charge and thickness means that the stronger the electrostatic resistance in the mask and the thicker the mask's fiber layers, the lower the penetration, which is a sensible conclusion. In sampling situations where one mask was soaked in alcohol, eliminating the electrostatic interaction, penetration in the mid-range was much larger than for the same mask with the electrostatic interaction. The smaller nanoparticles showed almost zero penetration, most likely because of their small kinetic energy, while the larger nanoparticles showed almost negligible penetration, most likely due to their interaction with their own drag force. Without the electrostatic force, the penetrating fraction grows for larger particles; with it, the fraction for larger particles goes down, so the diminished penetration of larger particles should be due to increased electrostatic repulsion, possibly because their larger surface area carries a larger charge on average. We also explored the effect of ambient temperature on nanoparticle penetration and determined that the dependence of penetration on temperature is weak over the measured range of 37-42°C, since the factor only changes from 3.17×10⁻³ K⁻¹ to 3.22×10⁻³ K⁻¹.
Keywords: respiratory protection, industrial hygiene, aerosol, electrostatic force
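The quoted range of the temperature factor matches 1/T over 37-42 °C, consistent with the 1/T appearing in a Boltzmann exponent exp(-E/(kT)); reading the factor as 1/T is our interpretation of the abstract, checked numerically below.

```python
def inverse_temperature(celsius):
    """The 1/T factor (in K^-1) in a Boltzmann exponent exp(-E/(k*T))."""
    return 1.0 / (celsius + 273.15)

# Over the measured 37-42 C range, 1/T barely moves:
low = inverse_temperature(42.0)    # ~3.17e-3 K^-1
high = inverse_temperature(37.0)   # ~3.22e-3 K^-1
relative_change = (high - low) / low   # under 2% across the whole range
```

A sub-2% change in the exponent's temperature factor explains why the measured penetration shows only a weak temperature dependence.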
Procedia PDF Downloads 194
102 Identification of Phenolic Compounds and Study of the Antimicrobial Property of Elaeocarpus Ganitrus Fruits
Authors: Velvizhi Dharmalingam, Rajalaksmi Ramalingam, Rekha Prabhu, Ilavarasan Raju
Abstract:
Background: The use of herbal products in various therapeutic regimens has increased tremendously in developing countries. Elaeocarpus ganitrus (Rudraksha) is a broad-leaved tree belonging to the family Elaeocarpaceae, found in tropical and subtropical areas. It is popular in indigenous systems of medicine like Ayurveda, Siddha, and Unani. According to Ayurvedic medicine, Rudraksha is used in the management of blood pressure, asthma, mental disorders, diabetes, gynaecological disorders, neurological disorders such as epilepsy, and liver diseases. Objectives: The present study aimed to determine the physicochemical parameters of Elaeocarpus ganitrus (fruits) and identify the phenolic compounds (gallic acid, ellagic acid, and chebulinic acid), and to estimate the microbial load and the antibacterial activity of Elaeocarpus ganitrus extract against selected pathogens. Methodology: The dried powdered fruit of Elaeocarpus ganitrus was evaluated for physicochemical parameters (loss on drying, alcohol-soluble extractive, water-soluble extractive, total ash, and acid-insoluble ash), and the pH was measured. The dried coarse powdered fruit was extracted successively with hexane, chloroform, ethyl acetate, and aqueous alcohol by the cold percolation method. Identification of the phenolic compounds (gallic acid, ellagic acid, chebulinic acid) was done by an HPTLC method and confirmed by co-TLC using different solvent systems. The successive extracts of Elaeocarpus ganitrus and the standards (gallic acid, ellagic acid, and chebulinic acid) were weighed and made up with alcohol. HPTLC (CAMAG) analysis was performed on silica gel 60F254 precoated aluminium plates, layer thickness 0.2 mm (E. Merck, Germany), using an ATS4 applicator, Visualizer, and Scanner at wavelengths of 254 nm and 366 nm, with derivatization using different reagents.
The microbial load (total bacterial count, total fungal count, Enterobacteria, Escherichia coli, Salmonella species, Staphylococcus aureus, and Pseudomonas aeruginosa) was estimated by the serial dilution method, and the antibacterial activity against selected pathogens was measured by the Kirby-Bauer method. Results: The physicochemical parameters of Elaeocarpus ganitrus were studied for standardization of the crude drug. Phenolic compounds were identified in all the successive extracts, and the Elaeocarpus ganitrus extract showed potent antibacterial activity against gram-positive and gram-negative bacteria.
Keywords: antimicrobial activity, Elaeocarpus ganitrus, HPTLC, phenolic compounds
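The serial-dilution plate count reduces to the standard formula CFU/mL = colonies × dilution factor ÷ volume plated; a minimal sketch with illustrative numbers (not the study's counts) follows.

```python
def cfu_per_ml(colony_count, dilution_factor, volume_plated_ml):
    """Standard plate-count estimate of microbial load:
    CFU/mL = colonies * dilution factor / volume plated (mL)."""
    return colony_count * dilution_factor / volume_plated_ml

# e.g. 42 colonies counted on the 10^4 dilution plate, 0.1 mL plated:
load = cfu_per_ml(42, 10**4, 0.1)   # 4.2e6 CFU/mL
```

Counts are normally taken from the dilution plate bearing 30-300 colonies, since more crowded plates undercount and sparser plates are statistically unreliable.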
Procedia PDF Downloads 342
101 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on an encoding scheme (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) over low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, the deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects differing in size, category, layout, number, and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs, trained on the ImageNet and Places datasets respectively, are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the different CNNs at multiple scales, we found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are merged into a single vector using a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different numbers of features are extracted at each. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the result on MIT Indoor67 from 74.03% up to 79.43% when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
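The pipeline described above (per-scale Fisher Vector aggregation, scale-wise L2 normalization, average pooling across scales, then a linear SVM) can be sketched as follows. This is an illustrative toy, not the paper's implementation: random features stand in for the dense mid-level CNN activations, and the GMM size, feature dimension, and two-class setup are assumptions chosen for brevity.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fisher_vector(feats, gmm):
    """Fisher Vector of a set of local features w.r.t. a diagonal-covariance
    GMM: accumulates the first- and second-order differences, then applies
    power and L2 normalization (the per-scale normalization step)."""
    q = gmm.predict_proba(feats)                 # (N, K) soft assignments
    N = feats.shape[0]
    mu, sigma = gmm.means_, np.sqrt(gmm.covariances_)   # each (K, d)
    parts = []
    for k in range(gmm.n_components):
        diff = (feats - mu[k]) / sigma[k]        # standardized residuals
        w = q[:, k:k + 1]
        g_mu = (w * diff).sum(axis=0) / (N * np.sqrt(gmm.weights_[k]))
        g_sig = (w * (diff ** 2 - 1)).sum(axis=0) / (N * np.sqrt(2 * gmm.weights_[k]))
        parts += [g_mu, g_sig]
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))       # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)     # L2 normalization

rng = np.random.default_rng(0)
d, K = 8, 4                                      # toy feature dim / GMM size
gmm = GaussianMixture(n_components=K, covariance_type="diag",
                      random_state=0).fit(rng.normal(size=(500, d)))

def image_descriptor(per_scale_feats):
    """Per-scale Fisher Vectors merged into one vector by average pooling."""
    return np.mean([fisher_vector(f, gmm) for f in per_scale_feats], axis=0)

# two synthetic "classes" of images, each with features at two scales
X = np.array([image_descriptor([rng.normal(loc=c, size=(100, d)) for _ in range(2)])
              for c in (0.0, 0.5) for _ in range(10)])
y = np.repeat([0, 1], 10)
clf = LinearSVC().fit(X, y)                      # linear SVM classifier
```

In this sketch each per-scale Fisher Vector is unit-normalized before pooling, so no single scale dominates regardless of how many local features it contributed.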
Procedia PDF Downloads 331
100 A Re-Evaluation of Green Architecture and Its Contributions to Environmental Sustainability
Authors: Po-Ching Wang
Abstract:
Considering the notable effects of natural resource consumption and the impacts on fragile ecosystems, reflection on contemporary sustainable design is critical. Nevertheless, the idea of ‘green’ has been misapplied and even abused, and, in fact, much damage to the environment has been done in its name. In the popular 1996 science fiction film Independence Day, an alien species, having exhausted the natural resources of one planet, moves on to another, a fairly obvious irony on human beings’ irresponsible use of the Earth’s natural resources in modern times. In fact, the human ambition to master nature and freely access the world’s resources has long been inherent in the manifestos evinced by productions of the environmental design professions. Ron Herron’s Walking City, an experimental architectural piece of 1964, is one example that comes to mind here. For this design concept, the architect imagined a gigantic nomadic urban aggregate that, by way of an insect-like robotic carrier, would move all over the world, on land and sea, to wherever its inhabitants wanted. Given the contemporary crisis regarding natural resources, ideas pertinent to structuring a sustainable environment have recently been attracting much interest in architecture, a field that has been accused of significantly contributing to ecosystem degradation. Great architecture, such as the Fallingwater building, has been regarded as nature-friendly, but its notion of ‘green’ might be inadequate in the face of the resource demands made by human populations today. This research suggests a more conservative and scrupulous attitude toward modifying nature for architectural settings. Designs that pursue spiritual or metaphysical interconnections through anthropocentric aesthetics are not sufficient to benefit ecosystem integrity; though high-tech energy-saving processes may contribute to fine-scale sustainability, they may ultimately cause catastrophe on the global scale.
Design with frugality is proposed in order to actively reduce environmental load. The aesthetic tastes and ecological sensibilities of the design professions and the public alike may have to be reshaped in order to make the goals of environmental sustainability viable.
Keywords: anthropocentric aesthetic, aquarium sustainability, biosphere 2, ecological aesthetic, ecological footprint, frugal design
Procedia PDF Downloads 209
99 Impact of Climate Variability on Household's Crop Income in Central Highlands and Arssi Grain Plough Areas of Ethiopia
Authors: Arega Shumetie Ademe, Belay Kassa, Degye Goshu, Majaliwa Mwanjalolo
Abstract:
Currently, the world economy is suffering from one critical problem: climate change. Earlier studies identified that the impact of the problem is region-specific: in some parts of the world (the temperate zone) agricultural performance is improving, while in others, such as the tropics, there is a drastic reduction in crop production and crop income. Climate variability is becoming the dominant cause of short-term fluctuation in the rain-fed agricultural production and income of developing countries. The purely rain-fed Ethiopian agriculture is the sector most vulnerable to the risks and impacts of climate variability. Thus, this study tried to identify the impact of climate variability on the crop income of smallholders in Ethiopia. The research used eight rounds of unbalanced panel data from 1994 to 2014, collected from six villages in the study area. After running all diagnostic tests, the research used the fixed-effects method of regression. Based on the regression results, deviations of rainfall and temperature from their respective long-term averages have negative and significant effects on crop income. Other extreme, devastating shocks such as flood, storm, and frost, which stem from climate variability, also have significant negative effects on households' crop income. Parameters that capture rainfall inconsistency, such as a late start, variation in availability during the growing season, and early cessation, are critical problems for the crop income of smallholder households according to the model results. That said, the impact of climate variability is not consistent across the different agro-ecologies of the country. Rainfall variability has a similar impact on crop income across agro-ecologies, but variation in temperature affects cold agro-ecology villages negatively and significantly, while it has a positive effect in warm villages. Parameters representing rainfall inconsistency have a similar impact in both agro-ecologies and in the aggregate model regression.
This implies that climate variability sourced from rainfall inconsistency is the main problem of Ethiopian agriculture, especially the crop production sub-sector of smallholder households.
Keywords: climate variability, crop income, household, rainfall, temperature
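The fixed-effects estimation named above can be illustrated with a minimal within-transformation (demeaning) sketch on synthetic panel data. The household count, number of rounds, and the -1.5 coefficient on rainfall deviation below are invented for illustration only and are not the study's estimates.

```python
import numpy as np

def fixed_effects_ols(y, X, groups):
    """Within estimator: subtract each household's mean from y and X to
    absorb household fixed effects, then run OLS on the demeaned data."""
    y, X = y.astype(float), X.astype(float)      # work on copies
    for g in np.unique(groups):
        m = groups == g
        y[m] -= y[m].mean()
        X[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# synthetic unbalanced-style panel: crop income falls as rainfall
# deviates from its long-term average (true slope set to -1.5)
rng = np.random.default_rng(0)
n_hh, n_t = 50, 8                                # 50 households, 8 rounds
groups = np.repeat(np.arange(n_hh), n_t)
alpha = rng.normal(0, 2, n_hh)[groups]           # household fixed effects
rain_dev = rng.normal(0, 1, n_hh * n_t)          # rainfall deviation
income = alpha - 1.5 * rain_dev + rng.normal(0, 0.1, n_hh * n_t)

beta = fixed_effects_ols(income, rain_dev[:, None], groups)
```

Because the household means are removed before the regression, the estimate of `beta` is driven only by within-household variation, which is what makes the fixed-effects design robust to time-invariant household heterogeneity.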
Procedia PDF Downloads 376
98 Fabrication of Aluminum Nitride Thick Layers by Modified Reactive Plasma Spraying
Authors: Cécile Dufloux, Klaus Böttcher, Heike Oppermann, Jürgen Wollweber
Abstract:
Hexagonal aluminum nitride (AlN) is a promising candidate for several wide-band-gap semiconductor applications, such as deep-UV light-emitting diodes (UVC LEDs) and fast power transistors (HEMTs). To date, bulk AlN single crystals are still commonly grown by physical vapor transport (PVT). Single-crystalline AlN wafers obtained from this process could offer suitable substrates for the defect-free growth of ultimately active AlGaN layers; however, these wafers still suffer from small sizes, limited delivery quantities, and high prices. Although there is already an increasing interest in the commercial availability of AlN wafers, comparatively cheap Si, SiC, or sapphire is still predominantly used as substrate material for the deposition of active AlGaN layers. Nevertheless, due to a lattice mismatch of up to 20%, the obtained material shows high defect densities and is, therefore, less suitable for the high-power devices described above. Therefore, the use of AlN with specially adapted properties for optical and sensor applications could be promising for mass-market products, which seem to fulfill fewer requirements. To respond to the demand for suitable AlN target material for the growth of AlGaN layers, we have designed an innovative technology based on reactive plasma spraying. The goal is to produce coarse-grained AlN boules with an N-terminated columnar structure and high purity. In this process, aluminum is injected into a microwave-stimulated nitrogen plasma; AlN, the product of the reaction between the aluminum and the plasma-activated N2, is deposited onto the target. We used an aluminum filament as the initial material to minimize oxygen contamination during the process. The material was guided through the nitrogen plasma so that the mass turnover was 10 g/h. To avoid any impurity contamination through erosion of the electrodes, an electrode-less discharge was used for the plasma ignition.
The pressure was maintained at 600-700 mbar, so the plasma reached a temperature high enough to vaporize the aluminum, which subsequently reacted with the surrounding plasma. The obtained products consist of thick polycrystalline AlN layers with a diameter of 2-3 cm. The crystallinity was determined by X-ray crystallography. The grain structure was systematically investigated by optical and scanning electron microscopy. Furthermore, we performed Raman spectroscopy to provide evidence of stress in the layers. This paper will discuss the effects of process parameters such as microwave power and deposition geometry (specimen holder, radiation shields, ...) on the topography, crystallinity, and stress distribution of AlN.
Keywords: aluminum nitride, polycrystal, reactive plasma spraying, semiconductor
Procedia PDF Downloads 281
97 Analysis of the Introduction of Carsharing in the Context of Developing Countries: A Case Study Based on On-Board Carsharing Survey in Kabul, Afghanistan
Authors: Mustafa Rezazada, Takuya Maruyama
Abstract:
Cars have been strongly integrated with human life since their introduction, and this interaction is most evident in the urban context. Therefore, shifting city residents from driving private vehicles to public transit has been a big challenge. Accordingly, carsharing, as an innovative, environmentally friendly transport alternative, has made a significant contribution to this transition so far. It has helped to reduce household car ownership, demand for on-street parking, and the number of kilometers traveled by car, and it affects the future of mobility by decreasing greenhouse gas (GHG) emissions and the number of new cars that would otherwise be purchased. However, the majority of carsharing research has been conducted in highly developed cities, and less attention has been paid to the cities of developing countries. This study was conducted in Kabul, the capital of Afghanistan, to investigate the current transport pattern and user behavior and to examine the possibility of introducing a carsharing system. The study established a new survey method called the Onboard Carsharing Survey (OCS), in which carpooling passengers aboard a vehicle are interviewed following the Onboard Transit Survey (OTS) guideline with a few refinements. The survey focuses on respondents' daily travel behavior and hypothetical stated choices of carsharing opportunities, followed by an aggregate analysis. The survey results indicate the following: almost two-thirds of the respondents (62%) have been carpooling every day for 5 years or more; more than half of the respondents are not satisfied with current modes; and, besides other attributes, traffic congestion, the environment, and insufficient public transport were ranked the most critical issues in daily transportation by survey participants. Moreover, 68.24% of the respondents chose carsharing over carpooling under different choice-game scenarios.
Overall, the findings in this research show that Kabul City is potentially fertile ground for the introduction of carsharing in the future. Taken together, insufficient public transit, dissatisfaction with current modes, and respondents' stated interest will affect the future of carsharing in Kabul City positively. The modal choice in this study is limited to carpooling and carsharing; more choice sets, including bus, cycling, and walking, will have to be added for further evaluation.
Keywords: carsharing, developing countries, Kabul Afghanistan, onboard carsharing survey, transportation, urban planning
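A standard way to analyze stated choices like those collected in the OCS is a binary logit on scenario attributes, alongside the aggregate share of respondents choosing carsharing. The sketch below uses synthetic data; the cost-difference and wait-time-difference variables and all coefficient values are hypothetical stand-ins, not the survey's actual attributes or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# synthetic stated-choice data: each row is one hypothetical scenario
# shown to a respondent; the attributes are (invented) differences of
# carsharing relative to the respondent's current carpooling trip
rng = np.random.default_rng(1)
n = 1000
cost_diff = rng.normal(0, 1, n)      # carsharing fare minus carpool fare
wait_diff = rng.normal(0, 1, n)      # waiting-time difference
utility = 0.8 - 1.2 * cost_diff - 0.6 * wait_diff   # assumed true utility
chose_carsharing = (rng.random(n) < 1 / (1 + np.exp(-utility))).astype(int)

X = np.column_stack([cost_diff, wait_diff])
model = LogisticRegression().fit(X, chose_carsharing)   # binary logit
share = chose_carsharing.mean()      # aggregate carsharing share (cf. 68.24%)
```

The fitted coefficients recover the sign of each attribute's effect (higher cost or wait time lowers the odds of choosing carsharing), which is the kind of evidence a follow-up mode-choice model for Kabul would aim to quantify.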
Procedia PDF Downloads 135
96 The Effects of Cultural Distance and Institutions on Foreign Direct Investment Choices: Evidence from Turkey and China
Authors: Nihal Kartaltepe Behram, Göksel Ataman, Dila Okçu
Abstract:
With the development of foreign direct investments, the social, cultural, political, and economic interactions between countries and institutions have become visible, and they have become determining factors for strategic structuring and market goals. In this context, the purpose of this study is to investigate the effects of cultural distance and institutions on foreign direct investment choices in terms of location and investment model. For international establishments, the concept of culture, as well as the concept of cultural distance, is taken specifically into consideration, especially in the selection of market-entry methods. In the empirical studies conducted, a direct relationship between cultural distance and foreign direct investments is established, and institutions and the relevant variable factors are examined at the level of defining the investment types. When detailed calculation strategies and empirical studies are taken into consideration, the most common investment models chosen in view of cultural distance are full-ownership enterprises and joint ventures. Also, when all of the factors affecting the investments are considered, the effect of institutions, such as government intervention, intellectual property rights, corruption, and contract enforcement, is seen to be very important. Furthermore, agglomeration has a more intense effect on investment than other factors. China was selected as the target country due to its influence in the world economy and its contributions to the developing countries with which it has commercial relationships. Qualitative research methods are used in this study to measure the effects of the determinative variables in the study's hypotheses on foreign direct investors and to evaluate the findings.
In this study, in-depth interviews are used as the data collection method, and the data are analyzed through descriptive analysis. All interviews and analyses identified that foreign direct investments are highly reactive to institutions and cultural distance. On the other hand, agglomeration is the strongest determining factor for foreign direct investors in the Chinese market; the most important finding is that the factors comprising the sectoral aggregate are not as strong as agglomeration. We expect this study to be a beneficial guideline for the strategic plans of developed and developing countries and of local and national institutions.
Keywords: China, cultural distance, Foreign Direct Investments, institutions
Procedia PDF Downloads 418
95 An Investigation on MgAl₂O₄ Based Mould System in Investment Casting Titanium Alloy
Authors: Chen Yuan, Nick Green, Stuart Blackburn
Abstract:
The investment casting process offers great freedom of design combined with the economic advantage of near-net-shape manufacturing. It is widely used for the production of high-value precision cast parts, particularly in the aerospace sector. Various combinations of materials have been used to produce the ceramic moulds, but most investment foundries use a silica-based binder system in conjunction with fused silica, zircon, and alumino-silicate refractories as both filler and coarse stucco materials. However, in the context of advancing alloy technologies, silica-based systems are struggling to keep pace, especially when net-shape casting titanium alloys. Studies have shown that the casting of titanium-based alloys presents considerable problems, including extensive interactions between the metal and the refractory; the majority of the metal-mould interaction is due to the reduction of silica, present as the binder and filler phases, by titanium in the molten state. Cleaner, more refractory systems are being devised to accommodate these changes. Although yttria has excellent chemical inertness to titanium alloys, it is not very practical in a production environment, combining high material cost, a short slurry life, and poor sintering properties. There needs to be a cost-effective solution to these issues. With limited options for using pure oxides, in this work a silica-free magnesia spinel, MgAl₂O₄, was used as the primary coat filler and alumina as the binder material to produce the face coat of the investment casting mould. A comparison system was also studied, with a fraction of the rare-earth oxide Y₂O₃ added to the filler to increase the inertness. The stability of the MgAl₂O₄/Al₂O₃ and MgAl₂O₄/Y₂O₃/Al₂O₃ slurries was assessed by tests including pH, viscosity, zeta-potential, and plate-weight measurements, and mould properties such as friability were also measured.
The interaction between the face coat and the titanium alloy was studied by both a flash re-melting technique and a centrifugal investment casting method. The interaction products between metal and mould were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), and energy-dispersive X-ray spectroscopy (EDS). The depth of the oxygen-hardened layer was evaluated by microhardness measurement. Results reveal that introducing a fraction of Y₂O₃ into the magnesia spinel can significantly increase the slurry life and reduce the thickness of the hardened layer during centrifugal casting.
Keywords: titanium alloy, mould, MgAl₂O₄, Y₂O₃, interaction, investment casting
Procedia PDF Downloads 113