Search results for: observation scale
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7462

3682 Impact of Marketing Orientation on Environment and Firm’s Performance

Authors: Sabita Mahapatra

Abstract:

‘Going green’ has been an emerging issue worldwide, driving companies to continuously enhance their green capabilities and implement innovative green practices to protect the environment and improve business performance. Green has become a contemporary business and environmental issue. The resource-advantage theory is adopted in the present study to observe the impact of marketing orientation and green innovation practices on environmental and firm performance. Small and medium firms, compared to large firms, take a different approach to market orientation as a strategic tool. The present study proposes a conceptual framework for the impact of market orientation on environmental and firm performance through green innovation practices in the context of small and medium scale industries (SMEs). The propositions developed in this paper provide scope for future research to validate the conceptual framework in an emerging economy such as India.

Keywords: market orientation, green innovation practices, environment performance, corporate performance, emerging market

Procedia PDF Downloads 307
3681 Artificial Neural Network Approach for Vessel Detection Using Visible Infrared Imaging Radiometer Suite Day/Night Band

Authors: Takashi Yamaguchi, Ichio Asanuma, Jong G. Park, Kenneth J. Mackin, John Mittleman

Abstract:

In this paper, vessel detection using an artificial neural network is proposed in order to automatically construct a vessel detection model from satellite imagery of the day/night band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP). The goal of our research is to establish a vessel detection method using DNB satellite imagery in order to monitor changes in vessel activity over a wide region. Temporal vessel monitoring is very important for detecting events and understanding circumstances within the maritime environment. For vessel locating and detection, the Automatic Identification System (AIS) and remote sensing using synthetic aperture radar (SAR) imagery have been researched. However, each data source lacks some information due to uncertain operation or limitations of continuous observation. Therefore, the fusion of effective data and methods is important for monitoring the maritime environment in the future. DNB is an effective data source for detecting small vessels, such as fishing ships, that are difficult to observe in AIS. DNB is the satellite sensor data of VIIRS on Suomi-NPP. In contrast to SAR images, DNB images are of moderate resolution and are affected by cloud, but they can observe the same regions every day. The DNB sensor can observe lights produced by various artificial sources, such as vehicles and buildings, at night and can detect small vessels from their fishing lights on open water. However, modeling vessel detection using DNB is very difficult, since complex atmospheric and lunar conditions must be considered due to the strong influence of lunar reflection from clouds on DNB. Therefore, an artificial neural network was applied to learn the vessel detection model. As an additional feature for vessel detection, the brightness temperature at 3.7 μm (BT3.7) was used, because BT3.7 can serve as a parameter of atmospheric conditions.
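
As a rough illustration of this kind of approach (not the authors' implementation), the sketch below trains a small feed-forward network on two per-pixel features, DNB radiance and BT3.7 brightness temperature, to separate vessel from non-vessel pixels; all feature values, labels and thresholds are synthetic placeholders.

```python
# Minimal sketch (not the authors' implementation): a small feed-forward
# network labelling pixels as vessel / non-vessel from two features,
# DNB radiance and BT3.7 brightness temperature. Values and labels are
# synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
dnb = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # DNB radiance (arbitrary units)
bt37 = rng.normal(loc=290.0, scale=5.0, size=n)    # BT3.7 brightness temperature [K]
# Synthetic rule: bright DNB pixels under warm (less cloudy) conditions count as vessels
y = ((dnb > 2.5) & (bt37 > 288.0)).astype(int)

X = np.column_stack([dnb, bt37])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```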

Keywords: artificial neural network, day/night band, remote sensing, Suomi National Polar-orbiting Partnership, vessel detection, Visible Infrared Imaging Radiometer Suite

Procedia PDF Downloads 226
3680 Comparing Abused and Normal Male Students in Tehran Guidance Schools: Emphasizing the Co-Dependency of Their Mothers

Authors: Mohamad Saleh Sangin Ostadi, Esmail Safari, Somayeh Akbari, Kaveh Qaderi Bagajan

Abstract:

The aim of this study is to compare abused and normal male students in Tehran guidance schools, with emphasis on the co-dependency of their mothers. The study is based on a survey and comparison (ex post facto) design with multi-stage cluster sampling. Accordingly, sampling was carried out across guidance schools of the Tehran education and training districts, covering 12 schools with first, second and third grades. The schools represent three levels of economic and social conditions: high, medium and low. Three classes from every school and 20 students from each class were then randomly selected. Using the CTQ, abused and normal students were distinguished: 670 children were identified as normal and 50 as abused. Then, 50 children were randomly selected from the normal group and compared with the abused group. Using the Spann-Fischer Co-dependency Scale, we compared the mothers of abused and normal students. The results showed that mothers of the abused children have a higher mean co-dependency score than mothers of the normal children.
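
The group comparison described above can be illustrated with a minimal sketch of an independent-samples t-test on co-dependency scores; the scores below are made-up placeholders, not the study's data.

```python
# Minimal sketch (illustrative only): comparing mean co-dependency scores of
# two groups of mothers with an independent-samples t-test.
# The score arrays are hypothetical placeholders.
import numpy as np
from scipy import stats

codep_abused_group = np.array([58, 63, 55, 61, 67, 59, 64, 60, 62, 66], dtype=float)
codep_normal_group = np.array([48, 52, 50, 47, 55, 51, 49, 53, 46, 54], dtype=float)

t_stat, p_value = stats.ttest_ind(codep_abused_group, codep_normal_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```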

Keywords: co-dependency, child abuse, abused children, parental psychological health

Procedia PDF Downloads 328
3679 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data

Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca

Abstract:

In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are built for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species ‘quality profile’, resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
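
A minimal sketch of the evaluation step described above, comparing AUC, sensitivity and specificity of a model fitted with and without data quality filtering; the predictions and labels are synthetic placeholders, and the study's actual Maxent workflow is not reproduced here.

```python
# Minimal sketch (not the study's pipeline): comparing AUC, sensitivity and
# specificity of a distribution model with and without data quality filtering.
# Labels and predicted probabilities are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def sens_spec(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)
# Hypothetical model outputs; the "filtered" model is slightly better separated
p_unfiltered = np.clip(y_true * 0.55 + rng.normal(0.25, 0.25, 500), 0, 1)
p_filtered = np.clip(y_true * 0.65 + rng.normal(0.20, 0.22, 500), 0, 1)

for name, p in [("unfiltered", p_unfiltered), ("filtered", p_filtered)]:
    auc = roc_auc_score(y_true, p)
    sens, spec = sens_spec(y_true, p)
    print(f"{name}: AUC={auc:.3f}  sensitivity={sens:.3f}  specificity={spec:.3f}")
```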

Keywords: citizen science, data quality filtering, species distribution models, trait profiles

Procedia PDF Downloads 184
3678 Internal and External Overpressure Calculation for Vented Gas Explosion by Using a Combined Computational Fluid Dynamics Approach

Authors: Jingde Li, Hong Hao

Abstract:

Recent oil and gas accidents have reminded us of the severe consequences of gas explosions in terms of structural damage and financial loss. In order to protect structures and personnel, engineers and researchers have been working on numerous explosion mitigation methods. Among these, venting is the most economical approach to mitigating gas explosion overpressure. In this paper, venting is used as the overpressure alleviation method. A theoretical method and a numerical technique are presented to predict the internal and external pressure from a vented gas explosion in a large enclosure. Under idealized conditions, a number of experiments are used to calibrate the accuracy of the theoretically calculated data, and good agreement between the theoretical results and experimental data is seen. However, for realistic scenarios, the theoretical method over-estimates internal pressures and is incapable of predicting external pressures. Therefore, a CFD simulation procedure is proposed in this study to estimate both the internal and external overpressure from a large-scale vented explosion. Satisfactory agreement between the CFD simulation results and experimental data is achieved.

Keywords: vented gas explosion, internal pressure, external pressure, CFD simulation, FLACS, ANSYS Fluent

Procedia PDF Downloads 148
3677 2D CFD-PBM Coupled Model of Particle Growth in an Industrial Gas Phase Fluidized Bed Polymerization Reactor

Authors: H. Kazemi Esfeh, V. Akbari, M. Ehdaei, T. N. G. Borhani, A. Shamiri, M. Najafi

Abstract:

In an industrial fluidized bed polymerization reactor, the particle size distribution (PSD) plays a significant role in evaluating reactor efficiency. Computational fluid dynamics models coupled with the population balance equation (CFD-PBM) have been extensively employed to investigate the flow behavior in poly-disperse multiphase fluidized bed reactors (FBRs) using the ANSYS Fluent code. In this study, an existing CFD-PBM/DQMOM coupled modeling framework has been used to highlight its potential for analyzing an industrial-scale gas phase polymerization reactor. The predicted results reveal acceptable agreement with the observed industrial data in terms of pressure drop and bed height. The simulated results also indicate that higher particle growth rates are achieved for bigger particles. Hence, the 2D CFD-PBM/DQMOM coupled model can be used as a reliable tool for analyzing and improving the design and operation of gas phase polymerization FBRs.

Keywords: computational fluid dynamics, population balance equation, fluidized bed polymerization reactor, direct quadrature method of moments

Procedia PDF Downloads 354
3676 Social Network Analysis in Water Governance

Authors: Fariba Ebrahimi, Mehdi Ghorbani, Mohsen Mohsenisaravi

Abstract:

Ecosystem management is complex because of natural and human issues. To cope with this complexity, water governance is recommended, since it involves all stakeholders related to environmental systems, including local people and governmental and non-governmental organizations. Water governance emphasizes water co-management through consideration of all stakeholders in the form of social and organizational networks. In this research, social network analysis was applied to illustrate indicators of water governance in the Dorood watershed, in the Shemiranat region of Iran. The results revealed that social cohesion among pastoralists in Dorood is medium in terms of trust links, while link sustainability is weak to medium. According to the results, some pastoralists have high social power and are therefore key actors in the utilization network, based on the centrality index and trust links. The results also demonstrated that the Agricultural Development Office and the (Shemshak-Darbandsar Islamic) Council are key actors in rangeland co-management, based on the centrality index in the rangeland institutional network at the regional scale in the Shemiranat district.
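
A minimal sketch of the kind of centrality calculation mentioned above, assuming a hypothetical directed trust network; the actor names and links are illustrative, not the study's data.

```python
# Minimal sketch (illustrative only): a directed trust network among
# hypothetical actors, ranked by degree centrality, one common index for
# identifying key actors in co-management networks.
import networkx as nx

trust_links = [
    ("Pastoralist_A", "Pastoralist_B"),
    ("Pastoralist_B", "Pastoralist_C"),
    ("Pastoralist_C", "Pastoralist_A"),
    ("Pastoralist_D", "Agricultural_Development_Office"),
    ("Agricultural_Development_Office", "Council"),
    ("Pastoralist_A", "Council"),
]

G = nx.DiGraph()
G.add_edges_from(trust_links)

centrality = nx.degree_centrality(G)   # normalized total degree per actor
for actor, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{actor}: {score:.2f}")
```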

Keywords: social network analysis, water governance, organizational network, water co-management

Procedia PDF Downloads 336
3675 Development of Column-Filters of Sulfur Limonene Polysulfide to Mercury Removal from Contaminated Effluents

Authors: Galo D. Soria, Jenny S. Casame, Eddy F. Pazmino

Abstract:

In Ecuador, mining operations have significantly impacted water sources. Artisanal mining relies extensively on mercury amalgamation, and mercury is a neurotoxic substance even at low concentrations. The objective of this investigation is to exploit the Hg-removal capacity of sulfur-limonene polysulfide (SLP), a low-cost polymer, by preparing granular media (sand) coated with SLP for use in laboratory-scale column-filtration systems. Preliminary results achieved 85% removal of Hg⁺⁺ from synthetic effluents using 20-cm length and 5-cm diameter columns at an average pore water velocity of 119 m/day. During elution of the column, the SLP-coated sand indicated that Hg⁺⁺ is permanently fixed to the collector surface; in contrast, uncoated sand showed reversible retention of Hg⁺⁺ in the solid phase. Injection of 50 pore volumes decreased Hg⁺⁺ removal to 46%. Ongoing work has focused on optimizing the synthesis of SLP and the polymer content in the porous media coating process to improve Hg⁺⁺ removal and extend the lifetime of the column-filter.

Keywords: column-filter, mercury, mining, polysulfide, water treatment

Procedia PDF Downloads 131
3674 Modified Surface Morphology, Structure and Enhanced Weathering Performance of Polyester-Urethane/Organoclay Nanocomposite Coatings

Authors: Gaurav Verma

Abstract:

Organoclay-loaded (0-5 weight %) polyester-urethane (PU) coatings were prepared with a branched hydroxyl-bearing polyester and an aliphatic poly-isocyanate. TEM micrographs show partial exfoliation and intercalation of clay platelets in the organoclay-polyester dispersions. AFM surface images reveal that the PU hard domains tend to regularise and also self-organise into spherical shapes of sizes 50 nm (0 wt %), 60 nm (2 wt %) and 190 nm (4 wt %), respectively. IR analysis shows that the PU chains have an increasing tendency to interact with exfoliated clay platelets through hydrogen bonding. This interaction strengthens inter-chain linkages in the PU matrix and hence improves anti-ageing properties. Accelerated weathering over 1000 hours was evaluated by ATR spectroscopy, while yellowing and overall discoloration were quantified by the Δb* and ΔE* values of the CIELab colour scale. Post-weathering surface properties also showed improvement: the loss of thickness and reduction in gloss in neat PU were 25% and 42%, while they were just 3.5% and 14%, respectively, for the 2 wt% nanocomposite coating. This work highlights the importance of modifying the surface and bulk properties of PU coatings at the nanoscale, which led to improved performance under accelerated weathering conditions.
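
The discoloration measures mentioned above (Δb* and ΔE* on the CIELab scale) can be illustrated with a short sketch using the standard CIE76 colour-difference relation; the Lab values below are hypothetical.

```python
# Minimal sketch: the CIE76 colour difference used to quantify overall
# discoloration after weathering. Lab values below are hypothetical.
import math

def delta_e_cie76(lab_before, lab_after):
    """Delta E*ab = sqrt(dL*^2 + da*^2 + db*^2)."""
    dL = lab_after[0] - lab_before[0]
    da = lab_after[1] - lab_before[1]
    db = lab_after[2] - lab_before[2]
    return math.sqrt(dL**2 + da**2 + db**2)

lab_unweathered = (85.0, 1.2, 8.5)    # (L*, a*, b*)
lab_weathered   = (82.4, 1.8, 14.1)   # yellowing shows up mainly as a rise in b*

db_star = lab_weathered[2] - lab_unweathered[2]
print(f"delta b* = {db_star:.2f}, delta E* = {delta_e_cie76(lab_unweathered, lab_weathered):.2f}")
```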

Keywords: coatings, AFM, ageing, spectroscopy

Procedia PDF Downloads 441
3673 Nanoemulsion Formulation of Ethanolic Extracts of Propolis and Its Antioxidant Activity

Authors: Rachmat Mauludin, Dita Sasri Primaviri, Irda Fidrianny

Abstract:

Propolis contains several antioxidant compounds which can be used in topical applications to protect the skin against free radicals and prevent skin cancer and skin aging. A previous study showed that a 70% ethanolic extract of propolis (EEP) provided the greatest antioxidant activity. Since EEP has very low solubility in water, the extract was prepared as a nanoemulsion (NE). A nanoemulsion was chosen as the cosmetic dosage form because of its properties, namely a lower risk of skin irritation, increased penetration, a longer residence time in the skin, and improved stability. Propolis was extracted by reflux and concentrated using a rotary evaporator. EEP was characterized by several tests, such as phytochemical screening, density, and antioxidant activity using the DPPH method. Optimization of the total surfactant, co-surfactant, oil, and amount of EEP that could be included in the NE was required to obtain the best NE formulation. The evaluations included organoleptic observation, globule size, polydispersity index, morphology using TEM, viscosity, pH, centrifugation, stability, a freeze-and-thaw test, radical scavenging activity using the DPPH method, and a primary irritation test. The extract yield from raw propolis was 11.12%, and phytochemical screening indicated steroids/triterpenoids, flavonoids, and saponins. EEP had a DPPH scavenging activity of 61.14% and an IC50 of 0.41629 ppm. The best NE formulation consisted of 26.25% Kolliphor RH40, 8.75% glycerine, 5% rice bran oil, and 3% EEP. The NE was transparent, with a globule size of 21.9 nm, a polydispersity index of 0.338, and a pH of 5.67. Based on TEM morphology, the NE was almost spherical with a particle size below 50 nm. The NE propolis proved to be physically stable after a 63-day stability test at 25 °C, centrifugation for 30 min at 13,000 rpm, and 6 freeze-and-thaw cycles without separation. The NE propolis reduced 58% of the DPPH free radical, similar to the antioxidant activity of the original extract, and its antioxidant activity remained relatively stable after storage for 6 weeks. The NE propolis was shown to be safe by the primary irritation test, with a primary irritation index (OECD) of 0. The best formulation for the propolis NE contained 26.25% Kolliphor RH40, 8.75% glycerine, 5% rice bran oil, and 3% EEP, with a globule size of 21.9 nm and a polydispersity index of 0.338; it was stable and had antioxidant activity similar to EEP.
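
The radical scavenging figures above follow the standard DPPH relation, sketched below with hypothetical absorbance readings chosen only to reproduce a scavenging level similar to the reported 58%.

```python
# Minimal sketch: the standard DPPH radical scavenging calculation,
# % scavenging = (A_control - A_sample) / A_control * 100.
# Absorbance readings below are hypothetical placeholders.
def dpph_scavenging(a_control: float, a_sample: float) -> float:
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.820          # absorbance of DPPH solution without extract
a_sample_ne = 0.344        # absorbance with the propolis nanoemulsion
print(f"scavenging activity: {dpph_scavenging(a_control, a_sample_ne):.1f} %")
```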

Keywords: propolis, antioxidant, nanoemulsion, irritation test

Procedia PDF Downloads 291
3672 Benders Decomposition Approach to Solve the Hybrid Flow Shop Scheduling Problem

Authors: Ebrahim Asadi-Gangraj

Abstract:

The hybrid flow shop scheduling problem (HFS) involves sequencing in a flow shop where, at any stage, there exist one or more related or unrelated parallel machines. This production system is a common manufacturing environment in many real industries, such as steel manufacturing, ceramic tile manufacturing, and car assembly. In this research, a mixed integer linear programming (MILP) model is presented for the hybrid flow shop scheduling problem, in which the objective is to minimize the maximum completion time (makespan). For this purpose, a Benders decomposition (BD) method is developed to solve the problem. The proposed approach is tested on small- to moderate-scale test problems. The experimental results show that the Benders decomposition approach can solve the hybrid flow shop scheduling problem in a reasonable time, especially for small and moderate-size test problems.
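
A minimal sketch of the skeleton of such a makespan MILP (not the paper's full model): C_{j,s} is the completion time of job j at stage s, p_{j,s} its processing time, x_{j,m,s} a binary assignment of job j to machine m at stage s, and the disjunctive no-overlap constraints between jobs sharing a machine are omitted for brevity.

```latex
% Sketch of a makespan HFS MILP skeleton (machine no-overlap constraints omitted)
\begin{align}
\min\ & C_{\max} \\
\text{s.t.}\ & \sum_{m \in M_s} x_{j,m,s} = 1 && \forall j,\, s
  \quad \text{(one machine per job and stage)} \\
& C_{j,s} \ge C_{j,s-1} + p_{j,s} && \forall j,\, s
  \quad \text{(stage precedence)} \\
& C_{\max} \ge C_{j,S} && \forall j
  \quad \text{(makespan bounds last-stage completions)} \\
& x_{j,m,s} \in \{0,1\}, \quad C_{j,s} \ge 0
\end{align}
```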

Keywords: hybrid flow shop, mixed integer linear programming, Benders decomposition, makespan

Procedia PDF Downloads 171
3671 Measurement of Operational and Environmental Performance of the Coal-Fired Power Plants in India by Using Data Envelopment Analysis

Authors: Vijay Kumar Bajpai, Sudhir Kumar Singh

Abstract:

In this study, performance analyses of twenty-five coal-fired power plants (CFPPs) used for electricity generation are carried out through various data envelopment analysis (DEA) models. Three efficiency indices are defined and pursued. In the calculation of operational performance, energy and non-energy variables are used as inputs, and net electricity produced is used as the desired output. CO2 emitted to the environment is used as the undesired output in the computation of pure environmental performance, while in Model-3 CO2 emissions are treated as a detrimental input in the calculation of operational and environmental performance. Empirical results show that most of the plants are operating in the increasing returns-to-scale region, and the Mettur plant is efficient with regard to energy use and the environment. The results also indicate that the undesirable output effect is insignificant in the research sample. The present study will provide clues to plant operators towards raising the operational and environmental performance of CFPPs.
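
As an illustration of the DEA machinery behind such results, the sketch below solves an input-oriented CCR efficiency score for each plant as a linear program; the plant inputs and outputs are synthetic placeholders, not the study's data.

```python
# Minimal sketch (illustrative only): input-oriented CCR DEA efficiency scores
# solved as linear programs with scipy. Inputs/outputs are synthetic
# placeholders (e.g. coal use and staff -> net electricity).
import numpy as np
from scipy.optimize import linprog

X = np.array([[150.0, 120.0, 180.0, 160.0],    # input 1 per plant (e.g. coal, kt)
              [300.0, 280.0, 350.0, 310.0]])   # input 2 per plant (e.g. staff)
Y = np.array([[550.0, 500.0, 560.0, 600.0]])   # output per plant (net GWh)

def ccr_efficiency(k: int) -> float:
    """theta for plant k: min theta s.t. X @ lam <= theta * x_k, Y @ lam >= y_k, lam >= 0."""
    n = X.shape[1]
    c = np.concatenate(([1.0], np.zeros(n)))            # variables: [theta, lam_1..lam_n]
    A_in = np.hstack((-X[:, [k]], X))                    # X @ lam - theta * x_k <= 0
    b_in = np.zeros(X.shape[0])
    A_out = np.hstack((np.zeros((Y.shape[0], 1)), -Y))   # -Y @ lam <= -y_k
    b_out = -Y[:, k]
    res = linprog(c, A_ub=np.vstack((A_in, A_out)),
                  b_ub=np.concatenate((b_in, b_out)),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

for k in range(X.shape[1]):
    print(f"plant {k}: efficiency = {ccr_efficiency(k):.3f}")
```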

Keywords: coal fired power plants, environmental performance, data envelopment analysis, operational performance

Procedia PDF Downloads 440
3670 Oil Recovery Study by Low Temperature Carbon Dioxide Injection in High-Pressure High-Temperature Micromodels

Authors: Zakaria Hamdi, Mariyamni Awang

Abstract:

For the past decades, CO2 flooding has been used as a successful method for enhanced oil recovery (EOR). However, a high mobility ratio and the fingering effect are considered important drawbacks of this process. Low temperature injection of CO2 into high temperature reservoirs may improve oil recovery, but simulating multiphase flow in a non-isothermal medium is difficult, and commercial simulators are very unstable under these conditions. Furthermore, to the best of the authors' knowledge, no experimental work has been done to verify the results of the simulations and to understand the pore-scale process. In this paper, we present results of investigations on the injection of low temperature CO2 into a high-pressure high-temperature micromodel, with injection temperatures ranging from 34 to 75 °F. The effects of temperature and the saturation changes of the different fluids are measured in each case. The results support the proposed method: the injection of CO2 at low temperatures increased oil recovery in high temperature reservoirs significantly. Also, the CO2-rich phases available in the high temperature system can improve oil recovery through a better sweep of the oil, which is initially caused by the penetration of LCO2 into the system. Furthermore, no unfavorable effect was detected using this method. Low temperature CO2 is proposed for use as early as secondary recovery.

Keywords: enhanced oil recovery, CO₂ flooding, micromodel studies, miscible flooding

Procedia PDF Downloads 339
3669 Reflection on the Resilience Construction of Megacities Under the Background of Territorial Space Governance

Authors: Xin Jie Li

Abstract:

Due to population agglomeration, huge scale, and complex activities, megacities have become risk centers. To resist the risks brought by development uncertainty, the construction of resilient cities has become a common strategic choice for megacities. As a key link in promoting the modernization of the national governance system and governance capacity, optimizing the layout of national land space that focuses on ecology, production, and life and improving the rationality of spatial resource allocation are conducive to fundamentally promoting the resilience construction of megacities. Therefore, based on the perspective of territorial space governance, this article explores the potential risks faced by the territorial space of megacities and proposes possible paths for the resilience construction of megacities from four aspects: promoting the construction of a resilience system throughout the entire life cycle, constructing a disaster prevention and control system with ecological resilience, creating an industrial spatial pattern with production resilience, and enhancing community resilience to anchor the front line of risk response in megacities.

Keywords: mega cities, potential risks, resilient city construction, territorial and spatial governance

Procedia PDF Downloads 34
3668 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college level art history course (n=134) and counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between group differences from post-test scores on test Form Q and Form R by full-factorial Two-Way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%. Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but resultant skewness and kurtosis worsened compared to raw score parameters. Form had a 3.18% direct effect. Linear equating produced the lowest Form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between group effect size for the Control Group versus Experimental Group participants who completed the game was 14.39% with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective  assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
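
The linear equating step described above amounts to a mean-slope rescaling of new-form scores onto the reference-form scale; a minimal sketch with synthetic score arrays follows.

```python
# Minimal sketch (illustrative only): linear equating of a new-form score x to
# the reference-form scale, x' = (sd_Q / sd_R) * (x - mean_R) + mean_Q.
# The score arrays below are synthetic placeholders.
import numpy as np

form_r_scores = np.array([12, 15, 18, 20, 22, 25, 27, 30], dtype=float)  # new form
form_q_scores = np.array([14, 16, 19, 23, 24, 26, 29, 33], dtype=float)  # reference form

def linear_equate(x, new_scores, ref_scores):
    slope = ref_scores.std(ddof=1) / new_scores.std(ddof=1)
    return slope * (x - new_scores.mean()) + ref_scores.mean()

raw = 21.0
print(f"Form R raw score {raw} maps to Form Q scale score "
      f"{linear_equate(raw, form_r_scores, form_q_scores):.2f}")
```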

Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating

Procedia PDF Downloads 184
3667 Photocatalytic Packed‐Bed Flow Reactor for Continuous Room‐Temperature Hydrogen Release from Liquid Organic Carriers

Authors: Malek Y. S. Ibrahim, Jeffrey A. Bennett, Milad Abolhasani

Abstract:

Despite the potential of hydrogen (H2) storage in liquid organic carriers to achieve carbon neutrality, the energy required for H2 release and the cost of catalyst recycling have hindered its large-scale adoption. In response, a photo flow reactor packed with a rhodium (Rh)/titania (TiO2) photocatalyst is reported for the continuous and selective acceptorless dehydrogenation of 1,2,3,4-tetrahydroquinoline to H2 gas and quinoline under visible light irradiation at room temperature. The tradeoff between the reactor pressure drop and its photocatalytic surface area was resolved by selective in-situ photodeposition of Rh in the photo flow reactor, post-packing, on the outer surface of the TiO2 microparticles available to the photon flux, thereby reducing the optimal Rh loading by 10 times compared to a batch reactor while facilitating catalyst reuse and regeneration. An example of using quinoline as a hydrogen acceptor to lower the energy of the hydrogen production step was demonstrated via the water-gas shift reaction.

Keywords: hydrogen storage, flow chemistry, photocatalysis, solar hydrogen

Procedia PDF Downloads 81
3666 Degradation of EE2 by Different Consortium of Enriched Nitrifying Activated Sludge

Authors: Pantip Kayee

Abstract:

17α-ethinylestradiol (EE2) is a recalcitrant micropollutant found in small amounts in municipal wastewater, but even these small amounts adversely affect the reproductive function of aquatic organisms. Past evidence suggested that full-scale WWTPs equipped with a nitrification process enhanced the removal of EE2 from municipal wastewater. EE2 has been shown to be transformed by ammonia oxidizing bacteria (AOB) via co-metabolism. This research aims to clarify the EE2 degradation pattern of different consortia of ammonia oxidizing microorganisms (AOM), including AOA (ammonia oxidizing archaea), and to investigate the contribution of the existing ammonia monooxygenase (AMO) versus newly synthesized AOM. The results showed that AOA or AOB of the N. oligotropha cluster in enriched nitrifying activated sludge (NAS) from 2 mM and 5 mM, commonly found in municipal WWTPs, could degrade EE2 in wastewater via co-metabolism. Moreover, the investigation of the contribution of the existing AMO versus newly synthesized AOM demonstrated that newly synthesized AMO enzyme may perform the ammonia oxidation rather than the existing AMO enzyme, or that the existing AMO enzyme may contribute only a small amount of ammonia oxidation.

Keywords: 17α-ethinylestradiol, nitrification, ammonia oxidizing bacteria, ammonia oxidizing archaea

Procedia PDF Downloads 275
3665 A Dynamic Approach for Evaluating the Climate Change Risks on Building Performance

Authors: X. Lu, T. Lu, S. Javadi

Abstract:

A simple dynamic approach is presented for analyzing the thermal and moisture dynamics of buildings, which is of particular relevance to understanding climate change impacts on buildings, including the assessment of risks and the application of resilience strategies. To demonstrate the proposed modeling methodology, verify the model, and show that wooden materials provide a mechanism that can facilitate the reduction of moisture risks and be more resilient to global warming, a wooden church equipped with high precision measurement systems was taken as a test building for full-scale time-series measurements. Sensitivity analyses indicate a high degree of accuracy in the model prediction of the indoor environment. The model is then applied to a future projection of the indoor climate, aiming to identify significant environmental factors, the changing temperature and humidity, and effective responses to climate change impacts. The paper suggests that wooden building materials offer an effective and resilient response to anticipated future climate changes.
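
As a rough illustration of a dynamic building model of this kind (not the authors' model), the sketch below steps a single-node lumped RC indoor-temperature balance with explicit Euler; all parameters and the outdoor temperature series are hypothetical.

```python
# Minimal sketch (not the authors' model): a single-node lumped RC model of
# indoor temperature, C dT/dt = (T_out - T_in)/R + Q, stepped with explicit
# Euler. R, C, Q and the outdoor series are hypothetical placeholders.
import numpy as np

R = 0.005          # thermal resistance to outdoors [K/W]
C = 2.0e7          # effective indoor heat capacity [J/K]
Q = 800.0          # internal + solar gains [W]
dt = 600.0         # time step [s]
hours = 48

t = np.arange(0, hours * 3600, dt)
T_out = 5.0 + 4.0 * np.sin(2 * np.pi * t / 86400.0)   # simple daily outdoor cycle [degC]

T_in = np.empty_like(t, dtype=float)
T_in[0] = 18.0
for k in range(1, len(t)):
    dTdt = ((T_out[k - 1] - T_in[k - 1]) / R + Q) / C
    T_in[k] = T_in[k - 1] + dt * dTdt

print(f"indoor temperature after {hours} h: {T_in[-1]:.2f} degC")
```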

Keywords: dynamic model, forecast, climate change impact, wooden structure, buildings

Procedia PDF Downloads 134
3664 Data Mining Meets Educational Analysis: Opportunities and Challenges for Research

Authors: Carla Silva

Abstract:

Recent developments in information and communication technology enable us to acquire, collect, and analyse data in various fields of socioeconomic-technological systems. Along with the increase in economic globalization and the evolution of information technology, data mining has become an important approach for economic data analysis. As a result, there has been a critical need for automated approaches to the effective and efficient use of massive amounts of educational data, in order to support institutions in strategic planning and investment decision-making. In this article, we address data from several different perspectives and define their application to the sciences. Many believe that 'big data' will transform business, government, and other aspects of the economy. We discuss how new data may impact educational policy and educational research. Large-scale administrative data sets and proprietary private sector data can greatly improve the way we measure, track, and describe educational activity and educational impact. We also consider whether the big data predictive modeling tools that have emerged in statistics and computer science may prove useful in educational research and, furthermore, in economics. Finally, we highlight a number of challenges and opportunities for future research.

Keywords: data mining, research analysis, investment decision-making, educational research

Procedia PDF Downloads 343
3663 Self-denigration in Doctoral Defense Sessions: Scale Development and Validation

Authors: Alireza Jalilifar, Nadia Mayahi

Abstract:

The dissertation defense as a complicated conflict-prone context entails the adoption of elegant interactional strategies, one of which is self-denigration. This study aimed to develop and validate a self-denigration model that fits the context of doctoral defense sessions in applied linguistics. Two focus group discussions provided the basis for developing this conceptual model, which assumed 10 functions for self-denigration, namely good manners, modesty, affability, altruism, assertiveness, diffidence, coercive self-deprecation, evasion, diplomacy, and flamboyance. These functions were used to design a 40-item questionnaire on the attitudes of applied linguists concerning self-denigration in defense sessions. The confirmatory factor analysis of the questionnaire indicated the predictive ability of the measurement model. The findings of this study suggest that self-denigration in doctoral defense sessions is the social representation of the participants’ values, ideas and practices adopted as a negotiation strategy and a conflict management policy for the purpose of establishing harmony and maintaining resilience. This study has implications for doctoral students and academics and illuminates further research on self-denigration in other contexts.

Keywords: academic discourse, politeness, self-denigration, grounded theory, dissertation defense

Procedia PDF Downloads 123
3662 Decomposition of Third-Order Discrete-Time Linear Time-Varying Systems into Its Second- and First-Order Pairs

Authors: Mohamed Hassan Abdullahi

Abstract:

Decomposition is used as a synthesis tool in several physical systems. It can also be used for tearing and restructuring in large-scale system analysis. On the other hand, the commutativity of series-connected systems has attracted the interest of researchers, and its advantages have been emphasized in the literature. This presentation looks into the necessary conditions for decomposing any third-order discrete-time linear time-varying system into a commutative pair of first- and second-order systems. Additional requirements are derived in the case of nonzero initial conditions. MATLAB simulations are used to verify the findings. The work is unique and is being published for the first time. It is critical from the standpoints of synthesis and design, because many design techniques in engineering systems rely on tearing and reconstruction, that is, the process of putting together simple components to create a finished product. Furthermore, it is demonstrated that, regarding sensitivity to initial conditions, some combinations may be better than others. The results of this work can be extended to the decomposition of fourth-order discrete-time linear time-varying systems into lower-order commutative pairs, either as two second-order commutative subsystems or as one first-order and one third-order commutative subsystem.

Keywords: commutativity, decomposition, discrete time-varying systems, systems

Procedia PDF Downloads 94
3661 Computational Design, Simulation, and Wind Tunnel Testing of a Stabilator for a Fixed Wing Aircraft

Authors: Kartik Gupta, Umar Khan, Mayur Parab, Dhiraj Chaudhari, Afzal Ansari

Abstract:

This report focuses on the design and simulation of a stabilator (an all-movable horizontal stabilizer) for a fixed-wing aircraft. The project involves the development of a computerized direct optimization procedure for designing an aircraft all-movable stabilator. This procedure evaluates various design variables to synthesize an optimal stabilator that meets specific requirements, including performance, control, stability, strength, and flutter velocity constraints. The work comprises CFD (computational fluid dynamics) analysis of the airfoils used in the stabilator, along with CFD analysis of the stabilizer and stabilator of the Thorp T-18 aircraft in software such as XFLR5 and ANSYS Fluent. A comparative analysis between a stabilizer and a stabilator of equal surface area under the same environmental conditions was carried out, and the percentage of drag reduced by the stabilator for the same amount of lift generated as the stabilizer was calculated. Lastly, wind tunnel testing was performed on scaled-down models of the stabilizer and stabilator, and the wind tunnel results were compared with those of the CFD.

Keywords: wind tunnel testing, CFD, stabilizer, stabilator

Procedia PDF Downloads 47
3660 Computer-Aided Exudate Diagnosis for the Screening of Diabetic Retinopathy

Authors: Shu-Min Tsao, Chung-Ming Lo, Shao-Chun Chen

Abstract:

Most diabetes patients tend to suffer from retinal complications of the disease; therefore, early detection and early treatment are important. In clinical examinations, the color fundus image is the most convenient and available examination method, and the status of the retina can be confirmed from the exudates that appear in the retinal image. However, routine screening of diabetic retinopathy by color fundus images is a time-consuming task for physicians. This study thus proposes a computer-aided exudate diagnosis for the screening of diabetic retinopathy. After removing the vessels and optic disc in the retinal image, six quantitative features, including region number, region area, and gray-scale values, were extracted from the remaining regions for classification. As a result, all six features were evaluated to be statistically significant (p-value < 0.001). The accuracy of classifying the retinal images into normal and diabetic retinopathy reached 82%. Based on this system, the clinical workload could be reduced, and the examination procedure may also be made more efficient.
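
A minimal sketch of the classification step described above, separating normal from diabetic retinopathy images on a six-feature vector; the features and labels are synthetic, and the classifier chosen here (logistic regression) is an assumption, not necessarily the study's method.

```python
# Minimal sketch (not the study's classifier): separating normal vs. diabetic
# retinopathy images from a six-feature vector (e.g. region number, region
# area, gray-scale statistics). Feature values and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 200
y = rng.integers(0, 2, size=n)                     # 0 = normal, 1 = diabetic retinopathy
# Six quantitative features; DR images get systematically larger exudate measures
X = rng.normal(size=(n, 6)) + y[:, None] * np.array([1.2, 1.0, 0.8, 0.5, 0.4, 0.3])

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```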

Keywords: computer-aided diagnosis, diabetic retinopathy, exudate, image processing

Procedia PDF Downloads 254
3659 Strategic Management Model for High Performance Sports Centers

Authors: Jose Ramon Sanabria Navarro, Yahilina Silveira Perez, Valentin Molina Moreno, Digna Dionisia Perez Bravo

Abstract:

The general objective of this research is to conceive a strategic management model for Latin American high-performance sports centers in order to improve their results. The sample consists of 62 managers, 187 trainers, 2,930 athletes and 62 expert researchers from centers in Cuba, Venezuela, Ecuador, Colombia and Argentina, for a total of 3,241 participants. The measurement instrument includes 12 key variables in the process of management strategies, which are consolidated through factor analysis and one-way ANOVA in SPSS 24.0. The reliability of the scale yielded an alpha higher than 0.7 in each sample. On this basis, a model is obtained that addresses the deficiencies detected in the diagnosis, building on the needs of the members of these organizations and considering criteria and theories of strategic management for the improvement of organizational results. The validation of the model for the high-performance sports centers of the countries analyzed aims to develop joint strategies that generate synergies in their mode of operation, which in turn enhances the sports organizations.
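
The reliability figure quoted above (alpha higher than 0.7) can be illustrated with a short sketch of Cronbach's alpha; the item responses are synthetic placeholders.

```python
# Minimal sketch (illustrative only): Cronbach's alpha for a block of Likert
# items, alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
# The item responses below are synthetic placeholders.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(120, 1))                          # shared construct
responses = latent + rng.normal(scale=0.7, size=(120, 6))   # 6 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```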

Keywords: sports organization, information management, decision making, control

Procedia PDF Downloads 121
3658 Bystander Perceived Severity on Traditional versus Cyber Bullying

Authors: C. Smith, T. Goga, T. Hancock

Abstract:

Bullying has been an increasingly prevalent problem in society for decades. Approximately one out of every four students reports being bullied at least once during the school year. Additionally, these instances of bullying are often witnessed but not reported by bystanders, which could depend on the type of bullying situation. Thus, the present study aims to investigate possible perceptual differences between traditional bullying (i.e., face to face) and cyberbullying from the bystander's point of view. Undergraduate students were given a bullying scenario to read from either the traditional condition or the cyber condition. They were then asked to rate how severe they perceived this behavior to be on a Likert-based scale. Participants were also asked whether they would intervene (yes or no) and what their individual response would be to the witnessed behavior (report/ignore/confront/other). Results indicated that, while there was no significant difference in perceived severity between the two bullying conditions, there was a significant difference in whether or not participants would intervene between the two types of scenarios. A significant effect was also found between the scenarios for response type. Together, these findings suggest that even though individuals may not be aware of how severely they perceive certain bullying behaviors, the responses they exhibit might suggest otherwise.

Keywords: bullying, bystander, cyber, severity, traditional

Procedia PDF Downloads 125
3657 Food Foam Characterization: Rheology, Texture and Microstructure Studies

Authors: Rutuja Upadhyay, Anurag Mehra

Abstract:

Solid food foams/cellular foods are colloidal systems which impart structure, texture and mouthfeel to many food products such as bread, cakes, ice-cream, and meringues. Their heterogeneous morphology makes the quantification of structure/mechanical relationships complex. The porous structure of solid food foams is highly influenced by the processing conditions, ingredient composition, and their interactions. Sensory perception of food foams depends on bubble size, shape, orientation, quantity and distribution, which determine the texture of foamed foods. The state and structure of the solid matrix control the deformation behavior of the food, such as elasticity/plasticity or fracture, which in turn has an effect on the force-deformation curves. The obvious step in obtaining the relationship between the mechanical properties and the porous structure is to quantify them simultaneously. Here, we attempt to study food foams such as bread dough, baked bread and steamed rice cakes to determine the link between ingredients and the corresponding effect of each of them on the rheology, microstructure, bubble size and texture of the final product. Dynamic rheometry (SAOS), confocal laser scanning microscopy, flatbed scanning, image analysis and texture profile analysis (TPA) have been used to characterize the foods studied. In all the above systems, a common observation was that when the mean bubble diameter is smaller, the product becomes harder, as evidenced by an increase in the storage and loss moduli (G′, G″), whereas when the mean bubble diameter is large, the product is softer, with a decrease in the moduli values (G′, G″). The bubble size distribution also affects the texture of foods. It was found that bread doughs with hydrocolloids (xanthan gum, alginate) give a more uniform bubble size distribution. Bread baking experiments were done to study the rheological changes and mechanisms involved in the structural transition of dough to crumb. Steamed rice cakes with xanthan gum (XG) added at 0.1% concentration showed lower hardness with a narrower pore size distribution and a larger mean pore diameter. Thus, control of bubble size could be an important parameter defining final food texture.

Keywords: food foams, rheology, microstructure, texture

Procedia PDF Downloads 318
3656 An Analytical Survey of Construction Changes: Gaps and Opportunities

Authors: Ehsan Eshtehardian, Saeed Khodaverdi

Abstract:

This paper surveys studies on construction change and reveals some potential future work. A full-scale investigation of the change literature, including change definitions, types, causes and effects, and change management systems, is carried out to explore coming trends in change research. The critical works in each area are selected to construct a timeline of construction change research. The findings show that the leap from best-practice guides in the late 1990s and generic process models in the early 2000s to very advanced modeling environments in the mid-2000s and early 2010s has created gaps, along with opportunities, for change researchers to develop simpler and more applicable models. Another finding is that there is a compelling similarity between change and risk prediction models. Therefore, integrating these two concepts, specifically from a proactive management point of view, may lead to a synergy and help project teams avoid rework. The findings also show that exploiting cause-effect relationship models in order to facilitate dispute resolution seems to be an interesting field for future work.

Keywords: construction change, change management systems, dispute resolutions, change literature

Procedia PDF Downloads 284
3655 An Improved Tracking Approach Using Particle Filter and Background Subtraction

Authors: Amir Mukhtar, Dr. Likun Xia

Abstract:

An improved, robust and efficient visual target tracking algorithm using particle filtering is proposed. Particle filtering has proven very successful for non-Gaussian and non-linear estimation problems. In this paper, the particle filter is used with a color feature to estimate the target state over time. Color distributions are applied because this feature is scale- and rotation-invariant, robust to partial occlusion, and computationally efficient. The performance is made more robust by choosing the YIQ color scheme. Tracking is performed by comparing chrominance histograms of the target and the candidate positions (particles). Color-based particle filter tracking often leads to inaccurate results when the light intensity changes during a video stream. Furthermore, a background subtraction technique is used for size estimation of the target. A qualitative evaluation of the proposed algorithm is performed on several real-world videos. The experimental results demonstrate that the improved algorithm can track moving objects very well under illumination changes, occlusion and a moving background.
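
A minimal sketch of the particle-weighting step of such a color-histogram particle filter: each candidate histogram is compared to the target with the Bhattacharyya coefficient and weighted accordingly. The histograms and the sigma parameter are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (not the paper's full tracker): the particle weighting step of
# a color-histogram particle filter. Each candidate histogram is compared to
# the target histogram with the Bhattacharyya coefficient, and the weight is a
# Gaussian of the Bhattacharyya distance. Histograms below are toy data.
import numpy as np

def bhattacharyya_coeff(p, q):
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(np.sqrt(p * q))

def particle_weights(target_hist, candidate_hists, sigma=0.1):
    weights = []
    for cand in candidate_hists:
        bc = bhattacharyya_coeff(target_hist, cand)
        d = np.sqrt(max(1.0 - bc, 0.0))                 # Bhattacharyya distance
        weights.append(np.exp(-d**2 / (2 * sigma**2)))
    w = np.asarray(weights)
    return w / w.sum()                                  # normalize for resampling

rng = np.random.default_rng(3)
target = rng.random(16) + 1e-6                          # reference chrominance histogram
candidates = [target + rng.normal(0, s, 16).clip(-0.5, 0.5) for s in (0.01, 0.1, 0.3)]
candidates = [np.abs(c) + 1e-6 for c in candidates]
print("normalized weights:", np.round(particle_weights(target, candidates), 3))
```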

Keywords: tracking, particle filter, histogram, corner points, occlusion, illumination

Procedia PDF Downloads 364
3654 Implementation of a Non-Poissonian Model in a Low-Seismicity Area

Authors: Ludivine Saint-Mard, Masato Nakajima, Gloria Senfaute

Abstract:

In areas with low to moderate seismicity, probabilistic seismic hazard analysis (PSHA) frequently uses a Poisson approach, which assumes independence of events in time and space, to determine the annual probability of earthquake occurrence. Nevertheless, in countries with a high seismic rate, such as Japan, non-Poissonian models, which assume that the next earthquake occurrence depends on the date of the previous one, are frequently used. The objective of this paper is to apply a non-Poissonian model in a region of low to moderate seismicity to obtain feedback on the following questions: can we overcome the lack of data to determine some key parameters, and can we deal with the uncertainties in order to apply this methodology widely in an industrial context? The Brownian Passage Time model was applied to a fault located in France; we conclude that even if the lack of data can be overcome with some calculations, the amount of uncertainty and the number of scenarios lead to numerous branches in the PSHA, making this method difficult to apply at large scale in low to moderate seismicity areas and in an industrial context.
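
As a rough sketch of the renewal model mentioned above, the Brownian Passage Time (inverse Gaussian) density and the conditional probability of an event in the next ΔT years, given the time elapsed since the last one, can be computed as follows; the mean recurrence interval and aperiodicity values are hypothetical.

```python
# Minimal sketch (illustrative only): the Brownian Passage Time (inverse
# Gaussian) recurrence density and the conditional probability of an event in
# the next dT years given that t_elapsed years have passed without one.
# f(t) = sqrt(mu / (2*pi*alpha^2*t^3)) * exp(-(t - mu)^2 / (2*alpha^2*mu*t))
# mu (mean recurrence) and alpha (aperiodicity) below are hypothetical.
import numpy as np

def bpt_pdf(t, mu, alpha):
    return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
        np.exp(-(t - mu)**2 / (2 * alpha**2 * mu * t))

def conditional_probability(mu, alpha, t_elapsed, dt, t_max=20000.0, n=200001):
    t = np.linspace(1e-3, t_max, n)
    f = bpt_pdf(t, mu, alpha)
    window = (t >= t_elapsed) & (t <= t_elapsed + dt)
    tail = t >= t_elapsed
    return np.trapz(f[window], t[window]) / np.trapz(f[tail], t[tail])

mu, alpha = 1500.0, 0.5        # mean recurrence interval [yr], aperiodicity
print(f"P(event in next 50 yr | 1200 yr elapsed) = "
      f"{conditional_probability(mu, alpha, 1200.0, 50.0):.4f}")
```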

Keywords: probabilistic seismic hazard, non-poissonian model, earthquake occurrence, low seismicity

Procedia PDF Downloads 43
3653 The Convolution Recurrent Network of Using Residual LSTM to Process the Output of the Downsampling for Monaural Speech Enhancement

Authors: Shibo Wei, Ting Jiang

Abstract:

Convolutional-recurrent neural networks (CRN) have recently achieved much success in the speech enhancement field. The common processing method is to use convolution layers to compress the feature space through multiple downsampling steps and then model the compressed features with an LSTM layer. Finally, the enhanced speech is obtained by deconvolution operations that integrate the global information of the speech sequence. However, the feature space compression process may cause a loss of information, so we propose to model the downsampling result of each step with a residual LSTM layer, then join it with the output of the corresponding deconvolution layer and feed them into the next deconvolution layer; in this way, we aim to better integrate the global information of the speech sequence. The experimental results show that the network model we introduce (RES-CRN) can achieve better performance than LSTM without residual connections and than simply stacking LSTM layers in the original CRN, in terms of scale-invariant signal-to-distortion ratio (SI-SNR), speech quality (PESQ), and intelligibility (STOI).
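
A minimal sketch of the scale-invariant SNR metric reported above, computed here on synthetic signals; the projection step removes any global gain mismatch before the energy ratio is taken.

```python
# Minimal sketch: the scale-invariant SNR metric used to score enhancement
# quality. The reference projection removes any global gain mismatch before
# the energy ratio is taken. Signals below are synthetic placeholders.
import numpy as np

def si_snr(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    s_target = (np.dot(estimate, reference) / (np.dot(reference, reference) + eps)) * reference
    e_noise = estimate - s_target
    return 10.0 * np.log10((np.dot(s_target, s_target) + eps) / (np.dot(e_noise, e_noise) + eps))

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)                  # 1 s of "clean" speech at 16 kHz
enhanced = 0.9 * clean + 0.1 * rng.standard_normal(16000)
print(f"SI-SNR = {si_snr(enhanced, clean):.2f} dB")
```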

Keywords: convolutional-recurrent neural networks, speech enhancement, residual LSTM, SI-SNR

Procedia PDF Downloads 184