Search results for: optimum conditions
625 Delving into Market-Driving Behavior: A Conceptual Roadmap to Delineating Its Key Antecedents and Outcomes
Authors: Konstantinos Kottikas, Vlasis Stathakopoulos, Ioannis G. Theodorakis, Efthymia Kottika
Abstract:
Theorists have argued that Market Orientation is comprised of two facets, namely the Market Driven and the Market Driving components. The present theoretical paper centers on the latter, which to date has been notably under-investigated. The term Market Driving (MD) pertains to influencing the structure of the market, or the behavior of market players in a direction that enhances the competitive edge of the firm. Presently, the main objectives of the paper are the specification of key antecedents and outcomes of Market Driving behavior. Market Driving firms behave proactively, by leading their customers and changing the rules of the game rather than by responding passively to them. Leading scholars were the first to conceptually conceive the notion, followed by some qualitative studies and a limited number of quantitative publications. However, recently, academicians noted that research on the topic remains limited, expressing a strong necessity for further insights. Concerning the key antecedents, top management’s Transformational Leadership (i.e. the form of leadership which influences organizational members by aligning their values, goals and aspirations to facilitate value-consistent behaviors) is one of the key drivers of MD behavior. Moreover, scholars have linked the MD concept with Entrepreneurship. Finally, the role that Employee’s Creativity plays in the development of MD behavior has been theoretically exemplified by a stream of literature. With respect to the key outcomes, it has been demonstrated that MD Behavior positively triggers firm Performance, while theorists argue that it empowers the Competitive Advantage of the firm. Likewise, researchers explicate that MD Behavior produces Radical Innovation. In order to test the robustness of the proposed theoretical framework, a combination of qualitative and quantitative methods is proposed. In particular, the conduction of in-depth interviews with distinguished executives and academicians, accompanied with a large scale quantitative survey will be employed, in order to triangulate the empirical findings. Given that it triggers overall firm’s success, the MD concept is of high importance to managers. Managers can become aware that passively reacting to market conditions is no longer sufficient. On the contrary, behaving proactively, leading the market, and shaping its status quo are new innovative approaches that lead to a paramount competitive posture and Innovation outcomes. This study also exemplifies that managers can foster MD Behavior through Transformational Leadership, Entrepreneurship and recruitment of Creative Employees. To date, the majority of the publications on Market Orientation is unilaterally directed towards the responsive (i.e. the Market Driven) component. The present paper further builds on scholars’ exhortations, and investigates the Market Driving facet, ultimately aspiring to conceptually integrate the somehow fragmented scientific findings, in a holistic framework.Keywords: entrepreneurial orientation, market driving behavior, market orientation
Procedia PDF Downloads 385
624 Investigating Seasonal Changes of Urban Land Cover with High Spatio-Temporal Resolution Satellite Data via Image Fusion
Authors: Hantian Wu, Bo Huang, Yuan Zeng
Abstract:
Divisions between wealthy and poor, private and public landscapes are propagated by the increasing economic inequality of cities. While these are the spatial reflections of larger social issues and problems, urban design can at least employ spatial techniques that promote more inclusive rather than exclusive, overlapping rather than segregated, interlinked rather than disconnected landscapes. Indeed, the type of edge or border between urban landscapes plays a critical role in the way the environment is perceived. China is experiencing rapid urbanization, which poses unpredictable environmental challenges. Urban green cover and water bodies are changing, and these changes are highly relevant to residents' wealth and happiness; however, very limited knowledge and data on their rapid changes are available. In this regard, monitoring the urban landscape at high frequency, evaluating and estimating the impacts of urban landscape changes, and understanding the driving forces behind them can make a significant contribution to urban planning and research. High-resolution remote sensing data have been widely applied to urban management in China, and a 10-meter-resolution urban land use map covering the entire country was published in 2018. However, that work focuses on large-scale, high-resolution land use mapping and does not specifically address the seasonal change of urban covers. High-resolution satellites also have long revisit cycles (e.g., Landsat 8 requires 16 days to revisit the same location), which cannot satisfy the requirement of monitoring urban landscape changes. On the other hand, aerial or unmanned aerial vehicle (UAV) sensing is limited by aviation regulations and cost, and has hardly been applied widely in mega-cities. Moreover, such data are limited by climate and weather conditions (e.g., cloud, fog), which make capturing spatial and temporal dynamics a persistent challenge for the remote sensing community; in particular, during the rainy season, no usable data may be available even from Sentinel satellites with a 5-day revisit interval. Many natural events and/or human activities drive the changes of urban covers. This project therefore aims to use high spatio-temporal fusion of remote sensing data to create short-cycle, high-resolution data sets for exploring high-frequency urban cover changes. The research will enhance the long-term monitoring applicability of high spatio-temporal fusion of remote sensing data for the urban landscape, supporting the management of landscape borders and promoting the inclusiveness of the urban landscape for all communities.
Keywords: urban land cover changes, remote sensing, high spatiotemporal fusion, urban management
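As an illustrative sketch of the idea behind spatio-temporal fusion (the abstract does not name a specific algorithm; the following assumes a simple STARFM-like additive blending, with notation of our own, not the authors'): a fine-resolution image at a base date is updated with the change signal observed in frequent coarse-resolution imagery,

\hat{F}(x, t_1) = F(x, t_0) + \left[ C(x, t_1) - C(x, t_0) \right]

where F denotes fine-resolution reflectance (e.g., Landsat-like), C coarse-resolution reflectance resampled to the fine grid (e.g., MODIS-like), t_0 a base date with a fine-resolution acquisition, and t_1 the prediction date. Operational fusion algorithms add spectral and spatial weighting of neighbouring pixels on top of this basic relation.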
Procedia PDF Downloads 126
623 Exploring the Neural Correlates of Different Interaction Types: A Hyperscanning Investigation Using the Pattern Game
Authors: Beata Spilakova, Daniel J. Shaw, Radek Marecek, Milan Brazdil
Abstract:
Hyperscanning affords a unique insight into the brain dynamics underlying human interaction by simultaneously scanning two or more individuals’ brain responses while they engage in dyadic exchange. This provides an opportunity to observe dynamic brain activations in all individuals participating in interaction, and possible interbrain effects among them. The present research aims to provide an experimental paradigm for hyperscanning research capable of delineating among different forms of interaction. Specifically, the goal was to distinguish between two dimensions: (1) interaction structure (concurrent vs. turn-based) and (2) goal structure (competition vs cooperation). Dual-fMRI was used to scan 22 pairs of participants - each pair matched on gender, age, education and handedness - as they played the Pattern Game. In this simple interactive task, one player attempts to recreate a pattern of tokens while the second player must either help (cooperation) or prevent the first achieving the pattern (competition). Each pair played the game iteratively, alternating their roles every round. The game was played in two consecutive sessions: first the players took sequential turns (turn-based), but in the second session they placed their tokens concurrently (concurrent). Conventional general linear model (GLM) analyses revealed activations throughout a diffuse collection of brain regions: The cooperative condition engaged medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC); in the competitive condition, significant activations were observed in frontal and prefrontal areas, insula cortices and the thalamus. Comparisons between the turn-based and concurrent conditions revealed greater precuneus engagement in the former. Interestingly, mPFC, PCC and insulae are linked repeatedly to social cognitive processes. Similarly, the thalamus is often associated with a cognitive empathy, thus its activation may reflect the need to predict the opponent’s upcoming moves. Frontal and prefrontal activation most likely represent the higher attentional and executive demands of the concurrent condition, whereby subjects must simultaneously observe their co-player and place his own tokens accordingly. The activation of precuneus in the turn-based condition may be linked to self-other distinction processes. Finally, by performing intra-pair correlations of brain responses we demonstrate condition-specific patterns of brain-to-brain coupling in mPFC and PCC. Moreover, the degree of synchronicity in these neural signals related to performance on the game. The present results, then, show that different types of interaction recruit different brain systems implicated in social cognition, and the degree of inter-player synchrony within these brain systems is related to nature of the social interaction.Keywords: brain-to-brain coupling, hyperscanning, pattern game, social interaction
Procedia PDF Downloads 341
622 The Inclusive Human Trafficking Checklist: A Dialectical Measurement Methodology
Authors: Maria C. Almario, Pam Remer, Jeff Resse, Kathy Moran, Linda Theander Adam
Abstract:
The identification of victims of human trafficking and the consequent service provision are characterized by a significant disconnection between the estimated prevalence of this issue and the number of cases identified. This poses a tremendous problem for human rights advocates, as it prevents data collection, information sharing, allocation of resources, and opportunities for international dialogue. The current paper introduces the Inclusive Human Trafficking Checklist (IHTC) as a measurement methodology with theoretical underpinnings derived from dialectic theory. The presence of human trafficking in a person's life is conceptualized as a dynamic and dialectic interaction between vulnerability and exploitation. The paper explores the operationalization of exploitation and vulnerability, evaluates the metric qualities of the instrument, evaluates whether there are differences in assessment based on the participant's profession, level of knowledge, and training, and assesses whether users of the instrument perceive it as useful. A total of 201 participants were asked to rate three vignettes predetermined by experts to qualify either as human trafficking cases or not. The participants were placed in three conditions: business as usual, and utilization of the IHTC with and without training. The results revealed a statistically significant level of agreement between the experts' diagnosis and the application of the IHTC, with an improvement of 40% in identification compared with the business-as-usual condition. While there was an improvement in identification in the group with training, the difference was found to have a small effect size. Participants who utilized the IHTC showed an increased ability to identify elements of identity-based vulnerabilities as well as elements of fraud, which, according to the results, are distinctive variables in cases of human trafficking. In terms of perceived utility, the results revealed higher mean scores for the groups utilizing the IHTC when compared to the business-as-usual condition. These findings suggest that the IHTC improves appropriate identification of cases and that it is perceived as a useful instrument. The application of the IHTC as a multidisciplinary instrument that can be utilized in legal and human services settings is discussed as a pivotal piece of helping victims restore their sense of dignity and advocate for legal, physical, and psychological reparations. It is noteworthy that this study was conducted with a sample in the United States and later re-tested in Colombia. The implications of the instrument for treatment conceptualization and intervention in human trafficking cases are discussed as opportunities for enhancement of victim well-being, restoration engagement, and activism. With the idea that what is personal is also political, we believe that careful observation and data collection in specific cases can inform new areas of human rights activism.
Keywords: exploitation, human trafficking, measurement, vulnerability, screening
Procedia PDF Downloads 331
621 Evaluation of Mixing and Oxygen Transfer Performances for a Stirred Bioreactor Containing P. chrysogenum Broths
Authors: A. C. Blaga, A. Cârlescu, M. Turnea, A. I. Galaction, D. Caşcaval
Abstract:
The performance of an aerobic stirred bioreactor for fungal fermentation was analyzed on the basis of mixing time and oxygen mass transfer coefficient by quantifying the influence of some specific geometrical and operational parameters of the bioreactor, as well as the rheological behavior of Penicillium chrysogenum broth (free mycelia and mycelial aggregates). The rheological properties of the fungal broth, controlled by the biomass concentration, its growth rate, and morphology, strongly affect the performance of the bioreactor. Experimental data showed that, for both morphological structures, the accumulation of fungal biomass induces a significant increase in broth viscosity and modifies the rheological behavior. For lower P. chrysogenum concentrations (both morphological conformations), the mixing time initially increases with aeration rate, reaches a maximum value, and then decreases. This variation can be explained by the formation of small bubbles due to the presence of the solid phase, which hinders bubble coalescence, the rising velocity of bubbles being reduced by the high apparent viscosity of the fungal broth. With biomass accumulation, the variation of mixing time with aeration rate gradually changes, a continuous reduction of mixing time with increasing air input flow being obtained at 33.5 g/l d.w. P. chrysogenum. Owing to the higher apparent viscosity, which considerably reduces the relative contribution of mechanical agitation to broth mixing, these phenomena are more pronounced for P. chrysogenum free mycelia. Due to the increase of broth apparent viscosity, biomass accumulation induces two significant effects on the oxygen transfer rate: the diminution of turbulence and the perturbation of the bubble dispersion-coalescence equilibrium. The increase of P. chrysogenum free mycelia concentration leads to a decrease of kLa values. Thus, over the considered variation domain of the main parameters, namely air superficial velocity from 8.36×10⁻⁴ to 5.02×10⁻³ m/s and specific power input from 100 to 500 W/m³, kLa was reduced 3.7-fold for a biomass concentration increase from 4 to 36.5 g/l d.w. The broth containing P. chrysogenum mycelial aggregates exhibits a particular behavior from the point of view of oxygen transfer: regardless of bioreactor operating conditions, the increase of biomass concentration initially leads to an increase of the oxygen mass transfer rate, a phenomenon that can be explained by the interaction of pellets with bubbles. The results are related to the increase of apparent broth viscosity over the considered range of biomass concentration: the apparent viscosity of the suspension of fungal mycelial aggregates increased 44.2-fold and that of fungal free mycelia 63.9-fold for a CX increase from 4 to 36.5 g/l d.w. By means of the experimental data, mathematical correlations describing the influences of the considered factors on mixing time and kLa have been proposed. The proposed correlations can be used in bioreactor performance evaluation, optimization, and scale-up.
Keywords: biomass concentration, mixing time, oxygen mass transfer, P. chrysogenum broth, stirred bioreactor
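The fitted correlations themselves are not given in the abstract. As a hedged template only, kLa in stirred, aerated bioreactors is commonly correlated with specific power input and superficial air velocity through a power-law expression of the van't Riet type, for example

k_L a = \alpha \left( \frac{P}{V} \right)^{\beta} v_s^{\gamma} \eta_a^{\delta}

where P/V is the specific power input (W/m³), v_s the superficial air velocity (m/s), \eta_a the apparent broth viscosity, and \alpha, \beta, \gamma, \delta empirically fitted constants; the authors' actual correlations may differ in form and in the factors retained.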
Procedia PDF Downloads 341
620 Design Approach to Incorporate Unique Performance Characteristics of Special Concrete
Authors: Devendra Kumar Pandey, Debabrata Chakraborty
Abstract:
The advancement in various concrete ingredients like plasticizers, additives and fibers, etc. has enabled concrete technologists to develop many viable varieties of special concretes in recent decades. Such various varieties of concrete have significant enhancement in green as well as hardened properties of concrete. A prudent selection of appropriate type of concrete can resolve many design and application issues in construction projects. This paper focuses on usage of self-compacting concrete, high early strength concrete, structural lightweight concrete, fiber reinforced concrete, high performance concrete and ultra-high strength concrete in the structures. The modified properties of strength at various ages, flowability, porosity, equilibrium density, flexural strength, elasticity, permeability etc. need to be carefully studied and incorporated into the design of the structures. The paper demonstrates various mixture combinations and the concrete properties that can be leveraged. The selection of such products based on the end use of structures has been proposed in order to efficiently utilize the modified characteristics of these concrete varieties. The study involves mapping the characteristics with benefits and savings for the structure from design perspective. Self-compacting concrete in the structure is characterized by high shuttering loads, better finish, and feasibility of closer reinforcement spacing. The structural design procedures can be modified to specify higher formwork strength, height of vertical members, cover reduction and increased ductility. The transverse reinforcement can be spaced at closer intervals compared to regular structural concrete. It allows structural lightweight concrete structures to be designed for reduced dead load, increased insulation properties. Member dimensions and steel requirement can be reduced proportionate to about 25 to 35 percent reduction in the dead load due to self-weight of concrete. Steel fiber reinforced concrete can be used to design grade slabs without primary reinforcement because of 70 to 100 percent higher tensile strength. The design procedures incorporate reduction in thickness and joint spacing. High performance concrete employs increase in the life of the structures by improvement in paste characteristics and durability by incorporating supplementary cementitious materials. Often, these are also designed for slower heat generation in the initial phase of hydration. The structural designer can incorporate the slow development of strength in the design and specify 56 or 90 days strength requirement. For designing high rise building structures, creep and elasticity properties of such concrete also need to be considered. Lastly, certain structures require a performance under loading conditions much earlier than final maturity of concrete. High early strength concrete has been designed to cater to a variety of usages at various ages as early as 8 to 12 hours. Therefore, an understanding of concrete performance specifications for special concrete is a definite door towards a superior structural design approach.Keywords: high performance concrete, special concrete, structural design, structural lightweight concrete
Procedia PDF Downloads 305
619 Blade-Coating Deposition of Semiconducting Polymer Thin Films: Light-To-Heat Converters
Authors: M. Lehtihet, S. Rosado, C. Pradère, J. Leng
Abstract:
Poly(3,4-ethylene dioxythiophene) polystyrene sulfonate (PEDOT: PSS), is a polymer mixture well-known for its semiconducting properties and is widely used in the coating industry for its visible transparency and high electronic conductivity (up to 4600 S/cm) as a transparent non-metallic electrode and in organic light-emitting diodes (OLED). It also possesses strong absorption properties in the Near Infra-Red (NIR) range (λ ranging between 900 nm to 2.5 µm). In the present work, we take advantage of this absorption to explore its potential use as a transparent light-to-heat converter. PEDOT: PSS aqueous dispersions are deposited onto a glass substrate using a blade-coating technique in order to produce uniform coatings with controlled thicknesses ranging in ≈ 400 nm to 2 µm. Blade-coating technique allows us good control of the deposit thickness and uniformity by the tuning of several experimental conditions (blade velocity, evaporation rate, temperature, etc…). This liquid coating technique is a well-known, non-expensive technique to realize thin film coatings on various substrates. For coatings on glass substrates destined to solar insulation applications, the ideal coating would be made of a material able to transmit all the visible range while reflecting the NIR range perfectly, but materials possessing similar properties still have unsatisfactory opacity in the visible too (for example, titanium dioxide nanoparticles). NIR absorbing thin films is a more realistic alternative for such an application. Under solar illumination, PEDOT: PSS thin films heat up due to absorption of NIR light and thus act as planar heaters while maintaining good transparency in the visible range. Whereas they screen some NIR radiation, they also generate heat which is then conducted into the substrate that re-emits this energy by thermal emission in every direction. In order to quantify the heating power of these coatings, a sample (coating on glass) is placed in a black enclosure and illuminated with a solar simulator, a lamp emitting a calibrated radiation very similar to the solar spectrum. The temperature of the rear face of the substrate is measured in real-time using thermocouples and a black-painted Peltier sensor measures the total entering flux (sum of transmitted and re-emitted fluxes). The heating power density of the thin films is estimated from a model of the thin film/glass substrate describing the system, and we estimate the Solar Heat Gain Coefficient (SHGC) to quantify the light-to-heat conversion efficiency of such systems. Eventually, the effect of additives such as dimethyl sulfoxide (DMSO) or optical scatterers (particles) on the performances are also studied, as the first one can alter the IR absorption properties of PEDOT: PSS drastically and the second one can increase the apparent optical path of light within the thin film material.Keywords: PEDOT: PSS, blade-coating, heat, thin-film, Solar spectrum
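For reference, the Solar Heat Gain Coefficient mentioned above has a standard definition for glazing: the fraction of incident solar irradiance that enters the interior, either by direct transmission or by absorption in the coated glass followed by inward re-emission and conduction. In our notation (not the authors'),

\mathrm{SHGC} = \tau + N\,\alpha

where \tau is the solar transmittance of the coated glass, \alpha its solar absorptance, and N the inward-flowing fraction of the absorbed energy, which depends on the interior and exterior heat transfer coefficients.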
Procedia PDF Downloads 165
618 Influence of Moss Cover and Seasonality on Soil Microbial Biomass and Enzymatic Activity in Different Central Himalayan Temperate Forest Types
Authors: Anshu Siwach, Qianlai Zhuang, Ratul Baishya
Abstract:
Context: This study focuses on the influence of moss cover and seasonality on soil microbial biomass and enzymatic activity in different Central Himalayan temperate forest types. Soil microbial biomass and enzymes are key indicators of microbial communities in soil and provide information on soil properties, microbial status, and organic matter dynamics. The activity of microorganisms in the soil varies depending on the vegetation type and environmental conditions. Therefore, this study aims to assess the effects of moss cover, seasons, and different forest types on soil microbial biomass carbon (SMBC), soil microbial biomass nitrogen (SMBN), and soil enzymatic activity in the Central Himalayas, Uttarakhand, India. Research Aim: The aim of this study is to evaluate the levels of SMBC, SMBN, and soil enzymatic activity in different temperate forest types under the influence of two ground covers (soil with and without moss cover) during the rainy and winter seasons. Question Addressed: This study addresses the following questions: 1. How does the presence of moss cover and seasonality affect soil microbial biomass and enzymatic activity? 2. What is the influence of different forest types on SMBC, SMBN, and enzymatic activity? Methodology: Soil samples were collected from different forest types during the rainy and winter seasons. The study utilizes the chloroform-fumigation extraction method to determine SMBC and SMBN. Standard methodologies are followed to measure enzymatic activities, including dehydrogenase, acid phosphatase, aryl sulfatase, β-glucosidase, phenol oxidase, and urease. Findings: The study reveals significant variations in SMBC, SMBN, and enzymatic activity under different ground covers, within the rainy and winter seasons, and among the forest types. Moss cover positively influences SMBC and enzymatic activity during the rainy season, while soil without moss cover shows higher values during the winter season. Quercus-dominated forests, as well as Cupressus torulosa forests, exhibit higher levels of SMBC and enzymatic activity, while Pinus roxburghii forests show lower levels. Theoretical Importance: The findings highlight the importance of considering mosses in forest management plans to improve soil microbial diversity, enzymatic activity, soil quality, and health. Additionally, this research contributes to understanding the role of lower plants, such as mosses, in influencing ecosystem dynamics. Conclusion: The study concludes that moss cover during the rainy season significantly influences soil microbial biomass and enzymatic activity. Quercus and Cupressus torulosa dominated forests demonstrate higher levels of SMBC and enzymatic activity, indicating the importance of these forest types in sustaining soil microbial diversity and soil health. Including mosses in forest management plans can improve soil quality and overall ecosystem dynamics.Keywords: moss cover, seasons, soil enzymes, soil microbial biomass, temperate forest types
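The chloroform-fumigation extraction method cited above conventionally estimates soil microbial biomass carbon from the difference in extractable organic carbon between fumigated and unfumigated subsamples, divided by an extraction-efficiency factor; the factor used in this particular study is not stated, so the value below is the common literature default:

\mathrm{SMBC} = \frac{C_{\text{fumigated}} - C_{\text{unfumigated}}}{k_{EC}}, \qquad k_{EC} \approx 0.45

SMBN is obtained analogously from extractable nitrogen with its own conversion factor.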
Procedia PDF Downloads 67
617 Transient Heat Transfer: Experimental Investigation near the Critical Point
Authors: Andreas Kohlhepp, Gerrit Schatte, Wieland Christoph, Spliethoff Hartmut
Abstract:
In recent years the research of heat transfer phenomena of water and other working fluids near the critical point experiences a growing interest for power engineering applications. To match the highly volatile characteristics of renewable energies, conventional power plants need to shift towards flexible operation. This requires speeding up the load change dynamics of steam generators and their heating surfaces near the critical point. In dynamic load transients, both a high heat flux with an unfavorable ratio to the mass flux and a high difference in fluid and wall temperatures, may cause problems. It may lead to deteriorated heat transfer (at supercritical pressures), dry-out or departure from nucleate boiling (at subcritical pressures), all cases leading to an extensive rise of temperatures. For relevant technical applications, the heat transfer coefficients need to be predicted correctly in case of transient scenarios to prevent damage to the heated surfaces (membrane walls, tube bundles or fuel rods). In transient processes, the state of the art method of calculating the heat transfer coefficients is using a multitude of different steady-state correlations for the momentarily existing local parameters for each time step. This approach does not necessarily reflect the different cases that may lead to a significant variation of the heat transfer coefficients and shows gaps in the individual ranges of validity. An algorithm was implemented to calculate the transient behavior of steam generators during load changes. It is used to assess existing correlations for transient heat transfer calculations. It is also desirable to validate the calculation using experimental data. By the use of a new full-scale supercritical thermo-hydraulic test rig, experimental data is obtained to describe the transient phenomena under dynamic boundary conditions as mentioned above and to serve for validation of transient steam generator calculations. Aiming to improve correlations for the prediction of the onset of deteriorated heat transfer in both, stationary and transient cases the test rig was specially designed for this task. It is a closed loop design with a directly electrically heated evaporation tube, the total heating power of the evaporator tube and the preheater is 1MW. To allow a big range of parameters, including supercritical pressures, the maximum pressure rating is 380 bar. The measurements contain the most important extrinsic thermo-hydraulic parameters. Moreover, a high geometric resolution allows to accurately predict the local heat transfer coefficients and fluid enthalpies.Keywords: departure from nucleate boiling, deteriorated heat transfer, dryout, supercritical working fluid, transient operation of steam generators
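To make the per-time-step use of steady-state correlations concrete: at each time step the local bulk state of the fluid is evaluated and inserted into a single-phase correlation, for instance the Dittus-Boelter equation (shown purely as a generic example; supercritical-water correlations in practice add wall-to-bulk property-ratio corrections, and this is not necessarily the correlation set used by the authors):

Nu = 0.023\, Re^{0.8}\, Pr^{0.4}, \qquad h = \frac{Nu\,\lambda_b}{d_i}

with Re and Pr evaluated at the momentary local bulk conditions, \lambda_b the bulk thermal conductivity, and d_i the inner tube diameter.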
Procedia PDF Downloads 224
616 Scalable Performance Testing: Facilitating The Assessment Of Application Performance Under Substantial Loads And Mitigating The Risk Of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (Virtual machine), cloud logs (Monitor errors and logs), EFS (File storage system), and security groups offers several key benefits for organizations. Creating performance test framework using this approach helps optimize resource utilization, effective benchmarking, increased reliability, cost savings by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance. The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.Keywords: identify crashes of application under heavy load, JMeter with cloud Services, Scalable performance testing, JMeter master and slave using cloud Services
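As a rough sketch of how the master node can launch such a distributed run in non-GUI mode, the snippet below wraps the standard JMeter command line from Python; the slave IP addresses, file names, and slave count are placeholders, and a real setup additionally requires jmeter-server running on each slave together with the security-group and shared-storage configuration described above.

import subprocess

# Hypothetical slave EC2 instances already running jmeter-server
SLAVE_HOSTS = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]

def run_distributed_test(test_plan="load_test.jmx",
                         results_file="results.jtl",
                         report_dir="html_report"):
    """Launch a distributed JMeter run from the master node in non-GUI mode."""
    command = [
        "jmeter",
        "-n",                          # non-GUI mode
        "-t", test_plan,               # test plan shared with the slaves (e.g., via EFS)
        "-R", ",".join(SLAVE_HOSTS),   # remote hosts that generate the load
        "-l", results_file,            # consolidated results collected by the master
        "-e", "-o", report_dir,        # generate the HTML dashboard after the run
    ]
    subprocess.run(command, check=True)

if __name__ == "__main__":
    run_distributed_test()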
Procedia PDF Downloads 30
615 Promoting Incubation Support to Youth Led Enterprises: A Case Study from Bangladesh to Eradicate Hazardous Child Labour through Microfinance
Authors: Md Maruf Hossain Koli
Abstract:
The issue of child labor is enormous and cannot be ignored in Bangladesh. The problem of child exploitation is a socio-economic reality of Bangladesh. This paper will indicate the causes, consequences, and possibilities of using microfinance as remedies of hazardous child labor in Bangladesh. Poverty is one of the main reasons for children to become child laborers. It is an indication of economic vulnerability, inadequate law, and enforcement system and cultural and social inequities along with the inaccessible and low-quality educational system. An attempt will be made in this paper to explore and analyze child labor scenario in Bangladesh and will explain holistic intervention of BRAC, the largest nongovernmental organization in the world to address child labor through promoting incubation support to youth-led enterprises. A combination of research methods were used to write this paper. These include non-reactive observation in the form of literature review, desk studies as well as reactive observation like site visits and, semi-structured interviews. Hazardous Child labor is a multi-dimensional and complex issue. This paper was guided by the answer following research questions to better understand the current context of hazardous child labor in Bangladesh, especially in Dhaka city. The author attempted to figure out why child labor should be considered as a development issue? Further, it also encountered why child labor in Bangladesh is not being reduced at an expected pace? And finally what could be a sustainable solution to eradicate this situation. One of the most challenging characteristics of child labor is that it interrupts a child’s education and cognitive development hence limiting the building of human capital and fostering intergenerational reproduction of poverty and social exclusion. Children who are working full-time and do not attend school, cannot develop the necessary skills. This leads them and their future generation to remain in poor socio-economic condition as they do not get a better paying job. The vicious cycle of poverty will be reproduced and will slow down sustainable development. The outcome of the research suggests that most of the parents send their children to work to help them to increase family income. In addition, most of the youth engaged in hazardous work want to get training, mentoring and easy access to finance to start their own business. The intervention of BRAC that includes classroom and on the job training, tailored mentoring, health support, access to microfinance and insurance help them to establish startup. This intervention is working in developing business and management capacity through public-private partnerships and technical consulting. Supporting entrepreneurs, improving working conditions with micro, small and medium enterprises and strengthening value chains focusing on youth and children engaged with hazardous child labor.Keywords: child labour, enterprise development, microfinance, youth entrepreneurship
Procedia PDF Downloads 129
614 Training for Safe Tree Felling in the Forest with Symmetrical Collaborative Virtual Reality
Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
Abstract:
One of the most common pieces of equipment still used today for pruning, felling, and processing trees is the chainsaw in forestry. However, chainsaw use highlights dangers and one of the highest rates of accidents in both professional and non-professional work. Felling is proportionally the most dangerous phase, both in severity and frequency, because of the risk of being hit by the plant the operator wants to cut down. To avoid this, a correct sequence of chainsaw cuts must be taught concerning the different conditions of the tree. Virtual reality (VR) makes it possible to virtually simulate chainsaw use without danger of injury. The limitations of the existing applications are as follow. The existing platforms are not symmetrical collaborative because the trainee is only in virtual reality, and the trainer can only see the virtual environment on a laptop or PC, and this results in an inefficient teacher-learner relationship. Therefore, most applications only involve the use of a virtual chainsaw, and the trainee thus cannot feel the real weight and inertia of a real chainsaw. Finally, existing applications simulate only a few cases of tree felling. The objectives of this research were to implement and test a symmetrical collaborative training application based on VR and mixed reality (MR) with the overlap between real and virtual chainsaws in MR. The research and training platform was developed for the Meta quest 2 head-mounted display. The research and training platform application is based on the Unity 3D engine, and Present Platform Interaction SDK (PPI-SDK) developed by Meta. PPI-SDK avoids the use of controllers and enables hand tracking and MR. With the combination of these two technologies, it was possible to overlay a virtual chainsaw with a real chainsaw in MR and synchronize their movements in VR. This ensures that the user feels the weight of the actual chainsaw, tightens the muscles, and performs the appropriate movements during the test allowing the user to learn the correct body posture. The chainsaw works only if the right sequence of cuts is made to felling the tree. Contact detection is done by Unity's physics system, which allows the interaction of objects that simulate real-world behavior. Each cut of the chainsaw is defined by a so-called collider, and the felling of the tree can only occur if the colliders are activated in the right order simulating a safe technique felling. In this way, the user can learn how to use the chainsaw safely. The system is also multiplayer, so the student and the instructor can experience VR together in a symmetrical and collaborative way. The platform simulates the following tree-felling situations with safe techniques: cutting the tree tilted forward, cutting the medium-sized tree tilted backward, cutting the large tree tilted backward, sectioning the trunk on the ground, and cutting branches. The application is being evaluated on a sample of university students through a special questionnaire. The results are expected to test both the increase in learning compared to a theoretical lecture and the immersive and telepresence of the platform.Keywords: chainsaw, collaborative symmetric virtual reality, mixed reality, operator training
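The order-dependent felling logic described above can be reduced to a small check: each virtual cut collider reports its activation, and the tree is allowed to fall only if the recorded sequence matches the safe sequence for that tree condition. A minimal sketch of that check is given below in Python rather than in the Unity/C# code actually used by the platform, and the cut labels are hypothetical.

# Hypothetical safe felling sequence for a tree leaning forward
SAFE_SEQUENCE = ["upper_notch_cut", "lower_notch_cut", "back_cut"]

class FellingValidator:
    """Tracks chainsaw cuts and releases the tree only after the correct sequence."""

    def __init__(self, safe_sequence):
        self.safe_sequence = list(safe_sequence)
        self.performed = []

    def register_cut(self, cut_name):
        # Called whenever the chainsaw blade triggers a cut collider
        self.performed.append(cut_name)

    def tree_can_fall(self):
        # The tree falls only if the cuts were made in exactly the safe order
        return self.performed == self.safe_sequence

validator = FellingValidator(SAFE_SEQUENCE)
for cut in ["upper_notch_cut", "lower_notch_cut", "back_cut"]:
    validator.register_cut(cut)
print(validator.tree_can_fall())  # True only when the safe order was followed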
Procedia PDF Downloads 107
613 Pesticides Monitoring in Surface Waters of the São Paulo State, Brazil
Authors: Fabio N. Moreno, Letícia B. Marinho, Beatriz D. Ruiz, Maria Helena R. B. Martins
Abstract:
Brazil is a top consumer of pesticides worldwide, and the São Paulo State is one of the highest consumers among the Brazilian federative states. However, representative data about the occurrence of pesticides in surface waters of the São Paulo State is scarce. This paper aims to present the results of pesticides monitoring executed within the Water Quality Monitoring Network of CETESB (The Environmental Agency of the São Paulo State) between the 2018-2022 period. Surface water sampling points (21 to 25) were selected within basins of predominantly agricultural land-use (5 to 85% of cultivated areas). The samples were collected throughout the year, including high-flow and low-flow conditions. The frequency of sampling varied between 6 to 4 times per year. Selection of pesticide molecules for monitoring followed a prioritizing process from EMBRAPA (Brazilian Agricultural Research Corporation) databases of pesticide use. Pesticides extractions in aqueous samples were performed according to USEPA 3510C and 3546 methods following quality assurance and quality control procedures. Determination of pesticides in water (ng L-1) extracts were performed by high-performance liquid chromatography coupled with mass spectrometry (HPLC-MS) and by gas chromatography with nitrogen phosphorus (GC-NPD) and electron capture detectors (GC-ECD). The results showed higher frequencies (20- 65%) in surface water samples for Carbendazim (fungicide), Diuron/Tebuthiuron (herbicides) and Fipronil/Imidaclopride (insecticides). The frequency of observations for these pesticides were generally higher in monitoring points located in sugarcane cultivated areas. The following pesticides were most frequently quantified above the Aquatic life benchmarks for freshwater (USEPA Office of Pesticide Programs, 2023) or Brazilian Federal Regulatory Standards (CONAMA Resolution no. 357/2005): Atrazine, Imidaclopride, Carbendazim, 2,4D, Fipronil, and Chlorpiryfos. Higher median concentrations for Diuron and Tebuthiuron in the rainy months (october to march) indicated pesticide transport through surface runoff. However, measurable concentrations in the dry season (april to september) for Fipronil and Imidaclopride also indicates pathways related to subsurface or base flow discharge after pesticide soil infiltration and leaching or dry deposition following pesticide air spraying. With exception to Diuron, no temporal trends related to median concentrations of the most frequently quantified pesticides were observed. These results are important to assist policymakers in the development of strategies aiming at reducing pesticides migration to surface waters from agricultural areas. Further studies will be carried out in selected points to investigate potential risks as a result of pesticides exposure on aquatic biota.Keywords: pesticides monitoring, são paulo state, water quality, surface waters
Procedia PDF Downloads 59
612 Pricing Effects on Equitable Distribution of Forest Products and Livelihood Improvement in Nepalese Community Forestry
Authors: Laxuman Thakuri
Abstract:
Despite the large number of in-depth case studies focused on policy analysis, institutional arrangement, and collective action of common property resource management; how the local institutions take the pricing decision of forest products in community forest management and what kinds of effects produce it, the answers of these questions are largely silent among the policy-makers and researchers alike. The study examined how the local institutions take the pricing decision of forest products in the lowland community forestry of Nepal and how the decisions affect to equitable distribution of benefits and livelihood improvement which are also objectives of Nepalese community forestry. The study assumes that forest products pricing decisions have multiple effects on equitable distribution and livelihood improvement in the areas having heterogeneous socio-economic conditions. The dissertation was carried out at four community forests of lowland, Nepal that has characteristics of high value species, matured-experience of community forest management and better record-keeping system of forest products production, pricing and distribution. The questionnaire survey, individual to group discussions and direct field observation were applied for data collection from the field, and Lorenz curve, gini-coefficient, χ²-text, and SWOT (Strong, Weak, Opportunity, and Threat) analysis were performed for data analysis and results interpretation. The dissertation demonstrates that the low pricing strategy of high-value forest products was supposed crucial to increase the access of socio-economically weak households, and to and control over the important forest products such as timber, but found counter productive as the strategy increased the access of socio-economically better-off households at higher rate. In addition, the strategy contradicts to collect a large-scale community fund and carry out livelihood improvement activities as per the community forestry objectives. The crucial part of the study is despite the fact of low pricing strategy; the timber alone contributed large part of community fund collection. The results revealed close relation between pricing decisions and livelihood objectives. The action research result shows that positive price discrimination can slightly reduce the prevailing inequality and increase the fund. However, it lacks to harness the full price of forest products and collects a large-scale community fund. For broader outcomes of common property resource management in terms of resource sustainability, equity, and livelihood opportunity, the study suggests local institutions to harness the full price of resource products with respect to the local market.Keywords: community, equitable, forest, livelihood, socioeconomic, Nepal
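Because the analysis relies on Lorenz curves and Gini coefficients to quantify how equitably forest products and community funds are distributed, a minimal sketch of computing such a coefficient from household-level benefit values is given below; the sample figures are invented purely for illustration.

import numpy as np

def gini(values):
    """Gini coefficient of a distribution (0 = perfect equality, 1 = maximal inequality)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    total = x.sum()
    ranks = np.arange(1, n + 1)
    # Equivalent to twice the area between the Lorenz curve and the line of equality
    return (2.0 * np.sum(ranks * x) - (n + 1) * total) / (n * total)

# Hypothetical annual forest-product benefits per household (arbitrary currency units)
household_benefits = [120, 80, 60, 40, 500, 30, 25, 600]
print(round(gini(household_benefits), 3))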
Procedia PDF Downloads 537
611 Combining Patients Pain Scores Reports with Functionality Scales in Chronic Low Back Pain Patients
Authors: Ivana Knezevic, Kenneth D. Candido, N. Nick Knezevic
Abstract:
Background: While pain intensity scales remain a generally accepted assessment tool, the numeric pain rating score is highly subjective; we nevertheless rely on it to make judgments about treatment effects. Misinterpretation of pain can lead practitioners to underestimate or overestimate the patient's medical condition. The purpose of this study was to analyze how the numeric rating pain scores given by patients with low back pain correlate with their functional activity levels. Methods: We included 100 consecutive patients with radicular low back pain (LBP) after Institutional Review Board (IRB) approval. Pain scores, numeric rating scale (NRS) responses at rest and in movement, and Oswestry Disability Index (ODI) questionnaire answers were collected 10 times over 12 months. The ODI questionnaire targets a patient's activities and physical limitations as well as the patient's ability to manage stationary everyday duties. Statistical analysis was performed using SPSS Software version 20. Results: The average duration of LBP was 14±22 months at the beginning of the study. All patients included in the study were between 24 and 78 years old (average 48.85±14); 56% were women and 44% men. Differences between ODI and pain scores in the range from -10% to +10% were considered "normal". Discrepancies in pain scores were graded as mild between -30% and -11% or +11% and +30%; moderate between -50% and -31% or +31% and +50%; and severe if differences were more than -50% or +50%. Our data showed that pain scores at rest correlate well with ODI in 65% of patients. In 30% of patients mild discrepancies were present (negative in 21% and positive in 9%), 4% of patients had moderate and 1% severe discrepancies. "Negative discrepancy" means that patients graded their pain scores much higher than their functional ability and most likely exaggerated their pain. "Positive discrepancy" means that patients graded their pain scores much lower than their functional ability and most likely underrated their pain. Comparisons between ODI and pain scores during movement showed normal correlation in only 39% of patients. Mild discrepancies were present in 42% (negative in 39% and positive in 3%); moderate in 14% (all negative); and severe in 5% (all negative) of patients. Thus, 58% unknowingly exaggerated their pain during movement. Inconsistencies were equal in male and female patients (p=0.606 and p=0.928). Our results showed a negative correlation between patients' satisfaction and the degree of inconsistency in pain reporting. Furthermore, patients taking opioids showed more discrepancies in reporting pain intensity scores than did patients taking non-opioid analgesics or not taking medications for LBP (p=0.038). There was a highly statistically significant correlation between morphine equivalent doses and the level of discrepancy (p<0.0001). Conclusion: We have put emphasis on patient education in pain evaluation as a vital step toward accurate pain level reporting, and we have shown its direct correlation with patients' satisfaction. Furthermore, we must identify other parameters in defining our patients' chronic pain conditions, such as functionality scales and quality of life questionnaires, and should move away from an overly simplistic subjective rating scale.
Keywords: pain score, functionality scales, low back pain, lumbar
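The grading scheme reported above can be expressed as a simple classification of the difference between the two measures. In the sketch below the NRS score (0-10) is scaled by 10 to make it comparable with the ODI percentage; that scaling is our assumption, since the abstract does not state how the two scales were made commensurable.

def discrepancy_grade(nrs_score, odi_percent):
    """Grade the gap between reported pain (NRS 0-10) and disability (ODI 0-100%)."""
    diff = odi_percent - nrs_score * 10  # negative: pain likely exaggerated; positive: likely underrated
    magnitude = abs(diff)
    if magnitude <= 10:
        grade = "normal"
    elif magnitude <= 30:
        grade = "mild"
    elif magnitude <= 50:
        grade = "moderate"
    else:
        grade = "severe"
    direction = "negative" if diff < 0 else "positive" if diff > 0 else "none"
    return grade, direction

print(discrepancy_grade(nrs_score=8, odi_percent=30))  # ('moderate', 'negative')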
Procedia PDF Downloads 235
610 Harnessing Artificial Intelligence for Early Detection and Management of Infectious Disease Outbreaks
Authors: Amarachukwu B. Isiaka, Vivian N. Anakwenze, Chinyere C. Ezemba, Chiamaka R. Ilodinso, Chikodili G. Anaukwu, Chukwuebuka M. Ezeokoli, Ugonna H. Uzoka
Abstract:
Infectious diseases continue to pose significant threats to global public health, necessitating advanced and timely detection methods for effective outbreak management. This study explores the integration of artificial intelligence (AI) in the early detection and management of infectious disease outbreaks. Leveraging vast datasets from diverse sources, including electronic health records, social media, and environmental monitoring, AI-driven algorithms are employed to analyze patterns and anomalies indicative of potential outbreaks. Machine learning models, trained on historical data and continuously updated with real-time information, contribute to the identification of emerging threats. The implementation of AI extends beyond detection, encompassing predictive analytics for disease spread and severity assessment. Furthermore, the paper discusses the role of AI in predictive modeling, enabling public health officials to anticipate the spread of infectious diseases and allocate resources proactively. Machine learning algorithms can analyze historical data, climatic conditions, and human mobility patterns to predict potential hotspots and optimize intervention strategies. The study evaluates the current landscape of AI applications in infectious disease surveillance and proposes a comprehensive framework for their integration into existing public health infrastructures. The implementation of an AI-driven early detection system requires collaboration between public health agencies, healthcare providers, and technology experts. Ethical considerations, privacy protection, and data security are paramount in developing a framework that balances the benefits of AI with the protection of individual rights. The synergistic collaboration between AI technologies and traditional epidemiological methods is emphasized, highlighting the potential to enhance a nation's ability to detect, respond to, and manage infectious disease outbreaks in a proactive and data-driven manner. The findings of this research underscore the transformative impact of harnessing AI for early detection and management, offering a promising avenue for strengthening the resilience of public health systems in the face of evolving infectious disease challenges. This paper advocates for the integration of artificial intelligence into the existing public health infrastructure for early detection and management of infectious disease outbreaks. The proposed AI-driven system has the potential to revolutionize the way we approach infectious disease surveillance, providing a more proactive and effective response to safeguard public health.Keywords: artificial intelligence, early detection, disease surveillance, infectious diseases, outbreak management
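As a toy illustration of the kind of anomaly flagging described above, and not the authors' system, the sketch below marks days whose reported case counts exceed a rolling baseline by a fixed z-score threshold; the counts are invented.

import numpy as np

def flag_outbreak_days(daily_cases, window=14, z_threshold=3.0):
    """Flag days whose case count is anomalously high relative to the recent baseline."""
    cases = np.asarray(daily_cases, dtype=float)
    flagged = []
    for t in range(window, len(cases)):
        baseline = cases[t - window:t]
        mu, sigma = baseline.mean(), baseline.std()
        z = (cases[t] - mu) / sigma if sigma > 0 else 0.0
        if z > z_threshold:
            flagged.append(t)
    return flagged

# Hypothetical daily case counts with a surge at the end
counts = [5, 4, 6, 5, 7, 6, 5, 4, 6, 5, 7, 6, 5, 6, 5, 6, 30, 45]
print(flag_outbreak_days(counts))  # indices of days exceeding the rolling baseline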
Procedia PDF Downloads 68
609 Exploring Valproic Acid (VPA) Analogues Interactions with HDAC8 Involved in VPA Mediated Teratogenicity: A Toxicoinformatics Analysis
Authors: Sakshi Piplani, Ajit Kumar
Abstract:
Valproic acid (VPA) is the first synthetic therapeutic agent used to treat epileptic disorders, which account for affecting nearly 1% world population. Teratogenicity caused by VPA has prompted the search for next generation drug with better efficacy and lower side effects. Recent studies have posed HDAC8 as direct target of VPA that causes the teratogenic effect in foetus. We have employed molecular dynamics (MD) and docking simulations to understand the binding mode of VPA and their analogues onto HDAC8. A total of twenty 3D-structures of human HDAC8 isoforms were selected using BLAST-P search against PDB. Multiple sequence alignment was carried out using ClustalW and PDB-3F07 having least missing and mutated regions was selected for study. The missing residues of loop region were constructed using MODELLER and energy was minimized. A set of 216 structural analogues (>90% identity) of VPA were obtained from Pubchem and ZINC database and their energy was optimized with Chemsketch software using 3-D CHARMM-type force field. Four major neurotransmitters (GABAt, SSADH, α-KGDH, GAD) involved in anticonvulsant activity were docked with VPA and its analogues. Out of 216 analogues, 75 were selected on the basis of lower binding energy and inhibition constant as compared to VPA, thus predicted to have anti-convulsant activity. Selected hHDAC8 structure was then subjected to MD Simulation using licenced version YASARA with AMBER99SB force field. The structure was solvated in rectangular box of TIP3P. The simulation was carried out with periodic boundary conditions and electrostatic interactions and treated with Particle mesh Ewald algorithm. pH of system was set to 7.4, temperature 323K and pressure 1atm respectively. Simulation snapshots were stored every 25ps. The MD simulation was carried out for 20ns and pdb file of HDAC8 structure was saved every 2ns. The structures were analysed using castP and UCSF Chimera and most stabilized structure (20ns) was used for docking study. Molecular docking of 75 selected VPA-analogues with PDB-3F07 was performed using AUTODOCK4.2.6. Lamarckian Genetic Algorithm was used to generate conformations of docked ligand and structure. The docking study revealed that VPA and its analogues have more affinity towards ‘hydrophobic active site channel’, due to its hydrophobic properties and allows VPA and their analogues to take part in van der Waal interactions with TYR24, HIS42, VAL41, TYR20, SER138, TRP137 while TRP137 and SER138 showed hydrogen bonding interaction with VPA-analogues. 14 analogues showed better binding affinity than VPA. ADMET SAR server was used to predict the ADMET properties of selected VPA analogues for predicting their druggability. On the basis of ADMET screening, 09 molecules were selected and are being used for in-vivo evaluation using Danio rerio model.Keywords: HDAC8, docking, molecular dynamics simulation, valproic acid
Procedia PDF Downloads 255
608 Moderate Electric Field and Ultrasound as Alternative Technologies to Raspberry Juice Pasteurization Process
Authors: Cibele F. Oliveira, Debora P. Jaeschke, Rodrigo R. Laurino, Amanda R. Andrade, Ligia D. F. Marczak
Abstract:
Raspberry is well-known as a good source of phenolic compounds, mainly anthocyanin. Some studies pointed out the importance of these bioactive compounds consumption, which is related to the decrease of the risk of cancer and cardiovascular diseases. The most consumed raspberry products are juices, yogurts, ice creams and jellies and, to ensure the safety of these products, raspberry is commonly pasteurized, for enzyme and microorganisms inactivation. Despite being efficient, the pasteurization process can lead to degradation reactions of the bioactive compounds, decreasing the products healthy benefits. Therefore, the aim of the present work was to evaluate moderate electric field (MEF) and ultrasound (US) technologies application on the pasteurization process of raspberry juice and compare the results with conventional pasteurization process. For this, phenolic compounds, anthocyanin content and physical-chemical parameters (pH, color changes, titratable acidity) of the juice were evaluated before and after the treatments. Moreover, microbiological analyses of aerobic mesophiles microorganisms, molds and yeast were performed in the samples before and after the treatments, to verify the potential of these technologies to inactivate microorganisms. All the pasteurization processes were performed in triplicate for 10 min, using a cylindrical Pyrex® vessel with a water jacket. The conventional pasteurization was performed at 90 °C using a hot water bath connected to the extraction cell. The US assisted pasteurization was performed using 423 and 508 W cm-2 (75 and 90 % of ultrasound intensity). It is important to mention that during US application the temperature was kept below 35 °C; for this, the water jacket of the extraction cell was connected to a water bath with cold water. MEF assisted pasteurization experiments were performed similarly to US experiments, using 25 and 50 V. Control experiments were performed at the maximum temperature of US and MEF experiments (35 °C) to evaluate only the effect of the aforementioned technologies on the pasteurization. The results showed that phenolic compounds concentration in the juice was not affected by US and MEF application. However, it was observed that the US assisted pasteurization, performed at the highest intensity, decreased anthocyanin content in 33 % (compared to in natura juice). This result was possibly due to the cavitation phenomena, which can lead to free radicals formation and accumulation on the medium; these radicals can react with anthocyanin decreasing the content of these antioxidant compounds in the juice. Physical-chemical parameters did not present statistical differences for samples before and after the treatments. Microbiological analyses results showed that all the pasteurization treatments decreased the microorganism content in two logarithmic cycles. However, as values were lower than 1000 CFU mL-1 it was not possible to verify the efficacy of each treatment. Thus, MEF and US were considered as potential alternative technologies for pasteurization process, once in the right conditions the application of the technologies decreased microorganism content in the juice and did not affected phenolic and anthocyanin content, as well as physical-chemical parameters. However, more studies are needed regarding the influence of MEF and US processes on microorganisms’ inactivation.Keywords: MEF, microorganism inactivation, anthocyanin, phenolic compounds
Procedia PDF Downloads 242607 Safety and Maternal Anxiety in Mother's and Baby's Sleep: Cross-sectional Study
Authors: Rayanne Branco Dos Santos Lima, Lorena Pinheiro Barbosa, Kamila Ferreira Lima, Victor Manuel Tegoma Ruiz, Monyka Brito Lima Dos Santos, Maria Wendiane Gueiros Gaspar, Luzia Camila Coelho Ferreira, Leandro Cardozo Dos Santos Brito, Deyse Maria Alves Rocha
Abstract:
Introduction: The lack of regulation of the baby's sleep-wake pattern in the first years of life affects the health of thousands of women. Maternal sleep deprivation can trigger or aggravate psychosomatic problems such as depression, anxiety and stress, which can directly influence maternal security, with consequences for the baby's and the mother's sleep. Such conditions can affect the family's quality of life and child development. Objective: To correlate maternal security with maternal state anxiety scores and the mother's and baby's total sleep time. Method: Cross-sectional study carried out with 96 mothers of babies aged 10 to 24 months, accompanied by nursing professionals linked to a Federal University in Northeast Brazil. The study variables were maternal security, maternal state anxiety scores, infant sleep latency and sleep time, and total nocturnal sleep time of mother and infant. Maternal security was rated on a four-point Likert scale (1=not at all safe, 2=somewhat safe, 3=very safe, 4=completely safe). Maternal anxiety was measured with the State-Trait Anxiety Inventory state-anxiety subscale, whose scores vary from 20 to 80 points; the higher the score, the higher the anxiety level. Scores below 33 are considered mild; from 33 to 49, moderate; and above 49, high. For total nocturnal sleep time, values between 7 and 9 hours were considered adequate for mothers, and values between 9 and 12 hours for the baby, according to the guidelines of the National Sleep Foundation. For sleep latency, a time equal to or less than 20 min was considered adequate. It is noteworthy that the latency time and the nocturnal sleep time of the mother and the baby were obtained from the mother's subjective report. To correlate the data, Spearman's correlation was used in the statistical package R, version 3.6.3. Results: 96 women and babies participated, aged 22 to 38 years (mean 30.8) and 10 to 24 months (mean 14.7), respectively. The mean maternal security score was 2.89 (unsafe); the mean maternal state anxiety score was 43.75 (moderate anxiety). The babies' average sleep latency was 39.6 min (>20 min). The mean sleep times of the mother and baby were, respectively, 6 h 42 min and 8 h 19 min, both less than the recommended nocturnal sleep time. Maternal security was positively correlated with maternal state anxiety scores (rh = 0.266, p = 0.009) and negatively correlated with infant sleep latency (rh = -0.30, p = 0.003). Baby sleep time was positively correlated with maternal sleep time (rh = 0.46, p < 0.001). Conclusion: The more secure the mothers considered themselves, the higher the anxiety scores and the shorter the baby's sleep latency. Also, the longer the baby slept, the longer the mother slept. Thus, interventions are needed to promote the quality and efficiency of sleep for both mother and baby.Keywords: sleep, anxiety, infant, mother-child relations
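The correlation analysis described above was run in R 3.6.3; an equivalent sketch in Python, with randomly generated placeholder data and hypothetical column names standing in for the actual dataset, could look like this:

```python
# Minimal sketch of the Spearman correlation analysis described above.
# The data below are placeholders, not the study's measurements.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 96  # number of mother-baby pairs in the study

df = pd.DataFrame({
    "maternal_security": rng.integers(1, 5, n),       # 4-point Likert scale (1-4)
    "state_anxiety": rng.integers(20, 81, n),          # STAI state subscale (20-80)
    "baby_sleep_latency_min": rng.uniform(5, 90, n),
    "baby_sleep_h": rng.uniform(6, 12, n),
    "mother_sleep_h": rng.uniform(4, 9, n),
})

pairs = [
    ("maternal_security", "state_anxiety"),
    ("maternal_security", "baby_sleep_latency_min"),
    ("baby_sleep_h", "mother_sleep_h"),
]
for x, y in pairs:
    rho, p = spearmanr(df[x], df[y])
    print(f"{x} vs {y}: rho = {rho:.2f}, p = {p:.3f}")
```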
Procedia PDF Downloads 104606 (De)Motivating Mitigation Behavior: An Exploratory Framing Study Applied to Sustainable Food Consumption
Authors: Youval Aberman, Jason E. Plaks
Abstract:
This research provides initial evidence that the self-efficacy of mitigation behavior, the belief that one's actions can make a difference to the environment, can be implicitly inferred from the way numerical information is presented in environmental messages. The scientific community sees climate change as a pressing issue, but the general public tends to construe climate change as an abstract phenomenon that is psychologically distant. As such, a main barrier to pro-environmental behavior is that individuals often believe that their own behavior makes little to no difference to the environment. When it comes to communicating how the behavior of billions of individuals affects global climate change, it might appear valuable to aggregate those billions and present the shocking enormity of the resources individuals consume. This research provides initial evidence that, in fact, this strategy is ineffective; presenting large-scale aggregate data dilutes the contribution of the individual and impedes individuals' motivation to act pro-environmentally. The high-impact, underrepresented behavior of eating a sustainable diet was chosen for the present studies. US participants (total N = 668) were recruited online for a study on 'meat and the environment' and received information about some of the resources used in meat production (water, CO2e, and feed) with numerical information that varied in its frame of reference. A 'Nation' frame of reference discussed the resources used by the beef industry, such as the billions of CO2e released daily by the industry, while a 'Meal' frame of reference presented the resources used in the production of a single beef dish. Participants completed measures of pro-environmental attitudes and behavioral intentions, either immediately (Study 1) or two days (Study 2) after reading the information. In Study 2 (n = 520), participants also indicated whether they had consumed less or more meat than usual. Study 2 included an additional control condition that contained no environmental data. In Study 1, participants who read about meat production at the national level, compared to the meal level, reported lower motivation to make ecologically conscious dietary choices and reported lower behavioral intention to change their diet. In Study 2, a similar pattern emerged, with the added insight that the Nation condition, but not the Meal condition, deviated from the control condition. Participants across conditions, on average, reduced their meat consumption over the course of Study 2, except those in the Nation condition, whose consumption remained unchanged. Presenting nationwide consequences of human behavior is a double-edged sword: framing at a large scale might reveal the relationship between collective actions and environmental issues, but it hinders the belief that individual actions make a difference.Keywords: climate change communication, environmental concern, meat consumption, motivation
Procedia PDF Downloads 159605 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies
Authors: Roberta Martino, Viviana Ventre
Abstract:
Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, calculated by multiplying the cardinal utility of the outcome, as if its reception were instantaneous, by a discount function that reduces the utility value according to how far the actual reception of the outcome lies from the moment the choice is made. Initially, the discount function was assumed to be exponential, with a discount rate that is constant over time, in line with the profile of the rational investor described by classical economics. Instead, empirical evidence called for the formulation of alternative, hyperbolic models that better represent the actual actions of investors. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult and information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of alternatives, and the collection and processing of information, are conditioned by systematic distortions of the decision-making process, namely the behavioral biases involving the individual's emotional and cognitive systems. In this paper, we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting. The curve can be decomposed into two components: the first component is responsible for the decrease in the value of the outcome as the delay increases and is related to the individual's impatience; the second component relates to the change in the direction of the tangent vector to the curve and indicates how strongly the individual perceives the indeterminacy of the future, that is, his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it allows the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and the amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty
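For readers less familiar with the two discounting families contrasted above, the standard textbook forms are given below; these are not the authors' specific decomposition, which the abstract does not reproduce.

```latex
% Discounted Utility: value of outcome x received after delay t
\[
V(x,t) = u(x)\, D(t)
\]
% Exponential discounting (classical rationality): constant discount rate k
\[
D_{\mathrm{exp}}(t) = e^{-kt}, \qquad k > 0
\]
% Hyperbolic discounting (empirically observed): the implied discount rate declines with delay
\[
D_{\mathrm{hyp}}(t) = \frac{1}{1 + kt}, \qquad k > 0
\]
```

The decomposition proposed in the paper then separates the hyperbolic curve into a component governing how quickly utility declines with delay (impatience) and a component tied to the changing direction of the tangent vector, i.e., the curvature (aversion to uncertainty).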
Procedia PDF Downloads 130604 Ambient Factors in the Perception of Crowding in Public Transport
Authors: John Zacharias, Bin Wang
Abstract:
Travel comfort is increasingly seen as crucial to effecting the switch from private motorized modes to public transit. Surveys suggest that travel comfort is closely related to perceived crowding, which may involve a lack of available seating, difficulty entering and exiting, jostling and other physical contact with strangers. As found in studies on environmental stress, other factors may moderate perceptions of crowding; in this case, we hypothesize that the ambient environment may play a significant role. Travel comfort was measured by applying a structured survey to randomly selected passengers (n=369) on 3 lines of the Beijing metro on workdays. Respondents were standing, with all seats occupied and with car occupancy at 14 levels. A second research assistant filmed the metro car while passengers were interviewed, to obtain the total number of passengers. Metro lines 4, 6 and 10 were selected; they travel through the central city north-south, east-west and circumferentially. Respondents evaluated the following factors: crowding, noise, smell, air quality, temperature, illumination, vibration and perceived safety, as they experienced them at the time of interview, and were then asked to rank these 8 factors according to their importance for their travel comfort. Evaluations were semantic differentials on a 7-point scale from highly unsatisfactory (-3) to highly satisfactory (+3). The control variables included age, sex, annual income and trip purpose. Crowding was assessed most negatively, with 41% of the scores between -3 and -2. Noise and air quality were also assessed negatively, with two-thirds of the evaluations below 0. Illumination was assessed most positively, followed by perceived safety, vibration and temperature, all scoring at indifference (0) or slightly positive. Perception of crowding was linearly and positively related to the number of passengers in the car. Linear regression tested the impact of the ambient environmental factors on the perception of crowding. Noise intensity accounted for more of the perceived crowding than the actual number of individuals in the car, with smell also contributing. The other variables did not interact with the crowding variable, although their evaluations are distinct. In all, only one-third of the perception of crowding (R2=.154) is explained by the number of people, with the other ambient environmental variables accounting for two-thirds of the variance (R2=.316). However, when ranking the factors by their importance to travel comfort, perceived crowding made up 69% of the first rank, followed by noise at 11%. At rank 2, smell dominates (25%), followed by noise and air quality (17%). Commuting to work induced significantly lower evaluations of travel comfort, while shopping trips were evaluated most positively. Clearly, travel comfort is particularly important to commuters. Moreover, their perception of crowding while travelling on the metro is highly conditioned by the ambient environment in the metro car. Focusing attention on the ambient environmental conditions of the metro is an effective way to address travellers' primary concern with overcrowding. In general, the strongly held opinions on travel comfort deserve more attention in the effort to increase ridership of public transit.Keywords: ambient environment, mass rail transit, public transit, travel comfort
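To make the reported R2 values concrete, a sketch of the two regression models implied by the abstract is shown below; the data and variable names are hypothetical placeholders, since the survey data are not reproduced here.

```python
# Sketch of regressing perceived crowding on occupancy alone, then on occupancy
# plus ambient evaluations (hypothetical data; not the Beijing survey).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 369  # respondents in the study

df = pd.DataFrame({
    "crowding": rng.uniform(-3, 3, n),         # semantic differential, -3 to +3
    "n_passengers": rng.integers(20, 300, n),  # counted from the video recording
    "noise": rng.uniform(-3, 3, n),
    "smell": rng.uniform(-3, 3, n),
    "air_quality": rng.uniform(-3, 3, n),
})

# Model 1: perceived crowding explained by passenger count only
m1 = sm.OLS(df["crowding"], sm.add_constant(df[["n_passengers"]])).fit()

# Model 2: passenger count plus ambient environmental evaluations
m2 = sm.OLS(df["crowding"],
            sm.add_constant(df[["n_passengers", "noise", "smell", "air_quality"]])).fit()

print(f"R^2, occupancy only:              {m1.rsquared:.3f}")
print(f"R^2, occupancy + ambient factors: {m2.rsquared:.3f}")
```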
Procedia PDF Downloads 265603 Phage Display-Derived Vaccine Candidates for Control of Bovine Anaplasmosis
Authors: Itzel Amaro-Estrada, Eduardo Vergara-Rivera, Virginia Juarez-Flores, Mayra Cobaxin-Cardenas, Rosa Estela Quiroz, Jesus F. Preciado, Sergio Rodriguez-Camarillo
Abstract:
Bovine anaplasmosis is an infectious, tick-borne disease caused mainly by Anaplasma marginale; typical signs include anemia, fever, abortion, weight loss, decreased milk production, jaundice, and potentially death. Sick cattle can recover when antibiotics are administered; however, they usually remain carriers for life, posing a risk of infection to susceptible cattle. Anaplasma marginale is an obligate intracellular Gram-negative bacterium whose genetic composition is highly diverse among geographical isolates. There are currently no fully effective vaccines against bovine anaplasmosis; therefore, the economic losses due to the disease persist. Vaccine formulation has proven to be a hard task for several pathogens such as Anaplasma marginale, but peptide-based vaccines are an interesting way to induce specific responses. Phage-displayed peptide libraries have proved to be one of the most powerful technologies for identifying specific ligands. Screening these peptide libraries is also a tool for studying interactions between proteins or peptides. Thus, it has allowed the identification of ligands recognized by polyclonal antisera, and it has been successful in identifying relevant epitopes in chronic diseases and toxicological conditions. The protective immune response to bovine anaplasmosis includes high levels of immunoglobulin subclass G2 (IgG2) but not subclass IgG1. Therefore, IgG2 from the serum of protected cattle can be useful for identifying ligands, which can form part of an immunogen for cattle. In this work, the phage display random peptide library Ph.D.™-12 was incubated with IgG2 or blood sera of bovines immunized against A. marginale as targets. After three rounds of biopanning, several candidates were selected for additional analysis. Subsequently, their reactivity with sera from animals immunized against A. marginale, as well as with sera positive and negative for A. marginale, was evaluated by immunoassays. A collection of recognized peptides tested by ELISA was generated. More than three hundred phage-peptides were evaluated separately against the molecules used during panning. At least ten different peptide sequences were determined from their nucleotide composition. In this approach, three phage-peptides were selected for their binding and affinity properties. For the development of vaccines or diagnostic reagents, it is important to evaluate the immunogenic and antigenic properties of the peptides. The in vitro and in vivo immunogenic behavior of the peptides will be assayed both as synthetic peptides and as phage-peptides to determine their vaccine potential. Acknowledgment: This work was supported by grant SEP-CONACYT 252577 given to I. Amaro-Estrada.Keywords: bovine anaplasmosis, peptides, phage display, veterinary vaccines
Procedia PDF Downloads 143602 The Development of Modernist Chinese Architecture from the Perspective of Cultural Regionalism in Taiwan: Spatial Practice by the Fieldoffice Architects
Authors: Yilei Yu
Abstract:
Modernism, emerging in the Western world in the 20th century, attempted to create a universal international style, pulling the architectural and social systems created by classicism back to an initial pure state. However, out of introspection about Modernism, Regionalism attempted during the 1950s to restore a humanistic environment and create flexible buildings. Meanwhile, as the first generation of architects returned, the wind of Regionalism reached Taiwan. However, with increasing political influence and the tightening of free creative space, from the second half of the 1950s to the 1980s, the 'real' Regional Architecture, which should have taken root in Taiwan, became a 'fake' Regional Architecture filled with oriental charm. Through the comparative method, which includes description, interpretation, juxtaposition, and comparison, this study analyses the differences in the style of modernist Chinese architecture between the periods before and after the 1980s. The paper aims to explore the development of Regionalist architecture in Taiwan, in three parts. First, the burgeoning period of 'modernist Chinese architecture' in Taiwan coincided with the Chinese Nationalist Party's arrival in Taiwan to consolidate political power. The architecture of the 'Ming and Qing Dynasty Palace Revival Style' dominated Taiwanese architectural circles. These superficial 'regional buildings' had almost no connection with the local customs of Taiwan, making it difficult for them to evoke social identity. Second, in the late 1970s, the second generation of architects, headed by Baode Han, began focusing on the research and preservation of traditional Taiwanese architecture and on creating buildings that combined the terroir of Taiwan through the imitation of styles. However, some scholars have expressed regret that very few of the regionalist architectural works that appeared in the 1980s respond specifically to regional conditions and forms of construction; instead, most of them are vocabulary-led representations. Third, during the 1990s, with the end of the period of martial law, community building gradually emerged, which extended the objects of Taiwan's architectural concern to folk and ethnic groups. In the Yilan area, there are many architects who care about the local environment, such as the Fieldoffice Architects. Compared with the hollow regionality of the passionate national spirit that emerged during the martial law period, the local practice of the architects in Yilan better links the real local environment and everyday life and reflects a truer regionality. In conclusion, taking the local practice of the design team in the Yilan area as its case, this paper focuses on the spatial practice of the Fieldoffice Architects to explore the spatial representation and the practical enlightenment in the process of modernist Chinese architecture's development in Taiwan.Keywords: regionalism, modernism, Chinese architecture, political landscape, spatial representation
Procedia PDF Downloads 130601 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building code-related literature provides recommendations on normalized approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction materials, and configurations through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute the base shear. The typical parameter used in simplified code formulas for the fundamental period is the overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared with buildings made of a homogeneous material such as steel, or of concrete placed under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying numbers of bays, numbers of floors, overall building heights, and individual story heights. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis in which the computed period serves as the dependent variable, while five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions, owing to the level of concrete damage, aging, or quality control of materials during construction. Overall, the results of the present analysis show that simplified code formulas for the fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. The conclusion above is confirmed by the analytical model, in which fundamental periods were computed using numerical techniques and eigenvalue solutions. This recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging. However, we excluded this study from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
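For context, the simplified period expressions discussed above typically take a power-law form in the building height. The expressions below follow the 1997 UBC static force procedure as commonly cited; the exact coefficients and limits should be checked against the code edition in force.

```latex
% Generic simplified code form for the fundamental period
\[
T = C_t\, h_n^{\,x}
\]
% where h_n is the overall building height and C_t, x depend on the structural
% system and material. UBC Method A uses x = 3/4, e.g. for reinforced concrete
% moment frames (h_n in meters):
\[
T = C_t\, h_n^{3/4}, \qquad C_t \approx 0.0731
\]
% The design base shear is then obtained from the normalized response spectrum,
% schematically (subject to code-specified upper and lower bounds):
\[
V = \frac{C_v\, I}{R\, T}\, W
\]
```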
Procedia PDF Downloads 232600 Piled Critical Size Bone-Biomimetic and Biominerizable Nanocomposites: Formation of Bioreactor-Induced Stem Cell Gradients under Perfusion and Compression
Authors: W. Baumgartner, M. Welti, N. Hild, S. C. Hess, W. J. Stark, G. Meier Bürgisser, P. Giovanoli, J. Buschmann
Abstract:
Perfusion bioreactors are used in tissue engineering to solve problems of insufficient nutrient and oxygen supply. Such problems especially occur in critical size grafts, because vascularization after implantation is often too slow, ending in necrotic cores. Biominerizable and biocompatible nanocomposite materials are attractive and suitable scaffold materials for bone tissue engineering because they offer mineral components in organic carriers, mimicking natural bone tissue. In addition, human adipose derived stem cells (ASCs) can potentially be used to increase bone healing, as they are capable of differentiating towards osteoblasts or endothelial cells, among others. In the present study, electrospun nanocomposite disks of poly-lactic-co-glycolic acid and amorphous calcium phosphate nanoparticles (PLGA/a-CaP) were seeded with human ASCs, and eight disks were stacked in a bioreactor running with normal culture medium (no differentiation supplements). Under continuous perfusion and uniaxial cyclic compression, load-displacement curves were assessed as a function of time. Stiffness and energy dissipation were recorded. Moreover, stem cell densities in the layers of the piled scaffold were determined, as well as their morphologies and differentiation status (endothelial cell differentiation, chondrogenesis and osteogenesis). While the stiffness of the cell-free constructs increased over time, caused by the transformation of the a-CaP nanoparticles into flake-like apatite, ASC-seeded constructs showed a constant stiffness. Stem cell density gradients were determined histologically, with a linear increase in the flow direction from the bottom to the top of the 3.5 mm high pile (r2 > 0.95). Cell morphology was influenced by the flow rate, with stem cells becoming more roundish at higher flow rates. Less than 1 % osteogenesis was found upon osteopontin immunostaining at the end of the experiment (9 days), while no endothelial cell differentiation and no chondrogenesis were triggered under these conditions. All ASCs had mainly remained in their original multipotent status within this time frame. In summary, we have fabricated a critical size bone graft based on a biominerizable, bone-biomimetic nanocomposite with preserved stiffness when seeded with human ASCs. The special feature of this bone graft is that the ASC densities inside the piled construct varied with a linear gradient, which is a good starting point for engineering tissue interfaces such as bone-cartilage, where the bone tissue is cell-rich while the cartilage exhibits low cell densities. As such, this tissue-engineered graft may act as a bone-cartilage interface after the corresponding differentiation of the ASCs.Keywords: bioreactor, bone, cartilage, nanocomposite, stem cell gradient
Procedia PDF Downloads 308599 Urban Seismic Risk Reduction in Algeria: Adaptation and Application of the RADIUS Methodology
Authors: Mehdi Boukri, Mohammed Naboussi Farsi, Mounir Naili, Omar Amellal, Mohamed Belazougui, Ahmed Mebarki, Nabila Guessoum, Brahim Mezazigh, Mounir Ait-Belkacem, Nacim Yousfi, Mohamed Bouaoud, Ikram Boukal, Aboubakr Fettar, Asma Souki
Abstract:
The seismic risk to which urban centres are increasingly exposed has become a worldwide concern. International cooperation is necessary for exchanging information and experience in prevention and for establishing action plans in the countries prone to this phenomenon. To that end, the 1990s were designated the 'International Decade for Natural Disaster Reduction (IDNDR)' by the United Nations, whose interest was to promote the capacity to resist natural, industrial and environmental disasters. Within this framework, the RADIUS project (Risk Assessment Tools for Diagnosis of Urban Areas Against Seismic Disaster) was launched in 1996; its main objective is to mitigate seismic risk in developing countries through the development of a simple and fast methodological and operational approach that allows the evaluation of vulnerability as well as socio-economic losses under probable earthquake scenarios in the exposed urban areas. In this paper, we present the adaptation and application of this methodology to the Algerian context for seismic risk evaluation in urban areas potentially exposed to earthquakes. This application consists of performing an earthquake scenario in the urban centre of Constantine city, located in the North-East of Algeria, which allows the estimation of seismic damage to the buildings of this city. For that, an inventory of 30706 building units was carried out by the National Earthquake Engineering Research Centre (CGS). These buildings were digitized in a database comprising their technical information using a Geographic Information System (GIS), and they were then classified according to the RADIUS methodology. The study area was subdivided into 228 meshes of 500 m on a side and ten (10) sectors, each containing a group of meshes. The results of this earthquake scenario highlight that the ratio of likely damage is about 23%. This severe damage results from the high concentration of old buildings and unfavourable soil conditions. This simulation of the probable seismic damage to the buildings, together with the GIS damage maps generated, provides a predictive evaluation of the damage which could occur from a potential earthquake near Constantine city. These theoretical forecasts are important for decision makers in order to take adequate preventive measures and to develop suitable strategies and prevention and emergency management plans to reduce these losses. They can also help in taking adequate emergency measures in the most impacted areas in the early hours and days after an earthquake occurrence.Keywords: seismic risk, mitigation, RADIUS, urban areas, Algeria, earthquake scenario, Constantine
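A heavily simplified sketch of the mesh-level bookkeeping implied by this kind of scenario study is given below; the building classes and damage ratios are placeholders and do not reproduce the RADIUS vulnerability functions or the Constantine inventory.

```python
# Simplified sketch of aggregating expected damage over GIS meshes in a
# RADIUS-style earthquake scenario (placeholder classes and damage ratios).
from collections import defaultdict

# Hypothetical inventory entries: (mesh_id, building_class, number_of_buildings)
inventory = [
    (1, "old_masonry", 120), (1, "rc_frame", 40),
    (2, "old_masonry", 30),  (2, "rc_frame", 85),
]

# Placeholder mean damage ratio per building class for the scenario intensity.
damage_ratio = {"old_masonry": 0.45, "rc_frame": 0.12}

damaged = defaultdict(float)
total = defaultdict(int)
for mesh, cls, n in inventory:
    damaged[mesh] += n * damage_ratio[cls]
    total[mesh] += n

for mesh in sorted(total):
    print(f"mesh {mesh}: expected damage ratio = {damaged[mesh] / total[mesh]:.1%}")

print(f"study area: expected damage ratio = {sum(damaged.values()) / sum(total.values()):.1%}")
```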
Procedia PDF Downloads 262598 Precise Determination of the Residual Stress Gradient in Composite Laminates Using a Configurable Numerical-Experimental Coupling Based on the Incremental Hole Drilling Method
Authors: A. S. Ibrahim Mamane, S. Giljean, M.-J. Pac, G. L’Hostis
Abstract:
Fiber reinforced composite laminates are particularly subject to residual stresses due to their heterogeneity and the complex chemical, mechanical and thermal mechanisms that occur during their processing. Residual stresses are now well known to cause damage accumulation, shape instability, and behavior disturbance in composite parts. Many works exist in the literature on techniques for minimizing residual stresses, mainly in thermosetting and thermoplastic composites. To study in depth the influence of processing mechanisms on the formation of residual stresses, and to minimize them by establishing a reliable correlation, it is essential to be able to measure the residual stress profile in the composite very precisely. Residual stresses are important data to consider when sizing composite parts and predicting their behavior. The incremental hole drilling method is very effective in measuring the gradient of residual stresses in composite laminates. This method is semi-destructive and consists of incrementally drilling a hole through the thickness of the material and measuring the relaxation strains around the hole for each increment using three strain gauges. These strains are then converted into residual stresses using a matrix of coefficients. These coefficients, called calibration coefficients, depend on the diameter of the hole and the dimensions of the gauges used. The reliability of the incremental hole drilling method depends on the accuracy with which the calibration coefficients are determined. These coefficients are calculated using a finite element model. The samples' features and the experimental conditions must be considered in the simulation. Any mismatch can lead to inadequate calibration coefficients, thus introducing errors in the residual stresses. Several calibration coefficient correction methods exist for isotropic materials, but there is a lack of information on this subject concerning composite laminates. In this work, a Python program was developed to automatically generate the adequate finite element model. This model allowed us to perform a parametric study to assess the influence of experimental errors on the calibration coefficients. The results highlighted the sensitivity of the calibration coefficients to the considered errors and gave an order of magnitude of the precision required of the experimental device to obtain reliable measurements. On the basis of these results, improvements to the experimental device were proposed. Furthermore, a numerical method was proposed to correct the calibration coefficients for different types of materials, including thick composite parts for which the analytical approach is too complex. This method consists of taking the experimental errors into account in the simulation. Accurate measurement of the experimental errors (such as the eccentricity of the hole, the angular deviation of the gauges from their theoretical position, or errors in increment depth) is therefore necessary. The aim is to determine the residual stresses more precisely and to expand the validity domain of the incremental hole drilling technique.Keywords: fiber reinforced composites, finite element simulation, incremental hole drilling method, numerical correction of the calibration coefficients, residual stresses
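To illustrate how the calibration coefficients enter the calculation, a minimal sketch of the strain-to-stress conversion (integral-method style) is shown below; the coefficient values and strains are placeholders, since the actual matrices come from the finite element model described above.

```python
# Minimal sketch of converting relaxation strains to residual stresses with a
# calibration matrix (placeholder values; real coefficients come from the FE model).
import numpy as np

# Lower-triangular calibration matrix A: A[i, j] relates the strain measured after
# drilling increment i to the stress acting in increment j (j <= i).
A = np.array([
    [-1.20,  0.00,  0.00],
    [-1.50, -1.10,  0.00],
    [-1.65, -1.40, -1.00],
])  # placeholder units: microstrain per MPa

# Relaxation strains measured by one gauge after each of three increments (microstrain).
eps = np.array([-60.0, -130.0, -185.0])

# Solve A @ sigma = eps for the stress acting in each increment (one stress component).
sigma = np.linalg.solve(A, eps)
print("Residual stress per increment (MPa):", np.round(sigma, 1))
```

In the full method, the three gauges and the calibration matrices for the different stress components are combined, typically in a least-squares sense, and any error in the coefficients propagates directly into the computed stresses, which is why their accuracy is critical.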
Procedia PDF Downloads 132597 Functional Ingredients from Potato By-Products: Innovative Biocatalytic Processes
Authors: Salwa Karboune, Amanda Waglay
Abstract:
Recent studies indicate that health-promoting functional ingredients and nutraceuticals can help support and improve overall public health, which is timely given the aging of the population and the increasing cost of health care. The development of novel 'natural' functional ingredients is increasingly challenging. Biocatalysis offers powerful approaches to achieve this goal. Our recent research has been focusing on the development of innovative biocatalytic approaches for the isolation of protein isolates from potato by-products and the generation of peptides. Potato is a vegetable whose high-quality proteins are underestimated. In addition to their high proportion of essential amino acids, potato proteins possess angiotensin-converting enzyme-inhibitory potency, an ability to reduce plasma triglycerides associated with a reduced risk of atherosclerosis, and the capacity to stimulate the release of the appetite-regulating hormone CCK. Potato proteins have long been considered not economically feasible due to the low protein content (27% dry matter) found in the tuber (Solanum tuberosum). However, potatoes rank as the second largest protein-supplying crop grown per hectare, after wheat. Potato proteins include patatin (40-45 kDa), protease inhibitors (5-25 kDa), and various high-MW proteins. Non-destructive techniques for the extraction of proteins from potato pulp and for the generation of peptides are needed in order to minimize functional losses and enhance quality. A promising approach for isolating the potato proteins was developed, which involves the use of multi-enzymatic systems containing selected glycosyl hydrolase enzymes that work synergistically to open the plant cell wall network. This enzymatic approach is advantageous due to: (1) the use of milder reaction conditions, (2) the high selectivity and specificity of enzymes, (3) the low cost and (4) the ability to market natural ingredients. Another major benefit of this enzymatic approach is the elimination of a costly purification step; indeed, these multi-enzymatic systems have the ability to isolate proteins while fractionating them, owing to their specificity and selectivity and their minimal proteolytic activities. The isolated proteins were used for the enzymatic generation of active peptides. In addition, they were applied in a reduced-gluten cookie formulation, as consumers are placing high demand on convenient, ready-to-eat snack foods with high nutritional quality and little to no gluten. The addition of potato protein significantly improved the textural hardness of reduced-gluten cookies, making them more comparable to those made with wheat flour alone. The presentation will focus on our recent 'proof-of-principle' results illustrating the feasibility and the efficiency of new biocatalytic processes for the production of innovative functional food ingredients from potato by-products, whose potential health benefits are increasingly being recognized.Keywords: biocatalytic approaches, functional ingredients, potato proteins, peptides
Procedia PDF Downloads 380596 Diversity of Rhopalocera in Different Vegetation Types of PC Hills, Philippines
Authors: Sean E. Gregory P. Igano, Ranz Brendan D. Gabor, Baron Arthur M. Cabalona, Numeriano Amer E. Gutierrez
Abstract:
Distribution patterns and abundance of butterflies respond in the long term to variations in habitat quality. Studying butterfly populations would give evidence of how vegetation types influence their diversity. In this research, the Rhopalocera diversity of PC Hills was assessed to provide information on diversity trends in varying vegetation types. PC Hills, located in Palo, Leyte, Philippines, is a relatively undisturbed area with forests and rivers. Despite being situated near inhabited villages, the area is observed to have a potentially rich butterfly population. To assess Rhopalocera species richness and diversity, the transect sampling technique was applied to monitor and document butterflies. Transects were placed in locations that can be mapped, described and relocated easily. Three transects measuring three hundred meters each, with a 5-meter diameter, were established based on the different vegetation types present. The three main vegetation types identified were the agroecosystem (transect 1), dipterocarp forest (transect 2), and riparian (transect 3). Sample collections were done only from 9:00 A.M. to 3:00 P.M. under warm and bright weather, with no more than moderate winds and when it was not raining. When weather conditions did not permit collection, it was moved to another day. A GPS receiver was used to record the location of the selected sample sites and the coordinates at which each sample was collected. Morphological analysis was done in the first phase of the study to identify the voucher specimens to the lowest taxonomic level possible, using books on butterfly identification and species lists as references. In the second phase, DNA barcoding will be used to further identify the voucher specimens to the species level. After eight (8) sampling sessions, seven hundred forty-two (742) individuals were seen, and twenty-two (22) Rhopalocera genera were identified through morphological identification. The Nymphalidae genus Ypthima and the Pieridae genera Eurema and Leptosia were the most dominant taxa observed. Twenty (20) of the thirty-one (31) voucher specimens have already been identified to the species level using DNA barcoding. The Shannon-Wiener index showed that the highest diversity was observed in the third transect (H' = 2.947), followed by the second transect (H' = 2.6317), with the lowest in the first transect (H' = 1.767). This indicates that butterflies are more likely to inhabit dipterocarp and riparian vegetation types than agroecosystems, which influences their species composition and diversity. Moreover, the presence of a river in the riparian vegetation supported its diversity value, since butterflies tend to fly into areas near rivers. Species identification of the other voucher specimens will be done in order to compute the overall species richness in PC Hills. Further butterfly sampling sessions in PC Hills are recommended to obtain a more reliable diversity trend and to discover more butterfly species. Expanding the research by assessing Rhopalocera diversity in other locations should be considered, along with studying factors other than vegetation type that affect butterfly species composition.Keywords: distribution patterns, DNA barcoding, morphological analysis, Rhopalocera
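For reference, the Shannon-Wiener values quoted above are computed from the proportional abundance of each taxon within a transect; a minimal sketch with made-up counts (not the PC Hills data) is:

```python
# Minimal sketch of the Shannon-Wiener diversity index H' = -sum(p_i * ln p_i).
import math

def shannon_wiener(counts):
    """Compute H' from a list of per-taxon individual counts."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

# Hypothetical counts per genus in one transect (placeholder, not the survey data).
transect_counts = [40, 25, 18, 10, 7, 5, 3, 2]
print(f"H' = {shannon_wiener(transect_counts):.3f}")
```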
Procedia PDF Downloads 155