Search results for: open jet testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5837

347 Boost for Online Language Course through Peer Evaluation

Authors: Kirsi Korkealehto

Abstract:

The purpose of this research was to investigate how the peer evaluation concept was perceived by language teachers developing online language courses. The online language courses in question were developed in language teacher teams within the nationwide KiVAKO-project, funded by the Finnish Ministry of Education and Culture. The participants of the project were 86 language teachers from 26 higher education institutions in Finland. The KiVAKO-project aims to strengthen the language capital at higher education institutions by building a nationwide online language course offering on a shared platform. All higher education students can study the courses regardless of their home institutions. The project covers the following languages: Chinese, Estonian, Finnish Sign Language, French, German, Italian, Japanese, Korean, Portuguese, Russian, and Spanish on CEFR levels A1-C1. The courses were piloted in the autumn term of 2019, and an online peer evaluation session was organised for all participating teachers in spring 2020. The peer evaluation utilised the quality criteria for online implementation, which were developed earlier within the eAMK-project. The eAMK-project was also funded by the Finnish Ministry of Education and Culture, with the aim of improving higher education teachers' digital and pedagogical competences. In the online peer evaluation session, the teachers were divided into Zoom breakout rooms, in each of which two pilot courses were presented dialogically by their teachers. The other language teachers provided feedback on the courses on the basis of the quality criteria. Thereafter, good practices and ideas were gathered in an online document. Each breakout room was facilitated by one teacher, who was briefed and provided with a slide set prior to the online session. After the online peer evaluation sessions, the language teachers were asked to respond to an online feedback questionnaire.
The questionnaire included three multiple-choice questions using a Likert-scale rating and two open-ended questions. It was answered immediately after the sessions: the questionnaire link and its QR code were on the last slide, and responses were collected on site. The data comprise the questionnaire responses from the peer evaluation session and the researcher's observations during the sessions. The data were analysed with a qualitative content analysis method with the help of the Atlas.ti programme, while the Likert-scale answers provided quantitative results directly. The observations were used as complementary data to support the primary data. The findings indicate that work in the breakout rooms was successful and the workshops proceeded smoothly. The workshops were perceived as beneficial in terms of improving the piloted courses and developing the participants' own work as teachers. Further, the language teachers stated that the collegial discussions and idea sharing were fruitful. Suggested improvements were more time for free discussion and an opportunity to familiarize oneself with the quality criteria and the presented language courses beforehand. The quality criteria were considered to provide a suitable frame for self- and peer evaluation.

Keywords: higher education, language learning, online learning, peer-evaluation

Procedia PDF Downloads 106
346 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. 
In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
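The attractor-network picture described above can be made concrete with a minimal Hopfield-style sketch (an illustration under simplifying assumptions, not one of the specific models the literature surveys): stored patterns shape the weights via Hebbian learning, and remembering is settling into a stable equilibrium of the dynamics rather than retrieving a stored element.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: each +/-1 pattern becomes a minimum of the
    network's energy function, hence a stable attractor."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=20):
    """Iterate the update rule; the state descends the energy
    landscape until it settles into a stored equilibrium."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
w = train_hopfield(patterns)
cue = np.array([1, -1, 1, -1, 1, 1])  # corrupted version of pattern 0
print(recall(w, cue))  # settles back onto the first stored pattern
```

The point of the sketch is that no array element "holds" the memory: the pattern re-emerges from the dynamics, which is the kind of emergent stability the abstract contrasts with the stored arrays of memory networks.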

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 248
345 Evaluation of Feasibility of Ecological Sanitation in Central Nepal

Authors: K. C. Sharda

Abstract:

Introduction: Almost half of the world's population lacks proper access to improved sanitation services. In Nepal, a large number of people live without access to any sanitation facility. Ecological sanitation toilets, defined as water-conserving, nutrient-recycling systems that allow human urine and excreta to be used in agriculture, could do much to utilize locally available resources, regenerate soil fertility, save national currency, and achieve the goal of eliminating open defecation in countries like Nepal. The objectives of the research were to test the efficacy of human urine for improving crop performance and to evaluate the feasibility of ecological sanitation in a rural area of Central Nepal. Materials and Methods: The field investigation was carried out at Palung Village Development Committee (VDC) of Makawanpur District, Nepal, from March to August 2016. Five eco-san toilets were constructed in two villages (Angare and Bhot Khoriya), and a questionnaire survey was carried out. During the survey, respondents were asked about socio-economic parameters, farming practices, awareness of ecological sanitation, and the fertilizer value of human urine and excreta in agriculture. Prior to the field experiment, soil was sampled for analysis of its basic characteristics. In the field experiment, cauliflower was cultivated for a month at the two sites to compare the fertilizer value of urine with chemical fertilizer and no fertilizer, with three replications. The harvested plant samples were analyzed to determine the nutrient content of the plants under the different treatments. Results and Discussion: Eighty-three percent of respondents were engaged in agriculture, growing mainly vegetables, which may raise the feasibility of ecological sanitation. In the study area, water deficiency in the dry season, high demand for chemical fertilizer, and a lack of sanitation awareness were identified as problems to be solved.
The soil at Angare had a sandier texture and lower nitrogen content than that at Bhot Khoriya. While the field experiment at Angare showed that the aboveground biomass of cauliflower in the urine-fertilized plots was similar to that in the chemically fertilized plots and higher than that in the non-fertilized plots, no significant difference among the treatments was found at Bhot Khoriya. The more distinctive response of crop growth to the three treatments at the former site might be attributed to its poorer soil productivity, which in turn could be caused by the poorer inherent soil fertility and the poorer past management by the farmer at Angare. Thus, use of urine as fertilizer could help poor farmers with low-quality soil. The significantly different content of nitrogen and potassium in the plant samples among the three treatments at Bhot Khoriya would require further investigation. When urine is utilized as a fertilizer, productivity could be increased, and the money used to buy chemical fertilizer could be spent on other livelihood activities. Ecological sanitation is feasible in areas with similar socio-economic parameters.
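A treatment comparison like the one above (three fertilizer treatments, three replications each) is typically tested with a one-way ANOVA. The sketch below uses invented biomass figures, not the study's data:

```python
from scipy.stats import f_oneway

# Hypothetical aboveground biomass (g/plant), three replications per treatment
urine = [52, 48, 50]
chemical = [51, 49, 53]
no_fertilizer = [35, 33, 37]

# One-way ANOVA: is at least one treatment mean different from the others?
f_stat, p = f_oneway(urine, chemical, no_fertilizer)
print(f"F={f_stat:.1f}, p={p:.4f}")  # p < 0.05 -> significant treatment effect
```

With this invented data the urine and chemical treatments are indistinguishable while the unfertilized plots lag, mirroring the pattern reported for Angare.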

Keywords: cauliflower, chemical fertilizer, ecological sanitation, Nepal, urine

Procedia PDF Downloads 337
344 A Post-Occupancy Evaluation of the Impact of Indoor Environmental Quality on Health and Well-Being in Office Buildings

Authors: Suyeon Bae, Abimbola Asojo, Denise Guerin, Caren Martin

Abstract:

Post-occupancy evaluations (POEs) have been recognized for documenting occupant well-being and responses to indoor environmental quality (IEQ) factors such as thermal, lighting, and acoustic conditions. The Sustainable Post-Occupancy Evaluation Survey (SPOES), developed by an interdisciplinary team at a Midwest university, provides an evidence-based quantitative analysis of occupants' satisfaction in office, classroom, and residential spaces to help direct attention to successful areas and areas that need improvement in buildings. SPOES is a self-administered, Internet-based questionnaire completed by building occupants. In this study, employees in three different office buildings rated their satisfaction with 12 IEQ criteria, including thermal condition, indoor air quality, acoustic quality, daylighting, electric lighting, privacy, view conditions, furnishings, appearance, cleaning and maintenance, vibration and movement, and technology, on a Likert-type scale from 1 (very dissatisfied) to 7 (very satisfied). They also rated the influence of their physical environment on their perception of their work performance and the impact of their primary workspaces on their health on a scale from 1 (hinders) to 7 (enhances). Building A is a three-story building that includes private and group offices, classrooms, and conference rooms, amounting to 55,000 square feet of primary workspace (N=75). Building B, a six-story building, consists of private offices, shared enclosed offices, workstations, and open desk areas for employees and amounts to 14,193 square feet (N=75). Building C is a three-story, 56,000-square-foot building that includes classrooms, therapy rooms, an outdoor playground, a gym, restrooms, and training rooms for clinicians (N=76).
The results indicated that for Building A, 10 IEQs (all except acoustic quality and privacy) showed statistically significant correlations with the impact of the primary workspace on health. In Building B, 11 IEQs (all except technology) showed statistically significant correlations with the impact of the primary workspace on health. Building C had statistically significant correlations between all 12 IEQs and the employees' perception of the impact of their primary workspace on their health in two-tailed correlations (P ≤ 0.05). Out of 33 statistically significant correlations, 25 (76%) showed at least a moderate relationship (r ≥ 0.35). Across the three buildings, the daylighting, furnishings, and indoor air quality IEQs ranked highest in their impact on health. The IEQs for vibration and movement, view condition, and electric lighting ranked second, followed by the IEQs for cleaning and maintenance and appearance. These results imply that the 12 IEQs developed in SPOES are highly related to employees' perception of how their primary workplaces impact their health. The IEQs in this study offer an opportunity for improving occupants' well-being and the built environment.
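The analysis above rests on two-tailed Pearson correlations (P ≤ 0.05), with r ≥ 0.35 read as at least a moderate relationship. A minimal sketch of one such test follows; the Likert ratings are invented for illustration, not SPOES data:

```python
from scipy.stats import pearsonr

# Hypothetical Likert ratings: an IEQ item (e.g. daylighting satisfaction)
# against the perceived health impact of the workspace, both on 1-7 scales.
ieq = [6, 5, 7, 3, 4, 6, 2, 5, 7, 4]
health = [6, 4, 7, 3, 3, 5, 2, 4, 6, 5]

r, p = pearsonr(ieq, health)  # two-tailed p-value by default
print(f"r={r:.2f}, p={p:.4f}, at_least_moderate={r >= 0.35}")
```

In a full analysis this pairwise test would be repeated for each IEQ criterion in each building, giving the 33 significant correlations the abstract tallies.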

Keywords: post-occupancy evaluation, built environment, sustainability, well-being, indoor air quality

Procedia PDF Downloads 263
343 A Player's Perspective of University Elite Netball Programmes in South Africa

Authors: Wim Hollander, Petrus Louis Nolte

Abstract:

University sport in South Africa is not isolated from the complexity of globalization and professionalization of sport, as it forms an integral part of the sports development environment in South Africa. In order to align their sports programs with global and professional requirements, several universities opted to develop elite sports programs; recruit specialized personnel such as coaches, administrators, and athletes; and provide expert coaching; scientific and medical services; sports testing; fitness, technical, and tactical expertise; sport psychological and rehabilitation support; academic guidance and career assistance; and student-athlete accommodation. In addition, universities provide administrative support and high-quality physical resources (training facilities) for the benefit of the overall South African sport system. Although it is not compulsory for universities to develop elite sports programs, they prepare their teams for elite competitions such as the annual Varsity Sport and University Sport South Africa (USSA) competitions, local club competitions and leagues, and international university competitions, in which universities not only compete but also deliver players for representative national netball teams. The aim of this study is, therefore, to describe players' perceptions of the university elite netball programs in which they participated. This study adopted a descriptive design with a quantitative approach, utilizing a self-structured questionnaire as the research technique. As this research formed part of a national research project for Netball South Africa (NSA) with a population of 172 national and provincial netball players, a sample of 92 university netball players was selected from the population.
Content validity of the self-structured questionnaire was secured through a test-retest process, and construct validity through a member of the Statistical Consultation Services (STATCON) of the University of Johannesburg, who provided feedback on the structural format of the questionnaire. Reliability was measured utilizing Cronbach's alpha at the p < 0.005 level of significance; a reliability score of 0.87 was obtained. The research was approved by the Board of Netball South Africa, and ethical conduct was implemented according to the processes and procedures approved by the Ethics Committee of the Faculty of Health Sciences, University of Johannesburg, with clearance number REC-01-30-2019. From the results, it is evident that university elite netball programs are professional, especially with regard to the employment of knowledgeable and competent coaches and technical officials such as team managers and sports science staff. These professionals have access to elite training facilities, support staff, and relatively large groups of elite players, all elements of an elite program that could enhance the national federation's (Netball South Africa) system. Universities could serve the dual purpose of acting as university netball clubs as well as providing elite training services and facilities as performance hubs for national players.
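A reliability coefficient like the 0.87 reported above is computed from the raw item responses with the standard Cronbach's alpha formula. The sketch below shows the computation on invented Likert data, not the study's questionnaire:

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x questionnaire-items matrix.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses (rows: respondents, columns: items)
responses = [[4, 5, 4, 4],
             [3, 3, 4, 3],
             [5, 5, 5, 4],
             [2, 2, 3, 2],
             [4, 4, 4, 5]]
print(round(cronbach_alpha(responses), 2))  # -> 0.93
```

Values around 0.8-0.9, as in this study, are conventionally read as good internal consistency.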

Keywords: elite sport programmes, university netball, player experiences, varsity sport netball

Procedia PDF Downloads 146
342 Exploration of Barriers and Challenges to the Innovation Process for SMEs: Possibilities to Promote Cooperation Between Scientific and Business Institutions to Address Them

Authors: Indre Brazauskaite, Vilte Auruskeviciene

Abstract:

The significance of the study is outlined through current strategic management challenges faced by SMEs. First, innovation is recognized as a competitive advantage under ever-changing market conditions. It is of constant interest to both practitioners and academics to capture and capitalize on business opportunities or mitigate foreseen risks. Secondly, it is recognized that an integrated system is needed for proper implementation of the innovation process, especially during the period of business incubation, which is associated with relatively high risks of new product failure. Finally, the ability to successfully commercialize innovations leads to tangible business results that allow organizations to grow further. This is particularly relevant to SMEs due to their limited structures, resources, and capabilities. Cooperation between scientific and business institutions could be a tool of mutual interest to observe, address, and further develop innovations during the incubation period, which is the most demanding and challenging part of the innovation process. This paper aims to address the following problems: i) indicate the major barriers and challenges in the innovation process that SMEs are facing, and ii) outline the possibilities for these barriers and challenges to be addressed through cooperation between scientific and business institutions. The basis for this research is a stage-by-stage integrated innovation management process, which presents existing challenges and the aid needed in operational decision making. The stage-by-stage exploration of the innovation management process highlights relevant research opportunities with high practical relevance in the field. It is expected to reveal the possibility of business incubation programs that could combine interest from both practice and academia. Methodology: a meta-analysis of the scientific literature to date on the innovation process. The research model is built on a combination of the stage-gate model and the lean six sigma approach.
It outlines the following steps: i) pre-incubation (discovery and screening), ii) incubation (scoping, planning, development, and testing), and iii) post-incubation (launch and commercialization). Empirical quantitative research is conducted to identify the barriers and challenges in the innovation process that prevent SMEs' innovations from successful launch and commercialization, and to identify potential areas for cooperation between scientific and business institutions. The research sample, high-level decision makers representing trading SMEs, is approached with a structured survey based on the research model to investigate the challenges associated with each of the innovation management steps. Expected findings: first, the current business challenges in the innovation process are revealed, outlining the strengths and weaknesses of innovation management practices and systems across SMEs. Secondly, the study will provide material for relevant business case investigation to serve as future research directions for scholars and contribute to a better understanding of quality innovation management systems. Third, it will contribute to understanding the need for business incubation systems with mutual contributions from practice and academia, which can increase the relevance and adoption of business research.

Keywords: cooperation between scientific and business institutions, innovation barriers and challenges, innovation measure, innovation process, SMEs

Procedia PDF Downloads 129
341 Automated Prediction of HIV-associated Cervical Cancer Patients Using Data Mining Techniques for Survival Analysis

Authors: O. J. Akinsola, Yinan Zheng, Rose Anorlu, F. T. Ogunsola, Lifang Hou, Robert Leo-Murphy

Abstract:

Cervical cancer (CC) is the second most common cancer among women living in low- and middle-income countries, with no associated symptoms during its formative period. Despite advances in innovative medical research and numerous preventive measures, the incidence of cervical cancer cannot be curbed by the application of screening tests alone. The mortality associated with invasive cervical cancer can be nipped in the bud through early-stage detection. This study applied an array of top feature selection techniques, aimed at developing a model that could validly identify the risk factors of cervical cancer. A retrospective clinic-based cohort study was conducted on 178 HIV-associated cervical cancer patients at Lagos University Teaching Hospital, Nigeria (U54 data repository) in April 2022. The outcome measure was the automated prediction of HIV-associated cervical cancer cases, while the predictor variables included demographic information, reproductive history, birth control, sexual history, and cervical cancer screening history for invasive cervical cancer. The proposed technique was assessed with R and Python programming software to produce the model, utilizing classification algorithms for the detection and diagnosis of cervical cancer. Four machine learning classification algorithms were used: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). The dataset was split into training and testing sets in an 80:20 ratio; the numerical features were standardized, and hyperparameter tuning was carried out on the machine learning models during training and testing.
Fitting features for the detection and diagnosis of cervical cancer were selected from the characteristics in the dataset using various selection methods, for classifying cervical cancer status as healthy or diseased. The mean age of patients was 49.7±12.1 years, the mean age at pregnancy was 23.3±5.5 years, the mean age at first sexual experience was 19.4±3.2 years, and the mean BMI was 27.1±5.6 kg/m². A larger percentage of the patients were married (62.9%), while most of them had at least two sexual partners (72.5%). Age of patients (OR=1.065, p<0.001**), marital status (OR=0.375, p=0.011**), number of pregnancy live-births (OR=1.317, p=0.007**), and use of birth control pills (OR=0.291, p=0.015**) were found to be significantly associated with HIV-associated cervical cancer. On the top ten features (variables) considered in the analysis, RF gave the best overall model performance, with an accuracy of 72.0%, precision of 84.6%, recall of 84.6%, and F1-score of 74.0%, while LR achieved an accuracy of 74.0%, precision of 70.0%, recall of 70.0%, and F1-score of 70.0%. The RF model identified 10 features predictive of developing cervical cancer. The age of patients was the most important risk factor, followed by the number of pregnancy live-births, marital status, and use of birth control pills. The study shows that data mining techniques could be used to identify women living with HIV at high risk of developing cervical cancer in Nigeria and other sub-Saharan African countries.
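The evaluation pipeline described above (80:20 split, standardized numerical features, RF and LR compared on accuracy, precision, recall, and F1) can be sketched as follows. The clinical dataset is not public, so the sketch runs on synthetic data of the same sample size:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 178-patient, ten-feature dataset
X, y = make_classification(n_samples=178, n_features=10, random_state=0)

# 80:20 train/test split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("RF", RandomForestClassifier(random_state=0))]:
    model = make_pipeline(StandardScaler(), clf)  # standardize, then fit
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          round(accuracy_score(y_te, pred), 2),
          round(precision_score(y_te, pred), 2),
          round(recall_score(y_te, pred), 2),
          round(f1_score(y_te, pred), 2))
```

A full replication would add the DT and KNN models and a hyperparameter search (e.g. `GridSearchCV`) inside the loop; the metrics on synthetic data will not match the study's figures.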

Keywords: associated cervical cancer, data mining, random forest, logistic regression

Procedia PDF Downloads 62
340 Volatility Index, Fear Sentiment and Cross-Section of Stock Returns: Indian Evidence

Authors: Pratap Chandra Pati, Prabina Rajib, Parama Barai

Abstract:

Traditional finance theory neglects the role of sentiment in asset pricing. However, the behavioral approach to asset pricing, based on the noise trader model and limits to arbitrage, includes investor sentiment as a priced risk factor in the asset pricing model. Investor sentiment more strongly affects stocks that are vulnerable to speculation, hard to value, and risky to arbitrage: small stocks, high-volatility stocks, growth stocks, distressed stocks, young stocks, and non-dividend-paying stocks. Since its introduction in 1993, the Chicago Board Options Exchange (CBOE) volatility index (VIX) has been used as a measure of expected future volatility in the stock market and also as a measure of investor sentiment. The CBOE VIX index, in particular, is often referred to as the 'investors' fear gauge' by the public media and prior literature. Upward spikes in the volatility index are associated with bouts of market turmoil and uncertainty. High levels of the volatility index indicate fear, anxiety, and pessimistic expectations of investors about the stock market; conversely, low levels reflect a confident and optimistic attitude. Based on the above discussion, we investigate whether market-wide fear, as measured by the volatility index, is a priced factor in the standard asset pricing model for the Indian stock market. First, we investigate the performance and validity of the Fama-French three-factor model and the Carhart four-factor model in the Indian stock market. Second, we explore whether the India volatility index, as a proxy for a fear-based market sentiment indicator, affects the cross-section of stock returns after controlling for well-established risk factors such as market excess return, size, book-to-market, and momentum. Asset pricing tests are performed using monthly data on CNX 500 index constituent stocks listed on the National Stock Exchange of India Limited (NSE) over a sample period that extends from January 2008 to March 2017.
To examine whether the India volatility index, as an indicator of fear sentiment, is a priced risk factor, the change in India VIX is included as an explanatory variable in the Fama-French three-factor model as well as the Carhart four-factor model. For the empirical testing, we use three different sets of test portfolios as the dependent variable in the asset pricing regressions. The first portfolio set is a 4x4 sort on size and B/M ratio. The second portfolio set is a 4x4 sort on size and the sensitivity beta to changes in IVIX. The third portfolio set is a 2x3x2 independent triple sort on size, B/M, and the sensitivity beta to changes in IVIX. We find evidence that the size, value, and momentum factors continue to exist in the Indian stock market. However, the VIX index does not constitute a priced risk factor in the cross-section of returns. The inseparability of volatility and jump risk in the VIX is a possible explanation for the findings of the study.
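The augmented regression described above can be written as follows (the notation is ours, assumed rather than quoted from the paper):

```latex
R_{it} - R_{ft} = \alpha_i
  + \beta_i\,\mathrm{MKT}_t
  + s_i\,\mathrm{SMB}_t
  + h_i\,\mathrm{HML}_t
  + m_i\,\mathrm{WML}_t
  + v_i\,\Delta\mathrm{IVIX}_t
  + \varepsilon_{it}
```

where $R_{it}-R_{ft}$ is the excess return of test portfolio $i$; MKT, SMB, HML, and WML are the market, size, value, and momentum factors; and $\Delta\mathrm{IVIX}_t$ is the monthly change in the India VIX. A significant loading $v_i$ carrying a non-zero premium would indicate that fear sentiment is priced; the study's finding is that it is not.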

Keywords: India VIX, Fama-French model, Carhart four-factor model, asset pricing

Procedia PDF Downloads 228
339 Fabrication of All-Cellulose Composites from End-of-Life Textiles

Authors: Behnaz Baghaei, Mikael Skrifvars

Abstract:

Sustainability is today a trend that is seen everywhere, with no exception for the textile industry. However, there is a rather significant downside to how the textile industry currently operates, namely the huge amount of end-of-life textiles it generates. Approximately 73% of the 53 million tonnes of fibres used annually for textile production is landfilled or incinerated, while only 12% is recycled into secondary products. Mechanical recycling of end-of-life textile fabrics into yarns and fabrics was once very common, but due to the low cost of virgin man-made fibres, the diversity of current textile material compositions, variations in fibre quality, and high recycling costs, this route is no longer feasible. Another way to decrease the ever-growing pile of textile waste is to repurpose the textiles. If a feasible methodology can be found to reuse end-of-life textiles as secondary-market products, including a manufacturing process that requires rather low investment costs, this can be highly beneficial in counteracting the increasing textile waste volumes. In structural composites, glass fibre textiles are used as reinforcements, but today there is a growing interest in biocomposites where the reinforcement and/or the resin come from a biomass resource. All-cellulose composites (ACCs) are monocomponent or single-polymer composites made entirely from cellulose, ideally leading to a homogeneous biocomposite. Since the matrix and the reinforcement are both made from cellulose, and are therefore chemically identical, they are fully compatible with each other, which allows efficient stress transfer and adhesion at their interface. Apart from improving the mechanical performance of the final products, this facilitates the recycling of the composites. This paper reports the recycling of end-of-life cellulose-containing textiles by fabrication of all-cellulose composites (ACCs).
Composite laminates were prepared by using an ionic liquid (IL) in a hot process involving partial dissolution of the cellulose fibres. Discarded denim fabrics were used as the reinforcement, while dissolved cellulose from two different cellulose resources was used as the matrix phase: virgin cotton staple fibres and cotton recovered from polyester/cotton (polycotton) waste fabrics. The process comprised dissolving cellulose into a 6 wt.% solution in the ionic liquid 1-butyl-3-methylimidazolium acetate ([BMIM][Ac]); this solution acted as a precursor for the matrix component. The denim fabrics were embedded in the cellulose/IL solution, after which laminates were formed, which also involved removal of the IL by washing. The effect of reusing the recovered IL was also investigated. The mechanical properties of the obtained ACCs were determined with regard to tensile, impact, and flexural properties. Mechanical testing revealed no clear differences between the values measured for mechanical strength and modulus of the ACCs manufactured from denim/cotton-fresh IL, denim/recovered cotton-fresh IL, and denim/cotton-recycled IL. This could be due to the low weight fraction of the cellulose matrix in the final ACC laminates; presumably the denim, as the cellulose reinforcement, strongly influences and dominates the mechanical properties. The fabricated ACC laminates were further characterized by scanning electron microscopy.

Keywords: all-cellulose composites, denim fabrics, ionic liquid, mechanical properties

Procedia PDF Downloads 95
338 Electricity Market Reforms Towards Clean Energy Transition and Their Impact in India

Authors: Tarun Kumar Dalakoti, Debajyoti Majumder, Aditya Prasad Das, Samir Chandra Saxena

Abstract:

India’s ambitious target to achieve a 50 percent share of energy from non-fossil fuels and the 500-gigawatt (GW) renewable energy capacity before the deadline of 2030, coupled with the global pursuit of sustainable development, will compel the nation to embark on a rapid clean energy transition. As a result, electricity market reforms will emerge as critical policy instruments to facilitate this transition and achieve ambitious environmental targets. This paper will present a comprehensive analysis of the various electricity market reforms to be introduced in the Indian Electricity sector to facilitate the integration of clean energy sources and will assess their impact on the overall energy landscape. The first section of this paper will delve into the policy mechanisms to be introduced by the Government of India and the Central Electricity Regulatory Commission to promote clean energy deployment. These mechanisms include extensive provisions for the integration of renewables in the Indian Electricity Grid Code, 2023. The section will also cover the projection of RE Generation as highlighted in the National Electricity Plan, 2023. It will discuss the introduction of Green Energy Market segments, the waiver of Inter-State Transmission System (ISTS) charges for inter-state sale of solar and wind power, the notification of Promoting Renewable Energy through Green Energy Open Access Rules, and the bundling of conventional generating stations with renewable energy sources. The second section will evaluate the tangible impact of these electricity market reforms. By drawing on empirical studies and real-world case examples, the paper will assess the penetration rate of renewable energy sources in India’s electricity markets, the decline of conventional fuel-based generation, and the consequent reduction in carbon emissions. 
Furthermore, it will explore the influence of these reforms on electricity prices, the impact on various market segments due to the introduction of green contracts, and grid stability. The paper will also discuss the operational challenges to be faced due to the surge of RE Generation sources as a result of the implementation of the above-mentioned electricity market reforms, including grid integration issues, intermittency concerns with renewable energy sources, and the need for increasing grid resilience for future high RE in generation mix scenarios. In conclusion, this paper will emphasize that electricity market reforms will be pivotal in accelerating the global transition towards clean energy systems. It will underscore the importance of a holistic approach that combines effective policy design, robust regulatory frameworks, and active participation from market actors. Through a comprehensive examination of the impact of these reforms, the paper will shed light on the significance of India’s sustained commitment to a cleaner, more sustainable energy future.

Keywords: renewables, Indian electricity grid code, national electricity plan, green energy market

Procedia PDF Downloads 19
337 Adolescents’ Reports of Dating Abuse: Mothers’ Responses

Authors: Beverly Black

Abstract:

Background: Adolescent dating abuse (ADA) is widespread throughout the world and negatively impacts many adolescents. ADA is associated with lower self-esteem, poorer school performance, lower employment opportunities, higher rates of depression, absenteeism from school, substance abuse, bullying, smoking, suicide, pregnancy, eating disorders, risky sexual behaviors, and experiencing domestic violence later in life. ADA prevention is sometimes addressed through school programming; yet, parental responses to ADA can also be an important vehicle for its prevention. In this exploratory study, the author examined how mothers, including abused mothers, responded to scenarios of ADA involving their children. Methods: Six focus groups were conducted between December 2013 and June 2014 with mothers (n=31) in the southern part of the United States. Three of the focus groups consisted of mothers (n=17) who had been abused by their partners. Mothers were recruited from local community family agencies. Participants were presented with a series of four scenarios about ADA and asked to explain how they would respond. Focus groups lasted approximately 45 minutes. All participants were given a gift card to a major retailer as a ‘thank you’. Using QSR-N10, two researchers analyzed the focus group data, first using open and axial coding techniques to find overarching themes. Researchers triangulated the coded data to ensure accurate interpretations of the participants’ messages and used the scenario questions to structure the coded results. Results: Almost 30% of the 699 comments coded as mothers’ recommendations for responding to ADA focused on the importance of providing advice to their children. Advice included breaking up, going to the police, ignoring or avoiding the abusive partner, and setting boundaries in relationships. About 22% of comments focused on the need for educating teens about healthy and unhealthy relationships and seeking additional information. 
About 13% of the comments reflected the view that parents should confront the abuser and/or the abuser’s parents, and less than 2% noted the need to take their child to counseling. Mothers who had been abused offered responses similar to those of parents who had not experienced abuse. However, their responses were more likely to focus on sharing their own experiences and exercising caution in their responses, as they knew from their own experiences that authoritarian responses were ineffective. Over half of the comments indicated that parents would react more strongly, quickly, and angrily if a girl were being abused by a boy than vice versa; parents expressed greater fear for their daughters than for their sons involved in ADA. Conclusions: Results suggest that mothers have ideas about how to respond to ADA. Mothers who have been abused draw from their experiences and are aware that responding in an authoritarian manner may not be helpful. Because parental influence on teens is critical in their development, it is important for all parents to respond to ADA in a helpful manner to break the cycle of violence. Understanding responses to ADA can inform prevention programming to work with parents in responding to ADA.

Keywords: abused mothers' responses to dating abuse, adolescent dating abuse, mothers' responses to dating abuse, teen dating violence

Procedia PDF Downloads 200
336 Biomass Waste-To-Energy Technical Feasibility Analysis: A Case Study for Processing of Wood Waste in Malta

Authors: G. A. Asciak, C. Camilleri, A. Rizzo

Abstract:

Waste management in Malta is a national challenge. Coupled with Malta’s recent economic boom, which has seen massive growth in several sectors, especially the construction industry, drastic action needs to be taken. Wood waste, currently being dumped in landfills, is one type of waste which has increased astronomically. This research study aims to carry out a thorough examination of the possibility of using this waste as a biomass resource and adopting a waste-to-energy technology in order to generate electrical energy. This study is composed of three distinct yet interdependent phases, namely, data collection from local SMEs, thermal analysis using a bomb calorimeter, and generation of energy from wood waste using a micro biomass plant. Data collection from SMEs specializing in wood works was carried out to obtain information regarding the available types of wood waste and the annual weight of imported wood, and to analyse the manner in which wood shavings are used after wood is manufactured. This analysis showed that the five most common types of wood available in Malta which would be suitable for generating energy are Oak (hardwood), Beech (hardwood), Red Beech (softwood), African Walnut (softwood) and Iroko (hardwood). Subsequently, based on the information collected, a thermal analysis of the five most common types of wood was performed using a 6200 isoperibol calorimeter. This analysis was done so as to give a clear indication with regard to their burning potential, which is valuable when testing the wood in the biomass plant. The experiments carried out in this phase provided a clear indication that African Walnut generated the highest gross calorific value. This means that this type of wood released the highest amount of heat during combustion in the calorimeter. This is due to the high presence of extractives and lignin, which accounts for a slightly higher gross calorific value. It is followed by Red Beech and Oak. 
Moreover, based on the findings of the first phase, both African Walnut and Red Beech are highly imported in the Maltese Islands for various purposes. Oak, which has the third highest gross calorific value, is the most imported and commonly used wood. From the five types of wood, three were chosen for use in the power plant on the basis of their popularity and their heating values. The PP20 biomass plant was used to burn the three types of shavings in order to compare results related to the estimated feedstock consumed by the plant, the high temperatures generated, the time taken by the plant to reach gasification temperatures, and the projected electrical power attributed to each wood type. From the experiments, it emerged that all three types reached the required gasification temperature and are thus feasible for electrical energy generation; African Walnut was deemed the most suitable fast-burning fuel. It is followed by Red Beech and Oak, which required a longer period of time to reach the required gasification temperatures. The results obtained provide a clear indication that wood waste, rather than being dumped in landfills, can be treated and used as a resource for energy generation.
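The gross calorific values discussed above follow from the standard bomb-calorimeter energy balance. A minimal sketch, not the authors' exact procedure; the energy equivalent, temperature rise, and correction values used below are hypothetical placeholders:

```python
def gross_calorific_value(energy_equiv_j_per_k, delta_t_k, corrections_j, sample_mass_g):
    """Gross calorific value (J/g) of a fuel sample from an isoperibol
    calorimeter run: heat absorbed by the calorimeter (energy equivalent
    times observed temperature rise), minus thermochemical corrections
    (fuse wire, acid formation, etc.), divided by the sample mass."""
    return (energy_equiv_j_per_k * delta_t_k - corrections_j) / sample_mass_g

# Hypothetical run: 10 kJ/K calorimeter, 2.0 K rise, 500 J corrections, 1.0 g sample
gcv = gross_calorific_value(10000.0, 2.0, 500.0, 1.0)  # J/g
```

A result of roughly 19.5 kJ/g (19.5 MJ/kg) is in the typical range for dry wood, which is why a higher value for African Walnut indicates more heat released per gram burned.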

Keywords: biomass, isoperibol calorimeter, waste-to-energy technology, wood

Procedia PDF Downloads 215
335 i-Plastic: Surface and Water Column Microplastics From the Coastal North Eastern Atlantic (Portugal)

Authors: Beatriz Rebocho, Elisabete Valente, Carla Palma, Andreia Guilherme, Filipa Bessa, Paula Sobral

Abstract:

The global accumulation of plastic in the oceans is a growing problem. Plastic is transported from its source to the oceans via rivers, which are considered the main route for plastic particles from land-based sources to the ocean. These plastics undergo physical and chemical degradation, resulting in microplastics. The i-Plastic project aims to understand and predict the dispersion, accumulation and impacts of microplastics (5 mm to 1 µm) and nanoplastics (below 1 µm) in marine environments from the tropical and temperate land-ocean interface to the open ocean under distinct flow and climate regimes. Seasonal monitoring of the fluxes of microplastics was carried out in three coastal areas in Brazil, Portugal and Spain. The present work shows the first results of in-situ seasonal monitoring and mapping of microplastics in ocean waters between Ovar and Vieira de Leiria (Portugal), in which 43 surface water samples and 43 water column samples were collected in contrasting seasons (spring and autumn). The spring and autumn surface water samples were collected with a 300 µm and a 150 µm pore neuston net, respectively. In both campaigns, water column samples were collected using a conical mesh with a 150 µm pore. The experimental procedure comprises the following steps: i) sieving through a metal sieve; ii) digestion with potassium hydroxide to remove the organic matter originating from the sample matrix. After a filtration step, the content retained on a membrane is observed under a stereomicroscope, and physical and chemical characterization (type, color, size, and polymer composition) of the microparticles is performed. Results showed that 84% and 88% of the surface water and water column samples were contaminated with microplastics, respectively. Surface water samples collected during the spring campaign averaged 0.35 MP.m-3, while surface water samples collected during autumn recorded 0.39 MP.m-3. 
Water column samples from the spring campaign had an average of 1.46 MP.m-3, while those from the autumn recorded 2.54 MP.m-3. In spring, all microplastics found were fibers, predominantly black and blue. In autumn, the dominant particles found in the surface waters were fibers, while in the water column, fragments were dominant. In spring, the average size of surface water particles was 888 μm, while in the water column it was 1063 μm. In autumn, the average size of surface and water column microplastics was 1333 μm and 1393 μm, respectively. The main polymers identified by Attenuated Total Reflectance (ATR) and micro-ATR Fourier Transform Infrared (FTIR) spectroscopy in all samples were low-density polyethylene (LDPE), polypropylene (PP), polyethylene terephthalate (PET), and polyvinyl chloride (PVC). The significant difference in microplastic concentration in the water column between the two campaigns could be due to mixing of the water masses caused by a storm that occurred that week. This work presents preliminary results, as the i-Plastic project is still in progress. These results will contribute to the understanding of the spatial and temporal dispersion and accumulation of microplastics in this marine environment.

Keywords: microplastics, Portugal, Atlantic Ocean, water column, surface water

Procedia PDF Downloads 54
334 Previously Undescribed Cardiac Abnormalities in Two Unrelated Autistic Males with Causative Variants in CHD8

Authors: Mariia A. Parfenenko, Ilya S. Dantsev, Sergei V. Bochenkov, Natalia V. Vinogradova, Olga S. Groznova, Victoria Yu. Voinova

Abstract:

Introduction: Autism is the most common neurodevelopmental disorder. Autism is characterized by difficulties in social interaction and adherence to stereotypic behavioral patterns and frequently co-occurs with epilepsy, intellectual disabilities, connective tissue disorders, and other conditions. CHD8 codes for chromodomain-helicase-DNA-binding protein 8, a chromatin remodeler that regulates cellular proliferation and neurodevelopment in embryogenesis. CHD8 is one of the genes most frequently involved in autism. Patients and methods: Two unrelated male patients, P3 and P12, aged 3 and 12 years, underwent whole genome sequencing, which determined that they carried different likely pathogenic variants, both previously undescribed in the literature. Sanger sequencing later determined that P12 inherited the variant from his affected mother. Results: P3 and P12 presented with autism, developmental delay, ataxia, sleep disorders, overgrowth, and macrocephaly, as well as other clinical features typically present in patients with causative variants in CHD8. The mother of P12 also has autistic traits, as well as ataxia, hypotonia, sleep disorders, and other symptoms. However, P3 and P12 also have different cardiac abnormalities. P3 had signs of a repolarization disorder: a flattened T wave in the III and aVF derivations and a negative T wave in the V1-V2 derivations. He also had structural valve anomalies with associated regurgitation, local contractility impairment of the left ventricle, and diastolic dysfunction of the right ventricle. Meanwhile, P12 had Wolff-Parkinson-White syndrome and underwent radiofrequency ablation at the age of 2 years. At the time of observation, P12 had mild sinus arrhythmia and an incomplete right bundle branch block, as well as arterial hypertension. Discussion: Cardiac abnormalities were not previously reported in patients with causative variants in CHD8. 
The underlying mechanism for the formation of these abnormalities is currently unknown. Two hypotheses are proposed: a disordered interaction with CHD7, another chromodomain remodeler known to be directly involved in the cardiophenotype of CHARGE syndrome (a rare condition characterized by coloboma, heart defects and growth abnormalities), or the disrupted functioning of CHD8 as an A-Kinase Anchoring Protein, a class of proteins known to modulate cardiac function. Conclusion: We observed 2 unrelated autistic males with likely pathogenic variants in CHD8 who presented with typical symptoms of CHD8-related neurodevelopmental disorder, as well as cardiac abnormalities. Cardiac abnormalities have, until now, been considered uncharacteristic for patients with causative variants in CHD8. Further accumulation of data, including experimental evidence of the involvement of CHD8 in heart formation, will elucidate the mechanism underlying the cardiophenotype of these patients. Acknowledgements: Molecular genetic testing of the patients was made possible by the Charity Fund for medical and social genetic aid projects «Life Genome.»

Keywords: autism spectrum disorders, chromodomain-helicase-DNA-binding protein 8, neurodevelopmental disorder, cardio phenotype

Procedia PDF Downloads 67
333 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method

Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola

Abstract:

The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimation is used together with thermal models to predict battery temperature in operation and adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing its safety and correct operation. In the present work, a comparison between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used, simplified version of Bernardi’s equation for estimation is presented. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters that are used in a first-order lumped thermal model. These parameters are the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static (no current flowing through the cell) and dynamic (current flowing through the cell) tests are conducted in which the HFS is used to measure the heat exchanged between the cell and the ambient, so that the thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows the comparison between the generated heat predicted by Bernardi’s equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi’s equation total heat generation) and compared against experimental temperature data (measured with a T-type thermocouple). At the end of this work, a critical review of the results obtained and of the possible reasons for mismatch is reported. 
The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi’s simplified equation. On the one hand, when using Bernardi’s simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open circuit voltage calculation (as it is SoC dependent). On the other hand, when the heat generation is measured indirectly with the HFS, the resulting error is a maximum of 0.28 °C in the temperature prediction, in contrast with 1.38 °C for Bernardi’s simplified equation. This illustrates the limitations of Bernardi’s simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi’s equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi’s equation accounts for no losses after the charging or discharging current is cut. However, the HFS measurement shows that after the current is cut, the cell continues generating heat for some time, increasing the error of Bernardi’s equation.
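The comparison above can be sketched numerically. The following is a minimal illustration of the simplified Bernardi equation (irreversible heat only, Q = I·(V − OCV)) feeding a first-order lumped thermal model; the parameter values are hypothetical and do not correspond to the cell characterised in the study:

```python
def bernardi_heat(current_a, terminal_v, ocv_v):
    """Simplified Bernardi equation: irreversible heat generation only,
    Q = I * (V - OCV), in watts. The reversible (entropic) term
    I * T * dOCV/dT is neglected in this simplified form."""
    return current_a * (terminal_v - ocv_v)

def lumped_temperature_step(t_cell, q_gen_w, t_amb, r_th_k_per_w, c_th_j_per_k, dt_s):
    """One explicit Euler step of the first-order lumped thermal model:
    C * dT/dt = Q - (T - T_amb) / R."""
    dtdt = (q_gen_w - (t_cell - t_amb) / r_th_k_per_w) / c_th_j_per_k
    return t_cell + dtdt * dt_s

# Hypothetical operating point: 10 A charge, 3.8 V terminal, 3.7 V OCV
q = bernardi_heat(10.0, 3.8, 3.7)  # 1.0 W of irreversible heat
t_next = lumped_temperature_step(25.0, q, 25.0, r_th_k_per_w=5.0,
                                 c_th_j_per_k=100.0, dt_s=1.0)
```

Note that once the current is cut, this simplified equation predicts zero generation immediately, which is exactly the post-cutoff discrepancy with the HFS measurement described above.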

Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization

Procedia PDF Downloads 348
332 Development and Evaluation of Economical Self-cleaning Cement

Authors: Anil Saini, Jatinder Kumar Ratan

Abstract:

Nowadays, the key issue for the scientific community is to devise innovative technologies for the sustainable control of urban pollution. In cities, a large surface area of masonry structures, buildings, and pavements is exposed to the open environment and may be utilized for the control of air pollution if it is built from photocatalytically active cement-based construction materials such as concrete, mortars, paints, and blocks. Photocatalytically active cement is formulated by incorporating a photocatalyst in the cement matrix, and such cement is generally known as self-cleaning cement. In the literature, self-cleaning cement has been synthesized by incorporating nanosized TiO₂ (n-TiO₂) as a photocatalyst in the cement formulation. However, the utilization of n-TiO₂ for the formulation of self-cleaning cement has the drawbacks of nanotoxicity, higher cost, and agglomeration as far as commercial production and applications are concerned. The use of microsized TiO₂ (m-TiO₂) in place of n-TiO₂ for the commercial manufacture of self-cleaning cement could avoid these problems. However, m-TiO₂ is less photocatalytically active than n-TiO₂ due to its smaller surface area, higher band gap, and increased recombination rate. As such, the use of m-TiO₂ in the formulation of self-cleaning cement may lead to reduced photocatalytic activity, and thus weaker self-cleaning, depolluting, and antimicrobial abilities in the resultant cement material. Improving the photoactivity of m-TiO₂-based self-cleaning cement is therefore the key issue for its practical application. The current work proposes the use of surface-fluorinated m-TiO₂ in the formulation of self-cleaning cement to enhance its photocatalytic activity. 
Calcined dolomite, a construction material, has also been utilized as a co-adsorbent along with the surface-fluorinated m-TiO₂ in the formulation of the self-cleaning cement to enhance its photocatalytic performance. The surface-fluorinated m-TiO₂, the calcined dolomite, and the formulated self-cleaning cement were characterized using diffuse reflectance spectroscopy (DRS), X-ray diffraction analysis (XRD), field emission scanning electron microscopy (FE-SEM), energy dispersive X-ray spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), BET (Brunauer-Emmett-Teller) surface area analysis, and energy dispersive X-ray fluorescence spectrometry (EDXRF). The self-cleaning property of the as-prepared self-cleaning cement was evaluated using the methylene blue (MB) test. The depolluting ability of the formulated self-cleaning cement was assessed through a continuous NOx removal test. The antimicrobial activity of the self-cleaning cement was appraised using the zone-of-inhibition method. The as-prepared self-cleaning cement, obtained by uniform mixing of 87% clinker, 10% calcined dolomite, and 3% surface-fluorinated m-TiO₂, showed a remarkable self-cleaning property by providing 53.9% degradation of the coated MB dye. The self-cleaning cement also showed a noteworthy depolluting ability by removing 5.5% of NOx from the air. The inactivation of B. subtilis bacteria in the presence of light confirmed the significant antimicrobial property of the formulated self-cleaning cement. The self-cleaning, depolluting, and antimicrobial results are attributed to the synergetic effect of surface-fluorinated m-TiO₂ and calcined dolomite in the cement matrix. The present study opens a route for further research into the facile and economical formulation of self-cleaning cement.

Keywords: microsized-titanium dioxide (m-TiO₂), self-cleaning cement, photocatalysis, surface-fluorination

Procedia PDF Downloads 141
331 Carbon Nanotubes Functionalization via Ullmann-Type Reactions Yielding C-C, C-O and C-N Bonds

Authors: Anna Kolanowska, Anna Kuziel, Sławomir Boncel

Abstract:

Carbon nanotubes (CNTs) represent a combination of lightness and nanoscopic size with high tensile strength and excellent thermal and electrical conductivity. To date, CNTs have been used as a support in heterogeneous catalysis (CuCl anchored to pre-functionalized CNTs) in Ullmann-type coupling with aryl halides toward the formation of C-N and C-O bonds. The results indicated that the stability of the catalyst was much improved and that the elaborated catalytic system was efficient and recyclable. However, CNTs have not themselves been considered as the substrate in Ullmann-type reactions. If successful, such functionalization would open new areas of CNT chemistry, leading to enhanced in-solvent/matrix nanotube individualization. The copper-catalyzed Ullmann-type reaction is an attractive method for the formation of carbon-heteroatom and carbon-carbon bonds in organic synthesis. This condensation reaction is usually conducted at temperatures as high as 200 °C, often in the presence of stoichiometric amounts of the copper reagent and with activated aryl halides. However, a small amount of an organic additive (e.g. diamines, amino acids, diols, 1,10-phenanthroline) can be applied in order to increase the solubility and stability of the copper catalyst and, at the same time, to allow the reaction to be performed under mild conditions. The copper (pre-)catalyst is prepared by in situ mixing of a copper salt and an appropriate chelator. Our research is focused on the application of the Ullmann-type reaction for the covalent functionalization of CNTs. First, CNTs were chlorinated using iodine trichloride (ICl3) in carbon tetrachloride (CCl4). This method involves the formation of several chemical species (ICl, Cl2 and I2Cl6), of which the dimer (I2Cl6) is the most reactive. Since the dimer is the main species present in CCl4, high reactivity and possibly high functionalization levels of CNTs can be expected. Indeed, this method introduced a notable amount of chlorine onto the MWCNT surface. 
The next step was the reaction of CNT-Cl with three substrates, aniline, iodobenzene and phenol, for the formation of C-N, C-C and C-O bonds, respectively, in the presence of 1,10-phenanthroline and cesium carbonate (Cs2CO3) as a base. As the CNT substrates, two multi-wall CNT (MWCNT) types were used: commercially available Nanocyl NC7000™ (9.6 nm diameter, 1.5 µm length, 90% purity) and thicker MWCNTs synthesized in-house in our laboratory using catalytic chemical vapour deposition (c-CVD). The in-house CNTs had diameters ranging between 60-70 nm and lengths up to 300 µm. Since the classical Ullmann reaction suffered from poor yields, we investigated the effect of various solvents (toluene, acetonitrile, dimethyl sulfoxide and N,N-dimethylformamide) on the coupling of the substrates. Because aryl halides show the reactivity order I>Br>Cl>F, we also investigated the effect of iodine on the CNT surface on the reaction yield. In this case, in the first step we used iodine monochloride instead of iodine trichloride. Finally, we used the optimized reaction conditions with p-bromophenol and 1,2,4-trihydroxybenzene for the control of CNT dispersion.

Keywords: carbon nanotubes, coupling reaction, functionalization, Ullmann reaction

Procedia PDF Downloads 144
330 Right Atrial Tissue Morphology in Acquired Heart Diseases

Authors: Edite Kulmane, Mara Pilmane, Romans Lacis

Abstract:

Introduction: Acquired heart diseases remain one of the leading health care problems in the world. Changes in the myocardium of diseased hearts are complex, and their pathogenesis is still not fully clear. The aim of this study was to identify the appearance and distribution of apoptosis, homeostasis-regulating factors, and innervation and ischemia markers in right atrial tissue in different acquired heart diseases. Methods: Right atrial tissue fragments were taken from 12 patients during elective open heart surgery. All patients were operated on because of acquired heart diseases: aortic valve stenosis (5 patients), coronary heart disease (5 patients), coronary heart disease with secondary mitral insufficiency (1 patient) and mitral disease (1 patient). The mean age (mean±SD) was 70.2±7.0 years (range 58-83 years). The tissues were stained with haematoxylin and eosin for routine light-microscopic examination and immunohistochemically for detection of protein gene peptide 9.5 (PGP 9.5), human atrial natriuretic peptide (hANUP), vascular endothelial growth factor (VEGF), chromogranin A and endothelin. Apoptosis was detected by the TUNEL method. Results: All specimens showed degeneration of cardiomyocytes with lysis of myofibrils, diffuse vacuolization, especially in the perinuclear region, and variation in the size of cells and their nuclei. Severe invasion of connective tissue was observed in most fragments. The apoptotic index ranged from 24 to 91%. One specimen showed a region of newly formed microvessels with cube-shaped endotheliocytes that were positive for PGP 9.5, endothelin, chromogranin A and VEGF. In all fragments taken from patients with coronary heart disease, numerous PGP 9.5-containing nerve fibres were observed, except in the patient with secondary mitral insufficiency, who showed just a few PGP 9.5-positive nerves. In the majority of specimens, regions with cube-shaped VEGF-immunoreactive endocardial and epicardial cells were observed. 
VEGF-positive endothelial cells were observed in only a few specimens. There was no significant difference in hANUP-secreting cells among the specimens. In all patients operated on for coronary heart disease, moderate to numerous chromogranin A-positive cells were seen, while tissue from patients with aortic valve stenosis demonstrated just a few positive cells. Conclusions: Complex detection of different factors may selectively indicate disordered morphopathogenetic events of heart disease: a decrease in PGP 9.5 nerves suggests decreased innervation of the organ; increased apoptosis indicates cell death without ingrowth of connective tissue; the persistent presence of hANUP indicates unchanged homeostasis of cardiomyocytes, probably supported by the expression of chromogranins. Finally, a decrease in VEGF marks regions of affected blood vessels in hearts with acquired heart disease.

Keywords: heart, apoptosis, protein-gene peptide 9.5, atrial natriuretic peptide, vascular endothelial growth factor, chromogranin A, endothelin

Procedia PDF Downloads 272
329 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to a variety of factors which reflect the individual, firm, organizational, industry or environmental determinants of growth. However, factors that affect growth are not easily captured, instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the purpose of this paper is threefold. Firstly, to compare the structure and performance of three growth prediction models based on the main growth measures: revenues, employment and assets growth. Secondly, to explore the prospects of financial indicators, as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth. Finally, to contribute to the understanding of the implications on research results and recommendations for growth caused by different growth measures. The models include a range of financial indicators as lag determinants of the enterprises’ performances during the 2008-2013 period, extracted from the national register of the financial statements of SMEs in Croatia. 
The design and testing stage of the modeling used logistic regression procedures. Findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and growth measures is inconsistent; namely, the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power in the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but are, unlike them, accessible, available, exact and free of perceptual nuances in building up the model. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and the implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
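The logistic regression step can be sketched in a few lines. This is a generic gradient-descent implementation on a toy dataset, not the authors' model or data; the single feature below is a hypothetical stand-in for the financial indicators used as lag determinants:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient-descent logistic regression:
    P(growth = 1 | x) = sigmoid(w . x + b)."""
    n_feat = len(X[0])
    w, b = [0.0] * n_feat, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi  # gradient of the log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Predicted probability that the enterprise belongs to the growth class."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: one standardized financial indicator, binary growth label
X = [[0.0], [0.1], [0.9], [1.0]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
```

The paper's point about measure-dependent predictors corresponds to fitting three such models, one per growth measure (revenues, employment, assets), and comparing the selected coefficients.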

Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises

Procedia PDF Downloads 232
328 Development of an Automatic Control System for ex vivo Heart Perfusion

Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala

Abstract:

Ex vivo Heart Perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling the resuscitation and functional assessment of hearts donated from marginal donors, which were previously not accepted. EVHP parameters, such as perfusion flow (PF) and perfusion pressure (PP), are crucial for optimal organ preservation. However, with the heart's constant physiological changes during EVHP, such as changes in coronary vascular resistance, manual control of these parameters is imprecise and cumbersome for the operator. Additionally, low control precision and long adjustment times may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a Human-Machine Interface (HMI) and a Programmable-Logic-Controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects PF and PP data through flow probes and pressure transducers. It has two control modes: the RPM-flow mode and the pressure mode. The RPM-flow control mode is an open-loop system: it sets PF by providing and maintaining the centrifugal pump speed entered through the HMI, with a maximum error of 20 rpm. The pressure control mode is a closed-loop system in which the operator selects a target Mean Arterial Pressure (MAP) to control PP. The inputs of the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer. A PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1 mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000 and random RPM changes; 8 trials per range) tested in a random order. The PID algorithm's performance in pressure control was assessed during 10 EVHP experiments using porcine hearts.
Precision was examined by monitoring the steady-state pressure error throughout the perfusion period, and stabilizing speed was tested by performing two MAP adjustment changes (4 trials per change) of 15 and 20 mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system was significantly faster than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and reached the desired RPM with greater precision (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system achieved a median precision of ±1 mmHg error, and the median stabilizing times for 15 and 20 mmHg MAP changes were 15 and 19.5 seconds, respectively. The novel PLC-based control system was 3 times faster, with 60% less error, than the EO for RPM-flow control. In pressure control mode, it demonstrated high precision and a fast stabilizing speed. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability and a fast response time through a user-friendly interface. This design may provide a viable technique for the future development of novel heart preservation and assessment strategies during EVHP.
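The closed-loop pressure mode can be illustrated with a minimal discrete PID sketch. The gains, time step and first-order "perfusion" plant below are illustrative assumptions, not the paper's implementation or tuning:

```python
# Hedged sketch of PID pressure control: a discrete PID loop drives a toy
# first-order plant toward a target MAP. Gains and plant dynamics are invented.

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """One PID update; state carries the integral and the previous error."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

target_map = 60.0          # target mean arterial pressure in mmHg (assumed)
map_now = 40.0             # starting pressure (assumed)
state = (0.0, 0.0)
dt = 0.1
for _ in range(300):       # 30 simulated seconds
    error = target_map - map_now
    pump_cmd, state = pid_step(error, state, dt=dt)
    # toy plant: pressure relaxes toward the pump command
    map_now += dt * (pump_cmd - 0.5 * map_now)

print(f"final MAP: {map_now:.1f} mmHg")
```

The integral term is what removes the steady-state error, which is why a PI(D) scheme, rather than proportional control alone, can hold MAP within a tight band around the target.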

Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller

Procedia PDF Downloads 148
327 Assessment of Neurodevelopmental Needs in Duchenne Muscular Dystrophy

Authors: Mathula Thangarajh

Abstract:

Duchenne muscular dystrophy (DMD) is a severe form of X-linked muscular dystrophy caused by mutations in the dystrophin gene, resulting in progressive skeletal muscle weakness. Boys with DMD also have significant cognitive disabilities. The intelligence quotient of boys with DMD is approximately one standard deviation below that of their peers. Detailed neuropsychological testing has demonstrated that boys with DMD have a global developmental impairment, with verbal memory and visuospatial skills most significantly affected. Furthermore, total brain volume and gray matter volume are lower in children with DMD than in age-matched controls. These results are suggestive of a significant structural and functional compromise of the developing brain as a result of absent dystrophin protein expression. There is also some genetic evidence to suggest that mutations in the 3’ end of the DMD gene are associated with more severe neurocognitive problems. Our working hypotheses are that (i) boys with DMD do not make gains in neurodevelopmental skills compared to typically developing children and (ii) women carriers of DMD mutations may have subclinical cognitive deficits. We also hypothesize that there may be an intergenerational vulnerability of cognition, with boys of DMD-carrier mothers being more affected cognitively than boys of non-carrier mothers. The objectives of this study are: 1. Assess neurodevelopment in boys with DMD at four time points and perform a baseline neuroradiological assessment; 2. Assess cognition in the biological mothers of DMD participants at baseline; 3. Assess possible correlations between DMD mutations and cognitive measures. This study also explores functional brain abnormalities in people with DMD by examining how regional and global brain connectivity underlies executive function deficits in DMD.
Such research can contribute to a better holistic understanding of the cognitive alterations due to DMD and could potentially allow clinicians to create better-tailored treatment plans for the DMD population. There are four study visits for each participant (baseline, 2-4 weeks, 1 year, 18 months). At each visit, the participant completes the NIH Toolbox Cognition Battery, a validated psychometric measure recommended by the NIH Common Data Elements for use in DMD. Visits 1, 3, and 4 also involve the administration of the BRIEF-2, ABAS-3, PROMIS/NeuroQoL, PedsQL Neuromuscular Module 3.0, and Draw a Clock Test, as well as an optional fMRI scan with an N-back matching task. We expect to enroll 52 children with DMD, 52 mothers of children with DMD, and 30 healthy control boys. This study began in 2020, during the height of the COVID-19 pandemic, and there were subsequent delays in recruitment because of travel restrictions. However, we have persevered and continued to recruit new participants. We partnered with the Muscular Dystrophy Association (MDA), which helped advertise the study to interested families; since then, families from across the country have contacted us about their interest in the study. We plan to continue to enroll a diverse population of DMD participants to contribute toward a better understanding of Duchenne muscular dystrophy.

Keywords: neurology, Duchenne muscular dystrophy, muscular dystrophy, cognition, neurodevelopment, x-linked disorder, DMD, DMD gene

Procedia PDF Downloads 74
326 Mobile Genetic Elements in Trematode Himasthla Elongata Clonal Polymorphism

Authors: Anna Solovyeva, Ivan Levakin, Nickolai Galaktionov, Olga Podgornaya

Abstract:

For a long time, animals that reproduce asexually were thought to have identical genotypes across generations. However, refuting examples have been found, and mobile genetic elements (MGEs), or transposons, are considered the most probable source of this genetic instability. Their dispersed nature and ability to change genomic localization make MGEs efficient mutators. Hence, studying the genomic impact of MGEs requires an appropriate object, one that both harbors representative amounts of various MGEs and offers options to evaluate their genomic influence. Asexually reproducing animals seem to be a suitable model for studying the impact of MGEs on genomic variability. We found the small marine trematode Himasthla elongata (Himasthlidae) to be a good model for such investigation, as it has a small genome, diverse MGEs and parthenogenetic stages in its lifecycle. In the current work, the clonal diversity of cercariae was traced with the AFLP (Amplified Fragment Length Polymorphism) method; distinct zones of the electrophoretic patterns were cloned, and the nature of the fragments was explored. Polymorphic patterns of individual cercariae's AFLP-based fingerprints are enriched with retrotransposons of different families. The bulk of these sequences are represented by open reading frames of non-Long-Terminal-Repeat (non-LTR) elements, while Long-Terminal-Repeat (LTR) elements occur to a lesser extent in the variable fragments of the AFLP array. CR1 elements, present in both polymorphic and conserved patterns, are remarkably more frequent than the other non-LTR retrotransposons. These data were confirmed by shotgun sequencing on the Illumina HiSeq 2500 platform. Individual cercariae of the same clone (i.e., originating from a single miracidium and inhabiting one host) show varying distributions of the MGE families detected in the sequenced AFLP patterns. The most numerous are CR1 and RTE-Bov retrotransposons, typical of trematode genomes.
We also identified LTR retrotransposons of the Pao and Gypsy families, along with DNA transposons of the CMC-EnSpm, Tc1/Mariner, MuLE-MuDR and Merlin families, many of which we detected in the H. elongata transcriptome. Such an uneven distribution of MGEs across the AFLP sequence sets reflects the different patterns of transposon spreading in cercarial genomes, as transposons affect the genome in many ways (ectopic recombination, gene structure interruption, epigenetic silencing). They are considered to play a key role in the origins of trematode clonal polymorphism. The authors greatly appreciate the help received at the Kartesh White Sea Biological Station of the Russian Academy of Sciences Zoological Institute. This work is funded by RSF 19-74-20102 and RFBR 17-04-02161 grants and the research program of the Zoological Institute of the Russian Academy of Sciences (project number AAAA-A19-119020690109-2).

Keywords: AFLP, clonal polymorphism, Himasthla elongata, mobile genetic elements, NGS

Procedia PDF Downloads 103
325 Solutions for Food-Safe 3D Printing

Authors: Geremew Geidare Kailo, Igor Gáspár, András Koris, Ivana Pajčin, Flóra Vitális, Vanja Vlajkov

Abstract:

Three-dimensional (3D) printing, a very popular additive manufacturing technology, has recently undergone rapid growth and has replaced conventional technology in applications ranging from prototyping to the production of end-user parts and products. 3D printing involves a digital manufacturing machine that produces three-dimensional objects according to designs created by the user via 3D modeling or computer-aided design/manufacturing (CAD/CAM) software. The most popular 3D printing system is Fused Deposition Modeling (FDM), also called Fused Filament Fabrication (FFF). A 3D-printed object is considered food safe if it can be in direct contact with food without any toxic effects, even after the object has been cleaned, stored and reused. This work analyzes the processing timeline of the filament (the material for 3D printing) from unboxing to extrusion through the nozzle. An important task is to analyze the growth of bacteria on the 3D-printed surface and in the gaps between the layers. By default, a 3D-printed object is not food safe after longer use and direct contact with food (even when printed with food-safe filaments), but there are solutions to this problem. The aim of this work was to evaluate 3D-printed objects from different perspectives of food safety. The first was testing antimicrobial 3D printing filaments from a food safety aspect, since a 3D-printed object in the food industry may have direct contact with food; accordingly, the main purpose of the work is to reduce the microbial load on the surface of a 3D-printed part. Coating with epoxy resin was also investigated, to see its effect on mechanical strength, thermal resistance, surface smoothness and food safety (cleanability). Another aim of this study was to test new temperature-resistant filaments and the effect of high temperature on 3D-printed materials, to see whether they can be cleaned with boiling or a similar high-temperature treatment.
This work proved that all three mentioned methods can improve the food safety of a 3D-printed object, but the size of the effect varies. The best result was obtained by coating with epoxy resin: the object became cleanable like any injection-molded plastic object with a smooth surface. Very good results were also obtained by boiling the objects, and it is encouraging that nowadays more and more special filaments have a food-safe certificate and can withstand boiling temperatures. Using antibacterial filaments reduced bacterial colonies to one fifth, and the biggest advantage of this method is that it does not require any post-processing; the object is ready straight out of the 3D printer. Acknowledgements: The research was supported by the Hungarian-Serbian bilateral scientific and technological cooperation project funded by the Hungarian National Office for Research, Development and Innovation (NKFI, 2019-2.1.11-TÉT-2020-00249) and the Ministry of Education, Science and Technological Development of the Republic of Serbia. The authors acknowledge the Doctoral School of Food Science of the Hungarian University of Agriculture and Life Sciences for its support in this study.

Keywords: food safety, 3D printing, filaments, microbial, temperature

Procedia PDF Downloads 116
324 Policy Views of Sustainable Integrated Solution for Increased Synergy between Light Railways and Electrical Distribution Network

Authors: Mansoureh Zangiabadi, Shamil Velji, Rajendra Kelkar, Neal Wade, Volker Pickert

Abstract:

The EU has set itself a long-term goal of reducing greenhouse gas emissions by 80-95% compared to 1990 levels by 2050, as set out in the Energy Roadmap 2050. This paper reports on the European Union H2020-funded E-Lobster project, which demonstrates tools and technologies, software and hardware, for integrating the distribution grid and railway power systems using power electronics (Smart Soft Open Point - sSOP) and local energy storage. In this context, the paper describes the existing policies and regulatory frameworks of the energy market at the European level, with a special focus at the national level on the countries where the members of the consortium are located and where the demonstration activities will be implemented. Taking into account the disciplinary approach of E-Lobster, the main policy areas investigated include electricity, the energy market, energy efficiency, transport and smart cities. Energy storage will play a key role in enabling the EU to develop a low-carbon electricity system. In recent years, Energy Storage Systems (ESSs) have been gaining importance due to emerging applications, especially the electrification of the transport sector and the grid integration of volatile renewables. The need for storage systems has led to performance improvements in ESS technologies and a significant price decline, opening a new market in which ESSs can be a reliable and economical solution. One such emerging market for ESS is R+G management, which will be investigated and demonstrated within the E-Lobster project. The surplus of energy in one type of power system (e.g., due to metro braking) might be directly transferred to the other power system (or vice versa). However, this would usually happen at unfavourable instants, when the recipient does not need additional power. Thus, the role of the ESS is to enhance the advantages of interconnecting railway power systems and distribution grids by offering an additional energy buffer.
Consequently, the surplus or deficit of energy in, e.g., railway power systems need not be immediately transferred to or from the distribution grid; it can be stored and used when it is really needed. This assures better management of the energy exchange between the railway power systems and distribution grids and leads to more efficient loss reduction. In this framework, identifying the existing policies and regulatory frameworks is crucial for the project activities and for the future development of business models for the E-Lobster solutions. The projections carried out by the European Commission, the Member States and stakeholders, and their analysis, indicated trends, challenges, opportunities and structural changes needed to design policy measures that provide an appropriate framework for investors. This study will be used as a reference for the discussion in the envisaged workshops with stakeholders (DSOs and transport managers) in the E-Lobster project.

Keywords: light railway, electrical distribution network, Electrical Energy Storage, policy

Procedia PDF Downloads 113
323 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy oil extraction process. They are composed of three main parts: the burner, the radiant section and the convective section. Natural gas is burned in staged diffusion flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed underground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially Nitrogen Oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emissions. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation processes. Moreover, to optimize burner operating conditions with regard to NOₓ emissions, field characterization and measurements are usually carried out. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. Therefore, the application of CFD seems more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG: the commercial software ANSYS Fluent and the open-source software OpenFOAM.
The RANS (Reynolds-Averaged Navier-Stokes) equations, closed with the k-epsilon turbulence model and combined with the Eddy Dissipation Concept for combustion modelling, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical composition are used as reference fields for this validation. Results show fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristics Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are identified and correlated to the physics of the flow. CFD is, therefore, a useful tool for providing insight into NOₓ emission phenomena in OTSGs: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for field tune-up.

Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through-steam-generators

Procedia PDF Downloads 92
322 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens

Authors: R. Tamborrino, F. Rinaudo

Abstract:

Digital technologies offer possibilities to effectively convert historical archives into instruments of knowledge that can guide the interpretation of historical phenomena. Digital conversion and management of these documents make it possible to add other sources in a unique and coherent model that permits the intersection of different data and opens new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g., cadastres, censuses, etc.). The geographic localisation of that information on cartographic supports allows the comprehension and visualisation of specific relationships between different historical realities, registering both the urban space and the people living there. These links, which merge data and documentation of different natures through a new organisation of the information, can suggest new interpretations of other related events. For all these kinds of analysis, GIS platforms today represent the most appropriate answer. The design of the related databases is the key to realising an ad-hoc instrument that facilitates the analysis and the intersection of data of different origins. Moreover, GIS has become the digital platform to which other kinds of data visualisation can be added. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories realised just prior to WWI provides the opportunity to test the potential of GIS platforms for the analysis of urban landscape modifications during the first industrial development of the town. The inventory includes data about location, activities, and people. The GIS is shaped in a creative way, linking different sources and digital systems, with the aim of creating a new type of platform conceived as an interface integrating different kinds of data visualisation.
The data processing links this information to the urban space and also visualises the growth of the city at that time. The sources related to the development of the urban landscape in that period are of different natures. The emerging necessity to build, enlarge, modify and join different buildings to boost industrial activities, in line with their fast development, is recorded in the official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect who designed the industrial building. All these collected data make it possible, first, to rebuild the process of change of the urban landscape by using GIS and 3D modelling technologies, thanks to access to the drawings (2D plans, sections and elevations) showing the previous and the planned situation. Furthermore, they give access to information for different queries of the linked dataset that could be useful for research with different targets, such as economic, biographical, architectural, or demographic studies. By superimposing a layer of the present city, the past meets the present: industrial heritage meets the contemporary town, and people meet urban history.

Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities

Procedia PDF Downloads 173
321 The Derivation of a Four-Strain Optimized Mohr's Circle for Use in Experimental Reinforced Concrete Research

Authors: Edvard P. G. Bruun

Abstract:

One of the best ways of improving our understanding of reinforced concrete is through large-scale experimental testing. The gathered information is critical for making inferences about structural mechanics and deriving the mathematical models that are the basis for finite element analysis programs and design codes. An effective way of measuring the strains across a region of a specimen is to use a system of surface-mounted Linear Variable Differential Transformers (LVDTs). While a single LVDT can only measure the linear strain in one direction, by combining several measurements at known angles a Mohr’s circle of strain can be derived for the whole region under investigation. This paper presents a method that researchers can use to improve the accuracy and remove experimental bias in the calculation of the Mohr’s circle, using four rather than three independent strain measurements. Obtaining high-quality strain data is essential, since the angular deviation (shear strain) and the angle of principal strain in the region are important properties in characterizing the governing structural mechanics. For example, the Modified Compression Field Theory (MCFT), developed at the University of Toronto, is a rotating crack model that requires knowing the direction of the principal stress and strain, and then calculates the average secant stiffness in this direction. However, since LVDTs can only measure average strains across a plane (i.e., between discrete points), the localized cracking and spalling that typically occur in reinforced concrete can lead to unrealistic results. To build in redundancy and improve the quality of the data gathered, the typical experimental setup for a large-scale shell specimen has four independent instrumented directions (X, Y, H, and V). The question then becomes: which three should be used? The most common approach is to simply discard one of the measurements.
The problem is that this can produce drastically different answers depending on the three strain values chosen. To overcome this experimental bias, and to avoid discarding valuable data, a more rigorous approach is to make use of all four measurements. This paper presents the derivation of a method to draw what is effectively a Mohr’s circle of 'best fit', which optimizes the circle by using all four independent strain values. The four-strain optimized Mohr’s circle approach has been used to process data from recent large-scale shell tests at the University of Toronto (Ruggiero, Proestos, and Bruun), where analysis of the test data has shown that the traditional three-strain method can lead to widely different results. This paper presents the derivation of the method and shows its application in the context of two reinforced concrete shells tested in pure torsion. In general, the constitutive models and relationships that characterize reinforced concrete are only as good as the experimental data on which they are built; ensuring that a rigorous and unbiased approach exists for calculating the Mohr’s circle of strain during an experiment is of utmost importance to the structural research community.
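The idea of a best-fit circle from an overdetermined set of readings can be sketched as a least-squares problem. The gauge angles below (0°, 45°, 90°, 135° for the X, H, Y, V directions) are an assumed layout for illustration, not necessarily the paper's exact setup, and the strain state is synthetic:

```python
# Hedged sketch: least-squares Mohr's circle from four normal-strain readings.
# Model: eps(t) = c + a*cos(2t) + b*sin(2t), with c the circle centre,
# a = (ex - ey)/2 and b = gxy/2; four equations, three unknowns.
import numpy as np

angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])   # assumed gauge directions

# Synthetic "measurements" generated from a known strain state
ex, ey, gxy = 1200e-6, -300e-6, 800e-6
eps = (ex + ey) / 2 + (ex - ey) / 2 * np.cos(2 * angles) + gxy / 2 * np.sin(2 * angles)

A = np.column_stack([np.ones_like(angles), np.cos(2 * angles), np.sin(2 * angles)])
(c, a, b), *_ = np.linalg.lstsq(A, eps, rcond=None)   # least-squares fit

radius = np.hypot(a, b)
theta_p = 0.5 * np.arctan2(b, a)       # principal strain direction
e1, e2 = c + radius, c - radius        # principal strains

print(f"principal strains: {e1:.6f}, {e2:.6f}, angle: {np.degrees(theta_p):.1f} deg")
```

With noisy real measurements, the residual of the fit also gives a direct check on how mutually consistent the four readings are, which the discard-one approach cannot provide.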

Keywords: reinforced concrete, shell tests, Mohr’s circle, experimental research

Procedia PDF Downloads 196
320 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration

Authors: S. J. Addinell, T. Richard, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising downhole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key technical challenges exist regarding the deployment and use of these bottom-hole assemblies for mineral exploration, and this paper discusses some of them. It presents experimental results obtained during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power-law relationship in the particle size distributions. Several percussive drilling parameters, such as RPM, applied fluid pressure and weight on bit, have been shown to influence the particle size distributions of the cuttings. This has a direct influence on other aspects of the operation, such as flow-loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of the percussive system's operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear and the applied weight on bit can influence the oscillation frequency. Because drilling conditions, and therefore operating parameters, change continuously, real-time understanding of the natural operating frequency is paramount to achieving system optimisation.
Several techniques for determining the oscillating frequency have been investigated and are presented. With a conventional top-drive drilling rig, spectral analysis of the applied fluid pressure, hydraulic feed force pressure, hold-back pressure and drill string vibrations reveals the operating frequency of the bottom-hole tooling. With a coiled tubing drilling rig, however, which uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and another method must be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for determining the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
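The spectral approach described above can be illustrated with a minimal FFT peak-detection sketch. The 30 Hz hammer frequency, sample rate and noise level below are assumptions chosen for the example, not values from the field trials:

```python
# Hedged sketch: recovering a percussive hammer's oscillation frequency from
# a noisy geophone-like record via the one-sided FFT amplitude spectrum.
import numpy as np

fs = 1000.0                      # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)    # 2-second record
f_hammer = 30.0                  # "true" percussive frequency in Hz (assumed)

rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * f_hammer * t) + 0.8 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))          # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum[0] = 0.0                               # ignore the DC component
peak_freq = freqs[np.argmax(spectrum)]          # dominant peak = hammer frequency
print(f"detected oscillation frequency: {peak_freq:.1f} Hz")
```

In practice the record length sets the frequency resolution (fs divided by the number of samples), so longer windows sharpen the estimate at the cost of responsiveness to changing drilling conditions.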

Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis

Procedia PDF Downloads 213
319 Edmonton Urban Growth Model as a Support Tool for the City Plan Growth Scenarios Development

Authors: Sinisa J. Vukicevic

Abstract:

Edmonton is currently one of the youngest North American cities and has achieved significant growth over the past 40 years. This strong urban shift requires a new approach to how the city is envisioned, planned, and built: evidence-based scenario development, in which an urban growth model was a key support tool for framing Edmonton's development strategies, developing urban policies, and assessing policy implications. The urban growth model was developed on the Metronamica software platform. The Metronamica land use model evaluated the dynamics of land use change under the influence of the key development drivers (population and employment), zoning, land suitability, and land and activity accessibility. The model was designed following the Big City Moves ideas: become greener as we grow, develop a rebuildable city, ignite a community of communities, foster a healing city, and create a city of convergence. The Big City Moves were converted into three development scenarios: ‘Strong Central City’, ‘Node City’, and ‘Corridor City’. Each scenario has a narrative that expresses its high-level goal, its approach to residential and commercial activities, its transportation vision, and its employment and environmental principles. Land use demand was calculated for each scenario according to specific density targets. Spatial policies were analyzed according to their level of importance within the policy set definition for each scenario, but also through the policy measures. The model was calibrated to reproduce the known historical land use pattern, using 2006 and 2011 land use data. The validation was done independently, on data not used for calibration: the model was validated against 2016 data.
In general, the modeling process contains three main phases: ‘from qualitative storyline to quantitative modelling’, ‘model development and model run’, and ‘from quantitative modelling to qualitative storyline’. The model also incorporates five spatial indicators: distance from residential areas to work, distance from residential areas to recreation, distance to the river valley, urban expansion, and habitat fragmentation. The major findings of this research can be viewed from two perspectives: the planning perspective and the technology perspective. The planning perspective evaluates the model as a tool for scenario development. Using the model, we explored the land use dynamics influenced by different sets of policies. The model enables a direct comparison between the three scenarios: we explored their similarities and differences through quantitative indicators such as land use change, population change (and spatial allocation), job allocation, density (population, employment, and dwelling units), habitat connectivity, and proximity to objects of interest. From the technology perspective, the model showed one very important characteristic: flexibility. The direction of policy testing changed many times during the consultation process, and the model's flexibility in accommodating all these changes was highly appreciated. The model satisfied our needs as a scenario development and evaluation tool, but also as a communication tool during the consultation process.
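The calibration/validation split described above (fit against the 2006–2011 change, check against held-out 2016 data) can be sketched minimally as follows. This is an illustrative sketch only: the `agreement` helper and the toy land-use grids are hypothetical stand-ins for the map-comparison statistics a platform such as Metronamica would compute on real raster data.

```python
# Hedged sketch: validating a simulated land-use map against observed data
# for a held-out year, using simple cell-by-cell agreement as the metric.

def agreement(simulated, observed):
    """Fraction of grid cells whose land-use class matches."""
    if len(simulated) != len(observed):
        raise ValueError("grids must have the same number of cells")
    matches = sum(1 for s, o in zip(simulated, observed) if s == o)
    return matches / len(observed)

# Toy land-use grids flattened to lists (classes: R=residential,
# C=commercial, G=green space, V=vacant) for the validation year.
observed_2016 = ["R", "R", "C", "G", "V", "R", "C", "G"]
simulated_2016 = ["R", "R", "C", "G", "R", "R", "C", "V"]

score = agreement(simulated_2016, observed_2016)
print(f"validation agreement: {score:.2f}")  # 6 of 8 cells match: 0.75
```

In practice, land-use modellers prefer metrics that correct for chance agreement and for cell quantity versus allocation error, but the principle is the same: the validation score is computed only on data the calibration never saw.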

Keywords: urban growth model, scenario development, spatial indicators, Metronamica

Procedia PDF Downloads 73
318 Drivers of the Performance of Members of a Social Incubator Considering the Values of Work: A Qualitative Study with Social Entrepreneurs

Authors: Leticia Lengler, Vania Estivalete, Vivian Flores Costa, Tais De Andrade, Lisiane Fellini Faller

Abstract:

Social entrepreneurship has emerged as a driver of a new development perspective which, as the literature notes, is based on innovation and, above all, on the creation of social value rather than personal or shareholder wealth. In this field of study, one focus of discussion concerns the distinct characteristics of the individuals responsible for socially directed initiatives, known as social entrepreneurs. To contribute to this perspective, the present study aims to identify the work-related values that guide the performance of social entrepreneurs who are members of enterprises developed within a social incubator at a federal institution of higher education in Brazil. Each person's value system is present in different facets of their life, manifesting itself in their choices and in the way they relate to other people in society. The values of work, the focus of this research, play a significant role in organizational studies, since they are considered among the important guiding principles of individual behavior in the work environment. Regarding the method, a descriptive and qualitative study was carried out. For data collection, 24 entrepreneurs, members of five different enterprises belonging to the social incubator, were interviewed. The research instrument consisted of three open questions, which could be answered with the support of a "disc of values", an artifact organized to present the work values clearly to the respondents. The analysis of the interviews took into account categories defined a priori, based on the model proposed by previous authors who validated these constructs within their research contexts, contemplating the following dimensions: Self-determination and stimulation; Safety; Conformity; Universalism and benevolence; Achievement; and Power.
It should be noted that, to make the dimensions easier for the interviewees to understand, they were represented on the "disc of values" by the objectives that define them, respectively: Challenge; Financial independence; Commitment; Welfare of others; Personal success; and Power. Preliminary results show that, as guiding principles, priority is given to the work values related to Self-determination and stimulation, Conformity, and Universalism and benevolence. These findings point to the importance these individuals attach to independent thinking and acting, as well as to novelty and constant challenge. They also demonstrate an appreciation of commitment to their enterprise, to the people who make it up, and to the quality of their work, and they point to the relevance of being able to contribute to the greater social good, that is, to the well-being of close people and of society, as implied in models of social entrepreneurship found in the literature. The values denominated Safety and Achievement, concerning financial matters at work and the pursuit of satisfaction and personal success through the use of socially recognized skills, were mentioned with little emphasis by the social entrepreneurs. The value of Power was not considered a guiding principle of work by the respondents.

Keywords: qualitative study, social entrepreneur, social incubator, values of work

Procedia PDF Downloads 237