Search results for: small business barriers
935 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used because they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of three years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools before using the toolkit for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools in use as well as deeper discussions of each topic, allowing teams to adapt the process to their skills, preferences and product type.
Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop which helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers' needs and wants.
Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer
Procedia PDF Downloads 108
934 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth
Authors: Hemant Upadhyay, Tarun Kumar Kundu
Abstract:
It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth for a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and a stable BF performance. It is not possible to carry out any direct measurement of the above due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal / slag accumulation and temperature during the tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered as a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake in the hearth. A small drainage outlet (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of each phase accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself.
Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on this information, a dynamic simulation is carried out that provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts critical event timings during tapping, and gives the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL, India BF-II, and its output is regularly cross-checked with actual tapping data, with which it is in good agreement.
Keywords: blast furnace, hearth, deadman, hot metal
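The accumulation and drainage balance described in this abstract can be illustrated with a minimal sketch. This is not the authors' model: the square-root drainage law, the parameter values, and the function names are all illustrative assumptions, chosen only to show how a mass-balance Euler integration of hearth liquid level might look.

```python
# Illustrative sketch (not the authors' model): hot metal accumulation in the
# hearth as a simple mass balance, with an assumed sqrt(V) drainage law once
# the tap hole is open. All parameter values are hypothetical.
import math

def simulate_hearth(production_rate, tap_open_at, dt=1.0, steps=600,
                    k_drain=0.15):
    """Euler integration of dV/dt = production - drainage.

    production_rate : liquid metal inflow (m^3/min)
    tap_open_at     : time (min) at which the tap hole is opened
    k_drain         : lumped drainage coefficient (tap-hole area, head losses)

    Drainage is taken proportional to sqrt(V), mimicking the dependence of
    tap-hole outflow on the accumulated liquid level.
    """
    volume, history = 0.0, []
    for step in range(steps):
        t = step * dt
        drainage = k_drain * math.sqrt(volume) if t >= tap_open_at else 0.0
        volume = max(0.0, volume + (production_rate - drainage) * dt)
        history.append(volume)
    return history

levels = simulate_hearth(production_rate=0.5, tap_open_at=300.0)
```

Before tapping, the level rises linearly with the production rate; once the tap hole opens, the volume decays toward a quasi-steady level where drainage balances production, which is the kind of real-time trajectory the dynamic simulation tracks.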
Procedia PDF Downloads 185
933 Stuttering Persistence in Children: Effectiveness of the Psicodizione Method in a Small Italian Cohort
Authors: Corinna Zeli, Silvia Calati, Marco Simeoni, Chiara Comastri
Abstract:
Developmental stuttering affects about 10% of preschool children; although the percentage of natural recovery is high, about a quarter of them will become adults who stutter. An effective early intervention should help those children at high risk of persistence. The Psicodizione method for early stuttering is an Italian indirect behavioral treatment for preschool children who stutter, in which parents act as good guides for communication, modeling their own fluency. In this study, we give a preliminary measure of the long-term effectiveness of the Psicodizione method on stuttering preschool children with a high persistence risk. Among all Italian children treated with the Psicodizione method between 2018 and 2019, we selected eight children with at least three high-risk persistence factors from the Illinois Prediction Criteria proposed by Yairi and Seery. The factors chosen for the selection were: one parent who stutters (1 pt mother; 1.5 pt father), male gender, ≥ 4 years old at onset, and ≥ 12 months from onset of symptoms before treatment. For this study, the families were contacted after an average period of 14.7 months (range 3-26 months). Parental reports were gathered with a standard online questionnaire in order to obtain data reflecting fluency across a wide range of the children's life situations. The minimum worthwhile outcome was set at "mild evidence" on a 5-point Likert scale (1 = mild evidence, 5 = high-severity evidence). A second group of six children, among those treated with the Psicodizione method, was selected as having high potential for spontaneous remission (low persistence risk). The children in this group had to fulfill all the following criteria: female gender, symptoms for less than 12 months before treatment, age of onset < 4 years, and neither parent with persistent stuttering. At the time of this follow-up, the children were aged 6-9 years, with a mean of 15 months post-treatment.
Among the children in the high persistence risk group, 2 (25%) no longer stuttered, and 3 (37.5%) had a mild stutter based on parental reports. In the low persistence risk group, the children were aged 4-6 years, with a mean of 14 months post-treatment, and 5 (83%) no longer stuttered (for the past 16 months on average). Thus, 62.5% of the children at high risk of persistence showed at most mild evidence of stuttering after Psicodizione treatment, and 75% of parents reported better fluency than before the treatment. The low persistence risk group seemed to be representative of spontaneous recovery. This study's design could help to better evaluate the success of the proposed interventions for stuttering preschool children and provides a preliminary measure of the effectiveness of the Psicodizione method on children at high persistence risk.
Keywords: early treatment, fluency, preschool children, stuttering
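The selection rule above, at least three of the listed Illinois Prediction Criteria factors, can be sketched as a simple factor count. The function names are illustrative, and the sketch counts each listed factor as one, without the differential parental point weights quoted in the abstract, which is an assumption made purely for clarity.

```python
# Sketch of the high-persistence-risk selection described in the abstract.
# Each of the four listed Illinois Prediction Criteria factors is counted
# once (an illustrative simplification; the abstract quotes 1 pt for a
# stuttering mother and 1.5 pt for a stuttering father).
def persistence_risk_factors(mother_stutters, father_stutters, male,
                             onset_age_years, months_since_onset):
    factors = 0
    if mother_stutters or father_stutters:
        factors += 1          # family history of stuttering
    if male:
        factors += 1          # male gender
    if onset_age_years >= 4:
        factors += 1          # late onset (>= 4 years old)
    if months_since_onset >= 12:
        factors += 1          # symptoms >= 12 months before treatment
    return factors

def high_persistence_risk(**kw):
    # The study selected children presenting at least 3 of these factors.
    return persistence_risk_factors(**kw) >= 3
```

Under this sketch, a boy with a stuttering mother, onset at 4, and 14 months of symptoms would qualify, while a girl with recent onset and no family history would fall into the low-risk comparison group.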
Procedia PDF Downloads 218
932 Beyond the Flipped Classroom: A Tool to Promote Autonomy, Cooperation, Differentiation and the Pleasure of Learning
Authors: Gabriel Michel
Abstract:
The aim of our research is to find solutions for adapting university teaching to today's students and companies. To achieve this, we have tried to change the posture and behavior of those involved in the learning situation by promoting other skills. There is a gap between the expectations and functioning of students and university teaching. At the same time, the business world needs employees who are obviously competent and proficient in technology, but who are also imaginative, flexible, able to communicate, learn on their own and work in groups. These skills are rarely developed as a goal at university. The flipped classroom has been one solution, thanks to digital tools such as Moodle, but the model behind these tools is still centered on teachers and classic learning scenarios: it makes course materials available without really involving students or encouraging them to cooperate. It is against this backdrop that we conducted action research to explore the possibility of changing the way we learn (rather than teach) by changing the posture of both the classic student and the teacher. We hypothesized that a tool we developed would encourage autonomy, the possibility of progressing at one's own pace, collaboration, and learning using all available resources (other students, course materials, those on the web, and the teacher/facilitator). Experimentation with this tool was carried out with around thirty German and French first-year students at the Université de Lorraine in Metz (France). The projected changes in the groups' learning situations were as follows: - use the flipped classroom approach, but with a few traditional presentations by the teacher (materials having been put on a server) and lots of collective case solving, - engage students in their learning by inviting them to set themselves a primary objective from the outset, e.g.
"Assimilating 90% of the course", and secondary objectives (like a to-do list) such as "create a new case study for Tuesday", - encourage students to take control of their learning (knowing at all times where they stand and how far they still have to go), - develop cooperation: the tool should encourage group work, the search for common solutions and the exchange of the best solutions with other groups. Those who have advanced much faster than the others, or who already have expertise in a subject, can become tutors for the others. A student can also present a case study he or she has developed, for example, or share materials found on the web or produced by the group, as well as evaluate the productions of others, and so on. A questionnaire and an analysis of assessment results showed that the test group made considerable progress compared with a similar control group. These results confirmed our hypotheses. Obviously, this tool is only effective if the organization of teaching is adapted and if teachers are willing to change the way they work.
Keywords: pedagogy, cooperation, university, learning environment
Procedia PDF Downloads 22
931 A Model for a Continuous Professional Development Program for Early Childhood Teachers in Villages: Insights from the Coaching Pilot in Indonesia
Authors: Ellen Patricia, Marilou Hyson
Abstract:
Coaching has shown great potential to strengthen the impact of brief group trainings and to help early childhood teachers solve specific problems at work, with the goal of raising the quality of early childhood services. However, there have been some doubts about the benefits that village teachers can receive from coaching. It is perceived that village teachers may struggle with the thinking skills needed to make coaching beneficial. Furthermore, there are reservations about whether principals and supervisors in villages are open to coaching's facilitative approach, as opposed to the directive approach they have been using. As such, the use of coaching to develop the professionalism of early childhood teachers in the villages needs to be examined. The Coaching Pilot for early childhood teachers in Indonesian villages provides insights into the above issues. The Coaching Pilot is part of the ECED Frontline Pilot, a collaboration between the Government of Indonesia and the World Bank with support from the Australian Government (DFAT). The Pilot started with coordinated efforts with the local government in two districts to select principals and supervisors, equipped with basic knowledge about early childhood education, to take part in a two-day coaching training. Afterwards, the participants were asked to complete 25 hours of coaching with early childhood teachers who had participated in the Enhanced Basic Training for village teachers. The participants who completed this requirement were then invited for an assessment of their coaching skills. Following that, a qualitative evaluation was conducted using in-depth interviews and focus group discussion techniques. The evaluation focuses on the impact of the coaching pilot in helping the village teachers develop their professionalism, as well as on the sustainability of the intervention.
Results from the evaluation indicated that although their limited education may constrain their thinking skills, village teachers benefited from the coaching that they received. Moreover, the evaluation results also suggested that with enough training and support, principals and supervisors in the villages were able to provide an adequate coaching service for the teachers. Beyond this small start, interest is growing, both within the pilot districts and beyond, due to word of mouth about the benefits that the Coaching Pilot has created. The districts where coaching was piloted have planned to continue the coaching program, since a number of early childhood teachers have requested to be coached, and a number of principals and supervisors have requested to be trained as coaches. Furthermore, the Association for Early Childhood Educators in Indonesia has started to adopt coaching into its programs. Although further research is needed, the Coaching Pilot suggests that coaching can positively impact early childhood teachers in villages, and that village principals and supervisors can become a promising source of future coaches. As such, coaching has significant potential to become a sustainable model for a continuous professional development program for early childhood teachers in villages.
Keywords: coaching, coaching pilot, early childhood teachers, principals and supervisors, village teachers
Procedia PDF Downloads 240
930 Association between Maternal Personality and Postnatal Mother-to-Infant Bonding
Authors: Tessa Sellis, Marike A. Wierda, Elke Tichelman, Mirjam T. Van Lohuizen, Marjolein Berger, François Schellevis, Claudi Bockting, Lilian Peters, Huib Burger
Abstract:
Introduction: Most women develop a healthy bond with their children; however, adequate mother-to-infant bonding cannot be taken for granted. Mother-to-infant bonding refers to the feelings and emotions experienced by the mother towards her child. It is an ongoing process that starts during pregnancy and develops during the first year postpartum and likely throughout early childhood. The prevalence of inadequate bonding ranges from 7 to 11% in the first weeks postpartum. An impaired mother-to-infant bond can cause long-term complications for both mother and child. Very little research has been conducted on the direct relationship between the personality of the mother and mother-to-infant bonding. This study explores the associations between maternal personality and postnatal mother-to-infant bonding. The main hypothesis is that there is a relationship between neuroticism and mother-to-infant bonding. Methods: Data for this study were drawn from the Pregnancy Anxiety and Depression Study (2010-2014), which examined symptoms of and risk factors for anxiety or depression during pregnancy and the first year postpartum in 6,220 pregnant women who received primary, secondary or tertiary care in the Netherlands. The study was expanded in 2015 to investigate postnatal mother-to-infant bonding. For the current research, 3,836 participants were included. During the first trimester of gestation, baseline characteristics, as well as personality, were measured through online questionnaires. Personality was measured by the NEO Five-Factor Inventory (NEO-FFI), which covers the Big Five personality traits (neuroticism, extraversion, openness, altruism and conscientiousness). Mother-to-infant bonding was measured postpartum by the Postpartum Bonding Questionnaire (PBQ). Univariate linear regression analysis was performed to estimate the associations. Results: 5% of the PBQ respondents reported impaired bonding.
A statistically significant association was found between neuroticism and mother-to-infant bonding (p < .001): mothers scoring higher on neuroticism reported poorer mother-to-infant bonding. In addition, the personality traits extraversion (b = -.081), openness (b = -.014), altruism (b = -.067) and conscientiousness (b = -.060) were negatively associated with PBQ scores, suggesting that higher scores on these traits were related to better bonding. Discussion: This study is one of the first to demonstrate a direct association between the personality of the mother and mother-to-infant bonding. A statistically significant relationship was found between neuroticism and mother-to-infant bonding; however, the percentage of variance predictable by a personality dimension is very small. This study has examined one part of the multifactorial topic of mother-to-infant bonding and offers more insight into this rarely investigated and complex matter. For midwives, it is important to recognize the risk factors for impaired bonding and subsequently improve policy for women at risk.
Keywords: mother-to-infant bonding, personality, postpartum, pregnancy
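The univariate linear regression used above to estimate each association can be sketched with closed-form ordinary least squares. The data below are fabricated toy values, not study data; they only illustrate how a slope between a personality score and a bonding score would be estimated.

```python
# Minimal OLS sketch of a univariate association between a personality score
# (e.g., an NEO-FFI scale) and a bonding score (e.g., the PBQ).
# The data are fabricated for illustration only.
def linear_regression(x, y):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return mean_y - slope * mean_x, slope

# Toy data: higher neuroticism paired with higher (worse) bonding scores.
neuroticism = [20, 25, 30, 35, 40]
pbq = [5, 7, 8, 11, 12]
intercept, slope = linear_regression(neuroticism, pbq)  # slope = 0.36
```

A positive slope here means higher neuroticism predicts more bonding problems; in the study, the small amount of explained variance is what limits the practical predictive value of any single trait.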
Procedia PDF Downloads 364
929 Combination of Silver-Curcumin Nanoparticle for the Treatment of Root Canal Infection
Authors: M. Gowri, E. K. Girija, V. Ganesh
Abstract:
Background and Significance: Among dental infections, inflammation and infection of the root canal are common in all age groups. Currently, the management of root canal infections involves cleaning the canal with powerful irrigants followed by intracanal medicament application. Though these treatments have been in vogue for a long time, root canal failures do occur. Treatment for root canal infections is limited by the anatomical complexity in terms of small micrometer volumes and the poor penetration of drugs. Thus, infections of the root canal present a challenge that demands the development of new agents that can eradicate Candida albicans. Methodology: In the present study, we synthesized and screened silver-curcumin nanoparticles against C. albicans. Detailed molecular studies of the effect of the silver-curcumin nanoparticles on C. albicans pathogenicity were carried out. Morphological cell damage and the antibiofilm activity of the silver-curcumin nanoparticles on C. albicans were studied using scanning electron microscopy (SEM). Biochemical evidence for membrane damage was obtained using flow cytometry. Further, the antifungal activity of the silver-curcumin nanoparticles was evaluated in an ex vivo dentinal tubule infection model. Results: Screening data showed that the silver-curcumin nanoparticles were active against C. albicans. The nanoparticles exerted a time-kill effect and a post-antifungal effect. When used in combination with fluconazole or nystatin, the silver-curcumin nanoparticles lowered the minimum inhibitory concentration (MIC) of both drugs. In-depth molecular studies showed that the silver-curcumin nanoparticles inhibited yeast-to-hyphae (Y-H) conversion. Further, SEM images of C. albicans showed that the nanoparticles caused membrane damage and inhibited biofilm formation. Biochemical evidence for membrane damage was confirmed by increased propidium iodide (PI) uptake in flow cytometry.
Further, the antifungal activity of the silver-curcumin nanoparticles was evaluated in an ex vivo dentinal tubule infection model, which mimics human tooth root canal infection. Confocal laser scanning microscopy studies showed eradication of C. albicans and a reduction in colony-forming units (CFU) after 24 h of treatment in the infected tooth samples in this model. Conclusion: The results of this study can pave the way for developing new antifungal agents with well-deciphered mechanisms of action, and the silver-curcumin nanoparticle can be a promising antifungal agent or medicament against root canal infection.
Keywords: C. albicans, ex vivo dentine model, inhibition of biofilm formation, root canal infection, yeast to hyphae conversion inhibition
Procedia PDF Downloads 208
928 Assessment of the Efficacy of Routine Medical Tests in Screening Medical Radiation Staff in Shiraz University of Medical Sciences Educational Centers
Authors: Z. Razi, S. M. J. Mortazavi, N. Shokrpour, Z. Shayan, F. Amiri
Abstract:
Long-term exposure to low doses of ionizing radiation occurs in radiation health care workplaces. Although doses in the health professions are generally very low, there are still matters of concern. The radiation safety program promotes occupational radiation safety through accurate and reliable monitoring of radiation workers in order to manage radiation protection effectively. To achieve this goal, it has become mandatory to implement periodic health examinations. As a result, working populations with a common occupational radiation history are screened on the basis of hematological alterations. This paper calls into question the effectiveness of blood component analysis as a screening program, which is mandatory for medical radiation workers in some countries. This study details the distribution and trends of changes in blood components, including white blood cells (WBCs), red blood cells (RBCs) and platelets, as well as the cumulative doses received from occupational radiation exposure. The study was conducted among 199 participants and 100 control subjects at the medical imaging departments of the central hospital of Shiraz University of Medical Sciences during the years 2006-2010. Descriptive and analytical statistics, with P < 0.05 considered statistically significant, were used for data analysis. The results of this study show that there is no significant difference between the radiation workers and controls regarding WBC and platelet counts during the 4 years. We also found no statistically significant difference between the two groups with respect to RBCs. RBCs were analyzed separately by gender, because of the lower reference range for normal RBC levels in women compared to men, and again no statistically significant difference was observed.
Moreover, in a separate evaluation of WBC count against the personnel's work experience and their annual exposure dose, no linear correlation was found among the three variables. Since the hematological findings were within the range of control levels, it can be concluded that the radiation dose (which was not more than 7.58 mSv in this study) had been too small to stimulate any quantifiable change in the medical radiation workers' blood counts. Thus, the use of a more accurate screening method, based on the working profile of the radiation workers and their accumulated dose, is suggested. In addition, the complexity of radiation-induced effects and the influence of various factors on blood count alteration should be taken into account.
Keywords: blood cell count, mandatory testing, occupational exposure, radiation
Procedia PDF Downloads 461
927 Tracing the Developmental Repertoire of the Progressive: Evidence from L2 Construction Learning
Abstract:
Research investigating language acquisition from a constructionist perspective has demonstrated that language is learned as constructions at various linguistic levels, a process related to factors of frequency, semantic prototypicality, and form-meaning contingency. However, previous research on construction learning has tended to focus on clause-level constructions such as verb argument constructions; few attempts have been made to study morpheme-level constructions such as the progressive construction, which is regarded as a source of acquisition problems for English learners from diverse L1 backgrounds, especially for those whose L1, such as German or Chinese, does not have an equivalent construction. To trace the developmental trajectory of Chinese EFL learners' use of the progressive with respect to verb frequency, verb-progressive contingency, and verbal prototypicality and generality, a learner corpus consisting of three sub-corpora representing three different English proficiency levels was extracted from the Chinese Learners of English Corpora (CLEC). As the reference point, a native speakers' corpus extracted from the Louvain Corpus of Native English Essays was also established. All the texts were annotated with the C7 tagset by part-of-speech tagging software. After annotation, all valid progressive hits were retrieved with AntConc 3.4.3, followed by a manual check.
Frequency-related data showed that from the lowest to the highest proficiency level, (1) the type-token ratio increased steadily from 23.5% to 35.6%, approaching the 36.4% of the native speakers' corpus and indicating a wider use of verbs in the progressive; (2) the normalized entropy value rose from 0.776 to 0.876, approaching the target score of 0.886 in the native speakers' corpus and revealing that upper-intermediate learners exhibited a more even distribution and more productive use of verbs in the progressive; (3) activity verbs (i.e., verbs with prototypical progressive meanings like running and singing) dropped from 59% to 34%, while non-prototypical verbs such as state verbs (e.g., being and living) and achievement verbs (e.g., dying and finishing) were increasingly used in the progressive. Apart from raw frequency analyses, collostructional analyses were conducted to quantify verb-progressive contingency and to determine which verbs were distinctively associated with the progressive construction. The results were in line with the raw frequency findings, showing that the contingency between the progressive and non-prototypical verbs, represented by light verbs (e.g., going, doing, making, and coming), increased as English proficiency developed. These findings altogether suggest that beginning Chinese EFL learners were less productive in using the progressive construction: they were constrained by a small set of verbs with concrete and typical progressive meanings (e.g., the activity verbs). But as English proficiency increased, their use of the progressive began to spread to marginal members such as the light verbs.
Keywords: construction learning, corpus-based, progressives, prototype
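Two of the frequency measures reported above, type-token ratio and normalized entropy, can be sketched directly. The token list below is a toy example, not CLEC data; the function names are illustrative.

```python
# Sketch of two corpus measures used above: type-token ratio and normalized
# entropy of the verb distribution in the progressive. Toy data only.
import math
from collections import Counter

def type_token_ratio(tokens):
    """Distinct types divided by total tokens."""
    return len(set(tokens)) / len(tokens)

def normalized_entropy(tokens):
    """Shannon entropy of the type distribution, scaled to [0, 1] by the
    maximum possible entropy log2(number of types); 1 means a perfectly
    even distribution over the types."""
    counts = Counter(tokens)
    n = len(tokens)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts)) if len(counts) > 1 else 0.0

verbs = ["running", "running", "running", "singing", "doing", "being"]
ttr = type_token_ratio(verbs)        # 4 types / 6 tokens
evenness = normalized_entropy(verbs)
```

A learner sample dominated by a few activity verbs yields a low TTR and an entropy well below 1, while the more even verb distribution of advanced learners pushes both measures toward the native-speaker values quoted above.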
Procedia PDF Downloads 128
926 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research answers which transit network structure is most suitable to serve specific demand requirements in an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips, an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transferring is essential to complete most trips. To answer which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to the total cost of each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way by considering that only a central area attracts all trips. If this area is small, we have a highly concentrated mobility pattern; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers.
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to obtain the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
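The concentration dimension above rests on the Gini coefficient, which can be sketched over per-zone trip attractions. The zone counts below are invented for illustration; only the formula itself is standard.

```python
# Sketch of the concentration measure mentioned above: the Gini coefficient
# of trips attracted per zone. Zone trip counts are toy values.
def gini(values):
    """Gini coefficient in [0, 1]: 0 = trips spread evenly over zones,
    values near 1 = trips concentrated in a few zones."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Standard formula based on ranked cumulative shares.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

even_city = [100, 100, 100, 100]   # dispersed demand, Gini = 0
monocentric = [10, 10, 10, 370]    # demand concentrated in the center
```

In the paper's terms, a low Gini (dispersed demand) pushes the city toward the transfer-based region of applicability, while a high Gini with strong centralization favors radial or direct-trip structures.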
Procedia PDF Downloads 230
925 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving-model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving-model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, high-speed particle image velocimetry (HS-PIV) measurements on an ICE3 train model were carried out in the moving-model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model.
Conditional sampling is used to analyse the size and dynamics of the flow structures in the train wake at the time of maximum velocity. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by larger roughness elements, especially when these are applied at heights close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
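The integral boundary layer quantities named in this abstract follow directly from a measured wall-normal velocity profile. A minimal sketch of the computation, using an illustrative 1/7th-power-law profile rather than measurement data (the free-stream velocity and layer thickness below are invented for the example):

```python
import numpy as np

def trapezoid(f, x):
    """Trapezoidal integration, written out explicitly."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def integral_thicknesses(y, u, u_inf):
    """Displacement thickness, momentum thickness and form factor
    from a wall-normal velocity profile u(y)."""
    r = u / u_inf
    delta_star = trapezoid(1.0 - r, y)            # displacement thickness
    theta = trapezoid(r * (1.0 - r), y)           # momentum thickness
    return delta_star, theta, delta_star / theta  # form factor H

# Illustrative turbulent profile: 1/7th-power law inside delta = 0.05 m
y = np.linspace(0.0, 0.05, 500)
u = 30.0 * (y / 0.05) ** (1.0 / 7.0)
d_star, theta, H = integral_thicknesses(y, u, 30.0)
print(H)  # close to the analytical value 9/7 for this profile
```

For a 1/7th-power-law profile the analytical values are delta_star = delta/8 and theta = 7*delta/72, so the form factor is 9/7, which the numerical result reproduces.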
Procedia PDF Downloads 305
924 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. 
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, resolving the incompatibility of runtime dependencies and the aforementioned security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
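The a1-b1 / a2-b2 example from the abstract can be sketched as a simple placement function. This is only an illustration of the embedding idea, not the paper's formal method, and all names are hypothetical: the i-th replica of each communicating service is placed on the same machine, so each pair talks over localhost without a load balancer.

```python
from itertools import zip_longest

def embed(replicas_a, replicas_b):
    """Place the i-th replica of two communicating microservices on the
    same machine, so each pair communicates over the localhost interface."""
    placement = {}
    for i, (a, b) in enumerate(zip_longest(replicas_a, replicas_b), start=1):
        # Uneven replica counts leave the extra replicas un-paired
        placement[f"m{i}"] = [s for s in (a, b) if s is not None]
    return placement

print(embed(["a1", "a2"], ["b1", "b2"]))
# {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```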
Procedia PDF Downloads 203
923 Conflicts of Interest in the Private Sector and the Significance of the Public Interest Test
Authors: Opemiposi Adegbulu
Abstract:
Conflict of interest is an elusive, diverse and engaging subject, and a cross-cutting problem of governance at all levels, from local to global and from the public to the corporate and financial sectors. In all these areas, its mismanagement can lead to the distortion of decision-making processes, the corrosion of trust and the weakening of administration. According to Professor Peters, an expert in the area, conflict of interest, a problem at the root of many scandals, has “become a pervasive ethical concern in our professional, organisational, and political life”. Conflicts of interest corrode trust, and as in the public sector, trust is essential for the market, consumers/clients, shareholders and other stakeholders in the private sector. However, conflicts of interest in the private sector are distinct and must be treated as such when regulatory efforts are made to address them. The research looks at identifying conflicts of interest in the private sector and differentiating them from those in the public sector. The public interest is submitted as a criterion which allows for such differentiation. This is significant because it would allow for the use of tailor-made or sector-specific approaches to addressing this complex issue. This is conducted through an extensive review of literature and theories on the definition of conflicts of interest. The study employs theoretical, doctrinal and comparative methods. The nature of conflicts of interest in the private sector will be explored through an analysis of the public sector, where the notion of conflicts of interest appears more clearly identified; reasons why they are a business ethics concern will be advanced; and then, looking once again at public sector solutions and other solutions, the study will identify ways of mitigating and managing conflicts in the private sector. 
An exploration of public sector conflicts of interest and solutions will be carried out because the typologies of conflicts of interest in both sectors appear very similar at the core, and thus lessons can be learnt with regard to the management of these issues in the private sector. This research will then focus on some specific challenges to understanding and identifying conflicts of interest in the private sector: their origin, diverging theories, the psychological barrier to definition, and similarities with public sector conflicts of interest due to the notions of corrosion of trust, ‘being in a particular kind of situation,’ etc. The notion of public interest will be submitted as a key element at the heart of the distinction between public sector and private sector conflicts of interest. It will then be proposed that the appreciation of the notion of conflicts of interest differs from sector to sector and from country to country, based on the public interest test, using the United Kingdom (UK), the United States of America (US), France and the Philippines as illustrations.
Keywords: conflicts of interest, corporate governance, global governance, public interest
Procedia PDF Downloads 401
922 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N
Authors: Oindrila Nath, S. Sridharan
Abstract:
Long-term (2002-2012) temperature and ozone measurements by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) satellite, zonally averaged over 10°N-15°N, are used to study long-term changes and responses to the solar cycle, the quasi-biennial oscillation (QBO) and the El Niño Southern Oscillation (ENSO). The region is selected because it provides more accurate long-term trends and variabilities than was possible earlier with lidar measurements over Gadanki (13.5°N, 79.2°E), which are limited to cloud-free nights, whereas continuous data sets of SABER temperature and ozone are available. Regression analysis of temperature shows a cooling trend of 0.5 K/decade in the stratosphere and of 3 K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere, although there is a small positive trend in the stratosphere at 25 km. Other than this, no significant ozone trend is observed in the stratosphere. A negative ozone-QBO response (0.02 ppmv/QBO), a positive ozone-solar cycle response (0.91 ppmv/100 SFU) and a negative response to ENSO (0.51 ppmv/SOI) are found mainly in the mesosphere, whereas a positive ozone response to ENSO (0.23 ppmv/SOI) is pronounced in the stratosphere (20-30 km). The temperature response to the solar cycle is more positive (3.74 K/100 SFU) in the upper mesosphere; its response to ENSO is negative around 80 km and positive around 90-100 km, and its response to the QBO is insignificant at most heights. Composite monthly means of ozone volume mixing ratio show maximum values, around 10 ppmv, during the pre-monsoon and post-monsoon seasons in the middle stratosphere (25-30 km) and in the upper mesosphere (85-95 km). 
Composite monthly means of temperature show a semi-annual variation, with large values (~250-260 K) in equinox months and lower values in solstice months in the upper stratosphere and lower mesosphere (40-55 km), whereas the SAO becomes weaker above 55 km. The semi-annual variation appears again at 80-90 km, with large values in spring equinox and winter months. In the upper mesosphere (90-100 km), low temperatures (~170-190 K) prevail in all months except September, when the temperature is slightly higher. The height profiles of the amplitudes of the semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv, respectively, in the upper mesosphere (80-100 km), whereas the SAO and AO in temperature show maximum values of 5.8 K and 4.6 K in the lower and middle mesosphere around 60-85 km. The phase profiles of both the SAO and AO show downward progressions. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E), and the results obtained will be presented during the meeting.
Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature
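Trend and response estimates of the kind quoted in this abstract come from multiple linear regression of the series against time and proxy indices. A minimal sketch of that fit on synthetic data (the series, the proxy and the coefficients below are invented for illustration, not SABER values):

```python
import numpy as np

def regress(t, y, proxies):
    """Least-squares fit y = a + b*t + sum(c_i * proxy_i); returns the
    coefficients (intercept, linear trend, one response per proxy)."""
    X = np.column_stack([np.ones_like(t), t] + list(proxies))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic 10-year monthly series: -0.05 K/yr trend plus a solar response
t = np.arange(120) / 12.0                         # time in years
f107 = 100 + 50 * np.sin(2 * np.pi * t / 11.0)    # invented solar proxy
temp = 250.0 - 0.05 * t + 0.02 * (f107 - 100)
coef = regress(t, temp, [f107 - 100])
print(coef)  # approximately [250.0, -0.05, 0.02]
```

In a real analysis, QBO and ENSO indices would simply be further columns in the proxy list, and the uncertainty of each coefficient would determine the statistical significance of the quoted responses.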
Procedia PDF Downloads 410
921 Testing the Impact of the Nature of Services Offered on Travel Sites and Links on Traffic Generated: A Longitudinal Survey
Authors: Rania S. Hussein
Abstract:
Background: This study aims to determine the evolution of service provision by Egyptian travel sites and how these services change in terms of their level of sophistication over the period of the study, which is ten years. To the author’s best knowledge, this is the first longitudinal study that focuses on an extended time frame of ten years. Additionally, the study attempts to determine the popularity of these websites through the number of links to them. Links may be viewed as the equivalent of a referral or word of mouth, but in an online context. Both popularity and the nature of the services provided by these websites are used to determine the traffic on these sites. In examining the nature of services provided, the website itself is viewed as an overall service offering that is composed of different travel products and services. Method: This study uses content analysis in the form of a small-scale survey of 30 Egyptian travel agents’ websites to examine whether Egyptian travel websites are static or dynamic in terms of the services that they provide, and whether they provide simple or sophisticated travel services. To determine the level of sophistication of these travel sites, the nature and composition of the products and services offered by these sites were first examined, using a framework adapted from Kotler’s (1997) 'Five levels of a product'. The target group for this study consists of companies that do inbound tourism. Four rounds of data collection were conducted over a period of 10 years: two rounds in 2004 and two rounds in 2014. Data from the travel agents’ sites were collected over a two-week period in each of the four rounds. Besides data on website features, data were also collected on the popularity of these websites through the Alexa service, which reported the traffic rank and number of links of each site. 
Regression analysis was used to test the effect of links and services on websites, as independent variables, on traffic, the dependent variable of this study. Findings: Results indicate that as companies moved from having simple websites with basic travel information to being more interactive, the number of visitors, as shown by traffic, and the popularity of those sites, as shown by the number of links, increased. Results also show that travel companies use the web much more for promotion than for distribution, since most travel agents use it chiefly for information provision. This content analysis study taps an unexplored area and provides useful insights for marketers on how they can generate more traffic to their websites: by focusing on developing distinctive content and on the visibility of their sites, thus enhancing their popularity, or links.
Keywords: levels of a product, popularity, travel, website evolution
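The links-and-services regression can be sketched in a few lines. The figures below are fabricated purely to illustrate the model form, traffic regressed on link count and a service-sophistication score; they are not findings from the study:

```python
import numpy as np

def fit_r2(X, y):
    """OLS fit of traffic on site features; returns coefficients
    (intercept first) and the coefficient of determination R^2."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

# Fabricated figures: traffic grows with inbound links and with a
# 1-5 service-sophistication score (the two independent variables)
links = np.array([10, 40, 80, 120, 200, 260], dtype=float)
services = np.array([1, 1, 2, 3, 4, 5], dtype=float)
traffic = 50 + 2.0 * links + 300 * services
beta, r2 = fit_r2(np.column_stack([links, services]), traffic)
print(round(r2, 3))  # 1.0 for this noiseless illustration
```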
Procedia PDF Downloads 321
920 Structural and Biochemical Characterization of Red and Green Emitting Luciferase Enzymes
Authors: Wael M. Rabeh, Cesar Carrasco-Lopez, Juliana C. Ferreira, Pance Naumov
Abstract:
Bioluminescence, the emission of light from a biological process, is found in various living organisms, including bacteria, fireflies, beetles, fungi and different marine organisms. Luciferase is an enzyme that catalyzes a two-step oxidation of luciferin in the presence of Mg2+ and ATP to produce oxyluciferin, releasing energy in the form of light. The luciferase assay is used in biological research and clinical applications for in vivo imaging, cell proliferation, and protein folding and secretion analysis. The luciferase enzyme consists of two domains: a large N-terminal domain (residues 1-436) connected to a small C-terminal domain (residues 440-544) by a flexible loop that functions as a hinge for opening and closing the active site. The two domains are separated by a large cleft housing the active site, which closes after binding the substrates, luciferin and ATP. Even though all insect luciferases catalyze the same chemical reaction and share 50% to 90% sequence homology and high structural similarity, they emit light of different colors, from green at 560 nm to red at 640 nm. Currently, the majority of structural and biochemical studies have been conducted on green-emitting firefly luciferases. To address the color emission mechanism, we expressed and purified two luciferase enzymes with blue-shifted green and red emission from the indigenous Brazilian species Amydetes fanestratus and Phrixothrix, respectively. The two enzymes naturally emit light of different colors, and they are an excellent system for studying the color-emission mechanism of luciferases, as the currently proposed mechanisms are based on mutagenesis studies. Using a vapor-diffusion method and a high-throughput approach, we crystallized both enzymes and solved their crystal structures, at 1.7 Å and 3.1 Å resolution respectively, using X-ray crystallography. 
The free enzyme adopted two open conformations in the crystallographic unit cell that are different from the previously characterized firefly luciferase. The blue-shifted green luciferase crystallized as a monomer, similar to other luciferases reported in the literature, while the red luciferase crystallized as an octamer and was also purified as an octamer in solution. The octamer conformation is the first of its kind for any insect luciferase and might be related to the red color emission. Structurally designed mutations confirmed the importance of the transition between the open and closed conformations in the fine-tuning of the color, and the characterization of other interesting mutants is underway.
Keywords: bioluminescence, enzymology, structural biology, x-ray crystallography
Procedia PDF Downloads 326
919 Separate Collection System of Recyclables and Biowaste Treatment and Utilization in Metropolitan Area Finland
Authors: Petri Kouvo, Aino Kainulainen, Kimmo Koivunen
Abstract:
The separate collection system for recyclable wastes in the Helsinki region was ranked second best among European capitals. The collection system includes paper, cardboard, glass, metals and biowaste. Residual waste is collected and used in energy production. The collection system, excluding paper, is managed by the Helsinki Region Environmental Services HSY, a public organization owned by four municipalities (Helsinki, Espoo, Kauniainen and Vantaa); paper collection is handled by the producer responsibility scheme. The efficiency of the collection system in the Helsinki region relies on good coverage of door-to-door collection. All properties with 10 or more dwelling units are required to source-separate biowaste and cardboard, which covers about 75% of the population of the area. The obligation is extended to glass and metal in properties with 20 or more dwelling units. Other success factors include public awareness campaigns and a fee system that encourages recycling. As a result of waste management regulations for the source separation of recyclables and biowaste, a recycling rate of nearly 50 percent for household waste has been reached. For households and small and medium-sized enterprises, a fleet of five sorting stations is available, and more than 50 percent of the waste received at sorting stations is utilized as material. The separate collection of plastic packaging in Finland began in 2016 within the producer responsibility scheme. HSY started supplementing the national bring-point system with door-to-door collection, with pilot operations beginning in spring 2016. The results of the plastic packaging pilot project have been encouraging: by the end of 2016, over 3500 apartment buildings had joined the pilot, and more than 1800 tons of plastic packaging had been collected separately. 
In the summer of 2015, a novel partial-flow digestion process combining digestion and tunnel composting was adopted for the management of source-separated household and commercial biowaste. The product gas from the digestion process is converted into heat and electricity in a piston engine and an organic Rankine cycle process with very high overall efficiency. This paper describes the efficient collection system and discusses key success factors, the main obstacles and lessons learned, as well as the partial-flow process for biowaste management.
Keywords: biowaste, HSY, MSW, plastic packages, recycling, separate collection
Procedia PDF Downloads 217
918 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model
Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis
Abstract:
Magnetic navigation of a drug inside the human vessels is a very important concept, since the drug is delivered to the desired area. Consequently, the quantity of drug required to reach therapeutic levels is reduced, while the drug concentration at targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles, where anti-tumor agents are loaded on the surface of the nanoparticles. The magnetic field required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors influencing the efficiency of magnetic nanoparticles in magnetically driven biomedical applications are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered: the magnetic force from the MRI's static main magnetic field, as well as the magnetic field gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces among the aggregated nanoparticles and with the wall, and the Stokes drag force on each particle, are considered; only spherical particles are used in this study. In addition, gravitational and buoyancy forces are included. Finally, the Van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of the particles' motion. 
To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field; at the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles when they are navigated by the magnetic field produced by the MRI device. Under the influence of fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency of 80-90%; on the other hand, a small number of particles stick to the walls and remain there for the rest of the simulation.
Keywords: artery, drug, nanoparticles, navigation
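The optimization loop described in this abstract, sample candidate gradient fields, score each by the resulting particle-to-trajectory distance, and update, can be sketched with a stripped-down evolution strategy. This is a toy illustration only: the paper uses full CMA-ES coupled to OpenFOAM, whereas the sketch below omits covariance adaptation and replaces the CFD evaluation with an invented quadratic objective and target.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.5, -0.3, 0.1])  # invented "optimal" gradient field

def distance_to_target(gradients):
    """Toy objective standing in for the CFD run that scores how far
    the particles end up from the desired trajectory."""
    return float(np.sum((gradients - TARGET) ** 2))

def evolve(objective, x0, sigma=0.5, popsize=12, iters=60):
    """Simplified (mu/mu, lambda) evolution strategy: sample candidate
    gradient settings, keep the best half, recombine, shrink the step."""
    mean = np.asarray(x0, dtype=float)
    for _ in range(iters):
        pop = mean + sigma * rng.standard_normal((popsize, mean.size))
        ranked = pop[np.argsort([objective(p) for p in pop])]
        mean = ranked[: popsize // 2].mean(axis=0)  # recombination
        sigma *= 0.95                               # step-size decay
    return mean

best = evolve(distance_to_target, x0=[0.0, 0.0, 0.0])
print(best)  # converges close to TARGET
```

In the actual platform, each objective evaluation is a full OpenFOAM simulation, which is why an efficient, derivative-free optimizer such as CMA-ES is attractive.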
Procedia PDF Downloads 107
917 Spatial Ecology of an Endangered Amphibian Litoria Raniformis within Modified Tasmanian Landscapes
Authors: Timothy Garvey, Don Driscoll
Abstract:
Within Tasmania, the growling grass frog (Litoria raniformis) has experienced a rapid contraction in distribution. This decline is primarily attributed to habitat loss through landscape modification and improved land drainage. Reductions in seasonal water sources have placed increasing importance on permanent water bodies for reproduction and foraging. Tasmanian agricultural and commercial forestry landscapes often feature small artificial ponds, used for watering livestock and fighting wildfires. Improved knowledge of how L. raniformis may be exploiting anthropogenic ponds is required for better conservation management. We used telemetric tracking to evaluate the spatial ecology of L. raniformis (n = 20) within agricultural and managed forestry sites, with tracking conducted periodically over the breeding season (November/December, January/February, March/April). We investigated (1) potential differences in habitat utilization between agricultural and plantation sites, and (2) the post-breeding dispersal of individual frogs. Frogs remained in close proximity to ponds throughout November/December, with individuals occupying vegetatively depauperate water bodies beginning to disperse by January/February. Dispersing individuals traversed exposed plantation understory and agricultural pasture in order to enter patches of native scrubland. By March/April, all individuals captured at minimally vegetated ponds had retreated to adjacent scrub corridors, whereas animals found at ponds featuring dense riparian vegetation were not recorded dispersing. No difference in behavior was recorded between the sexes. Rising temperatures coincided with increased movement of individuals towards native scrub refugia. The patterns of movement reported in this investigation emphasize the significant contribution of man-made water bodies to the conservation of L. raniformis within modified landscapes. 
The use of natural scrubland as a cyclical retreat between breeding seasons also highlights the importance of the continued preservation of remnant vegetation corridors. Loss of artificial dams or buffering scrubland in heavily altered landscapes could lead to the breakdown of the greater L. raniformis meta-population, further threatening their regional persistence.
Keywords: habitat loss, modified landscapes, spatial ecology, telemetry
Procedia PDF Downloads 117
916 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties
Authors: E. Salem
Abstract:
Combustion has long been used as a means of energy extraction. However, in recent years there has been a further increase in air pollution through pollutants such as nitrogen oxides, acids, etc. In order to address this problem, there is a need to reduce carbon and nitrogen oxides through lean burning, modified combustors and fuel dilution. A numerical investigation has been carried out to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen to the fuel blends, then analysing the flammability limit and the reduction in NOx and CO emissions, and comparing them to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with a length of 40 cm and a diameter of 2.5 cm. The model used a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and varying the equivalence ratios from lean to rich in the fuel blends; the effects on flame temperature, shape and velocity, and on the concentrations of radicals and emissions, were observed. 
It was determined that the reduced mechanisms provided results within an acceptable range. Variation of the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained under lean conditions (equivalence ratios of 0.5-0.9). Adding hydrogen to the combustor fuel blends resulted in reduced CO and NOx emissions and an expanded flammability limit, under the same laminar flow conditions and varying equivalence ratios. NO production is reduced because combustion takes place in a leaner state, which helps in addressing environmental problems.
Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames
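The lean/rich sweep in this abstract is controlled by the equivalence ratio of the fuel/air premixture. For a CH4/H2 blend it follows from simple stoichiometry (2 mol O2 per mol CH4, 0.5 mol O2 per mol H2); a short sketch, with illustrative mole numbers:

```python
def equivalence_ratio(n_ch4, n_h2, n_air):
    """Equivalence ratio of a CH4/H2/air premixture (mole basis).
    Stoichiometric O2 demand: 2 mol per mol CH4, 0.5 mol per mol H2;
    air is taken as 21% O2 by mole."""
    o2_needed = 2.0 * n_ch4 + 0.5 * n_h2
    o2_available = 0.21 * n_air
    return o2_needed / o2_available

# Fuel blend with 10% H2 by mole, burned lean in 10 mol of air
print(round(equivalence_ratio(0.9, 0.1, 10.0), 3))  # 0.881
```

Values below 1 are lean and above 1 are rich; the 0.5-0.9 window cited above for the highest temperatures corresponds to the lean side of this scale.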
Procedia PDF Downloads 114
915 Characteristics of the Rock Glacier Deposits in the Southern Carpathians, Romania
Authors: Petru Urdea
Abstract:
As a distinct part of the mountain system, the rock glacier system is a particular periglacial debris system. Being an open system, it is interconnected with other subsystems, such as the glacial, cliff, rocky slope and talus slope subsystems, which are sources of sediments. One characteristic is that, for long periods of time, it acts as a storage unit for debris and ice, and temporarily for snow and water. In the Southern Carpathians, 306 rock glaciers were identified. The vast majority of these, 74%, are talus rock glaciers, and 26% are debris rock glaciers. The areas of granite and granodiorite host 49% of all the rock glaciers, representing 61% of the area occupied by Southern Carpathian rock glaciers. This lithological dependence also leaves its mark on the character of the deposits, everything bearing the imprint of the particular way the rocks respond to physical weathering processes, all under a periglacial regime. While in the granite and granodiorite domain the blocks are large, of metric order, even 10 m3, among the metamorphic rocks only gneisses can yield similar sizes. Amphibolites, amphibolitic schists, micaschists, sericite-chlorite schists and phyllites break into much smaller blocks, of decimetric order, mostly in the form of slabs. In rock glaciers made up of large blocks, with an open-work structure, the density and volume of voids between the blocks are greater, while smaller debris generates more compact structures with fewer voids. All of this influences the thermal regime, which is associated with a particular type of seasonal air circulation and with the emergence of permafrost formation conditions. The rock glaciers are fed by rock falls, rock avalanches, debris flows and avalanches, so that their structure is heterogeneous, which is also reflected in the detailed topography of the rock glaciers. 
This heterogeneity is also influenced by the spatial arrangement of the rock bodies in the supply area and, an element that cannot be omitted, by the behavior of the rocks during periglacial weathering. The production of small gelifracts leads to the filling of voids and the appearance of more compact structures, with effects on the creep process. In general, surface deposits are coarser and those at depth are finer, their characteristics being detectable by applying geophysical methods. Electrical resistivity tomography (ERT) and ground-penetrating radar (GPR) investigations carried out in the Făgăraş, Retezat and Parâng Mountains, each with a different lithological specificity, allowed some differences to be identified, including the presence of permafrost bodies.
Keywords: rock glacier deposits, structure, lithology, permafrost, Southern Carpathians, Romania
Procedia PDF Downloads 26
914 Developing a Framework for Assessing and Fostering the Sustainability of Manufacturing Companies
Authors: Ilaria Barletta, Mahesh Mani, Björn Johansson
Abstract:
The concept of sustainability encompasses economic, environmental, social and institutional considerations. Sustainable manufacturing (SM) is, therefore, a multi-faceted concept. It broadly implies the development and implementation of technologies, projects and initiatives that are concerned with the life cycle of products and services and are able to bring positive impacts to the environment, company stakeholders and profitability. Because of this, achieving SM-related goals requires a holistic, life-cycle-thinking approach from manufacturing companies. Further, such an approach must rely on a logic of continuous improvement and ease of implementation in order to be effective. Currently, no comprehensively structured framework exists in the academic literature that supports manufacturing companies in identifying the issues and the capabilities that can either hinder or foster sustainability. This scarcity of support extends to difficulties in obtaining quantifiable measurements with which to objectively evaluate solutions and programs and to identify improvement areas within SM for standards conformance. To bridge this gap, this paper proposes a framework for assessing and continuously improving the sustainability of manufacturing companies. The framework addresses strategies and projects for SM and operates in three sequential phases: analysis of the issues, design of solutions and continuous improvement. Interviews, observations and questionnaires are the research methods to be used for the implementation of the framework. Different decision-support methods, either already existing or novel, can be 'plugged into' each of the phases; these methods can assess anything from business capabilities to process maturity. In particular, the authors are working on the development of a sustainable manufacturing maturity model (SMMM) as decision support within the 'continuous improvement' phase. 
The SMMM, inspired by previous maturity models, is made up of four maturity levels ranging from 'non-existing' to 'thriving'. Aggregate findings from the use of the framework should ultimately reveal to managers and CEOs the roadmap for achieving SM goals and identify the maturity of their companies' processes and capabilities. Two cases from two manufacturing companies in Australia are currently being employed to develop and test the framework. The use of this framework will bring two main benefits: it will enable visual, intuitive internal sustainability benchmarking, and it will raise awareness of improvement areas that lead companies towards an increasingly developed SM.
Keywords: life cycle management, continuous improvement, maturity model, sustainable manufacturing
Procedia PDF Downloads 266
913 Preparation of β-Polyvinylidene Fluoride Film for Self-Charging Lithium-Ion Battery
Authors: Nursultan Turdakyn, Alisher Medeubayev, Didar Meiramov, Zhibek Bekezhankyzy, Desmond Adair, Gulnur Kalimuldina
Abstract:
In recent years the development of sustainable energy sources has been receiving extensive research interest due to the ever-growing demand for energy. As an alternative energy source to power small electronic devices, ambient energy harvesting from vibration or human body motion is considered a potential candidate. Despite the enormous progress in battery research over roughly three decades in terms of safety, life cycle and energy density, batteries have not reached the level needed to conveniently power wearable electronic devices such as smartwatches, bands, hearing aids, etc. For this reason, the development of self-charging power units with excellent flexibility and integrated energy harvesting and storage is crucial. Self-powering is a key idea that makes it possible for a system to operate sustainably, and it is gaining acceptance in many fields, including sensor networks, the internet of things (IoT) and implantable in-vivo medical devices. To address this energy harvesting issue, self-powering nanogenerators (NGs) were proposed and have proved highly effective. Usually, sustainable power is delivered through energy harvesting and storage devices by connecting them to a power management circuit; as for energy storage, the Li-ion battery (LIB) is one of the most effective technologies. Driven by an externally applied voltage source, Li ions move between the anode and the cathode, and the electrochemical reactions store the electrical energy as chemical energy. In this paper, we present a simultaneous process of converting mechanical energy into chemical energy, in which an NG and a LIB are combined into an all-in-one power system. The electrospinning method was used as the initial step in the development of such a system with a β-PVDF separator. The obtained film showed promising voltage output at different stress frequencies.
X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) analysis showed a high percentage of the β phase in the PVDF polymer material. Moreover, it was found that the addition of 1 wt.% of BTO (barium titanate) results in higher-quality fibers. When comparing the pure 20 wt.% PVDF solution with the BTO-added one, the latter was more viscous; hence, the sample was electrospun uniformly without any beads. Lastly, to test the sensor application of such a film, a dedicated testing device was developed. With this device, the force of a finger tap can be applied at different frequencies so that electrical signal generation is validated.
Keywords: electrospinning, nanogenerators, piezoelectric PVDF, self-charging li-ion batteries
Procedia PDF Downloads 162
912 Journal Bearing with Controllable Radial Clearance, Design and Analysis
Authors: Majid Rashidi, Shahrbanoo Farkhondeh Biabnavi
Abstract:
The hydrodynamic instability phenomenon in a journal bearing may occur through a reduction in the load carried by the journal bearing, an increase in the journal speed, a change in the lubricant viscosity, or a combination of these factors. The previous research and development work done to overcome the instability issue of journal bearings operating in the hydrodynamic lubrication regime can be categorized as follows: a) actively controlling the bearing sleeve by using a piezo actuator, b) inclusion of strategically located and shaped internal grooves within the inner surface of the bearing sleeve, c) actively controlling the bearing sleeve using an electromagnetic actuator, d) actively and externally pressurizing the lubricant within a journal bearing set, and e) incorporating tilting pads within the inner surface of the bearing sleeve that assume different equilibrium angular positions in response to changes in bearing design parameters such as speed and load. This work presents an innovative design concept for a 'smart journal bearing' set to operate in a stable hydrodynamic lubrication regime, despite variations in bearing speed, load, and lubricant viscosity. The proposed bearing design allows adjusting its radial clearance in an attempt to maintain stable bearing operation under conditions that may cause instability for a bearing with a fixed radial clearance. The design concept allows adjusting the radial clearance in small increments on the order of 0.00254 mm. This is achieved by axially moving two symmetric conical rigid cavities that are in close contact with the conically shaped outer shell of a sleeve bearing. The proposed work includes a 3D model of the bearing that depicts the structural interactions of the bearing components. The 3D model is employed to conduct finite element analyses to simulate the mechanical behavior of the bearing from a structural point of view.
The concept of controlling the radial clearance, as presented in this work, is original and has not been proposed or discussed in previous research. A typical journal bearing was analyzed under a set of design parameters, namely r = 1.27 cm (journal radius), c = 0.0254 mm (radial clearance), L = 1.27 cm (bearing length), W = 445 N (bearing load), and μ = 0.028 Pa·s (lubricant viscosity). A shaft speed of 3600 r.p.m. was considered, and the mass supported by the bearing, m, was set to 4.38 kg. The Sommerfeld number associated with the above bearing design parameters turns out to be S = 0.3. This combination resulted in stable bearing operation. Subsequently, the speed was postulated to increase from 3600 r.p.m. to 7200 r.p.m.; the bearing was found to be unstable at the new, increased speed. In order to regain stability, the radial clearance was increased from c = 0.0254 mm to 0.0358 mm. The change in the radial clearance was shown to bring the bearing back to a stable operating condition.
Keywords: adjustable clearance, bearing, hydrodynamic, instability, journal
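The stability reasoning above turns on the Sommerfeld number. A minimal sketch, assuming the textbook definition S = (r/c)²·μN/P with projected bearing pressure P = W/(L·D) (consistent with the parameter set quoted in the abstract), reproduces the reported S ≈ 0.3 and shows why widening the clearance restores it after the speed doubles:

```python
def sommerfeld(r, c, L, W, mu, N):
    """Sommerfeld number S = (r/c)^2 * mu * N / P, with projected
    bearing pressure P = W / (L * 2r). SI units; N in rev/s."""
    P = W / (L * 2.0 * r)
    return (r / c) ** 2 * mu * N / P

# Parameters quoted in the abstract, converted to SI
base = dict(r=0.0127, L=0.0127, W=445.0, mu=0.028)
S1 = sommerfeld(c=0.0254e-3, N=3600 / 60, **base)   # original design
S2 = sommerfeld(c=0.0254e-3, N=7200 / 60, **base)   # speed doubled
S3 = sommerfeld(c=0.0358e-3, N=7200 / 60, **base)   # clearance widened
print(round(S1, 2), round(S2, 2), round(S3, 2))     # 0.3 0.61 0.31
```

Widening the clearance from 0.0254 mm to 0.0358 mm brings the Sommerfeld number at 7200 r.p.m. back near the original 0.3, consistent with the stability regained in the abstract.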
Procedia PDF Downloads 284
911 The Influences of Facies and Fine Kaolinite Formation Migration on Sandstone's Reservoir Quality, Sarir Formation, Sirt Basin Libya
Authors: Faraj M. Elkhatri
Abstract:
The spatial and temporal distribution of diagenetic alterations impacts the reservoir quality of the Sarir Formation (present-day burial depth of about 9000 feet). Depositional facies and diagenetic alterations are the main controls on the reservoir quality of the Sarir Formation, Sirt Basin, Libya, based on lithology and grain size as well as authigenic clay mineral types and their distributions. Petrographic investigation of five sandstone wells in the study area concentrated on the main rock components and the parameters that may impact the reservoirs. The main authigenic clay minerals are kaolinite and dickite; these findings have been confirmed by XRD and clay-fraction analysis. Kaolinite and dickite are extensively present in all wells in high amounts, together with traces of detrital smectite and lesser amounts of illitized mud-matrix observed in SEM images. Thin layers of clay present as clay-grain coatings at local depths are interpreted as remains of dissolved clay matrix partly transformed into kaolinite adjacent to and towards the pore throats. This may affect most of the pore throats of this sandstone, which are otherwise open and relatively clean, with some fine material formed in occluded pores. This material is identified by EDS analysis as collections of not only kaolinite booklets but also small disaggregated kaolinite platelets derived from the breakup of larger kaolinite booklets. These patches of kaolinite not only fill pores but also coat some of the surrounding framework grains. Quartz grains, often enlarged by authigenic quartz overgrowths, partially occlude and reduce porosity. Scanning electron microscopy (SEM) was conducted on the post-test samples to examine any mud-filtrate particles that may be present in the pore throats.
Semi-quantitative elemental data on selected minerals observed during the SEM study were obtained through the use of an energy-dispersive spectroscopy (EDS) unit. The samples showed mostly clean, open pore throats with limited occlusion by kaolinite. Very fine-grained elemental combinations (Si/Al/Na/Cl, Si/Al/Ca/Cl/Ti, and Qtz/Ti) were identified and confirmed by EDS analysis, as was the identification of the fine-grained disaggregated material as mainly kaolinite throughout the study area.
Keywords: pore throat, fine migration, formation damage, solids plugging, porosity loss
Procedia PDF Downloads 153
910 The Effect of Technology on Hospitality, Tourism Marketing and Management
Authors: Reda Moussa Massoud Embark
Abstract:
Tourism and hospitality graduate development is key to the future state of the tourism and hospitality industry. Meanwhile, Information and Communication Technology is increasingly becoming the engine for improving productivity and business opportunities in the travel and hospitality industry. Given the challenges and fierce global competition that have arisen in today's hospitality industry, it is important to shed light on strategic management. In addition, five-star hotels play a key role in supporting the tourism industry and investments in Egypt. Therefore, this study aims to examine the extent to which strategic management practices are implemented in five-star hotels in Egypt and to examine the differences between resort and inner-city hotels in terms of the implementation of strategic management processes; the influence of different hotel types on the implementation of the strategic management process is examined. A simple random sampling technique is used to select a sample of the target population, including hotels in the cities of Sharm el-Sheikh, Cairo and Hurghada. The data collection tool used in this study is an interviewer-administered questionnaire. Finally, combining the study results with the literature review allowed a set of recommendations to be presented to hoteliers in the area of strategic management practices. Education and training in tourism and hospitality must take these changes into account in order to improve the ability of future managers to use a variety of tools and strategies to make their organizations more efficient and competitive. The study therefore also aims to examine the types and effectiveness of training courses offered by tourism and hospitality departments in Egypt and to assess the importance of these training courses from the perspective of the graduate. The survey is aimed at graduates who completed three different majors in tourism and hospitality in the past decade.
Findings discuss the nature, level and effectiveness of the training provided at these faculties and the extent to which the training programs were valued by graduates working in different fields, and finally recommend specific practices to improve learning effectiveness and increase perceived employee benefits in the tourism and hospitality industry.
Keywords: marketing channels, crisis, hotel, international, tour operators, online travel agencies, e-tourism, hotel websites, tourism, web-tourism, strategic management, strategic tools, five-star hotels, resorts, downtown hotels, Egyptian markets
Procedia PDF Downloads 66
909 Annexing the Strength of Information and Communication Technology (ICT) for Real-time TB Reporting Using TB Situation Room (TSR) in Nigeria: Kano State Experience
Authors: Ibrahim Umar, Ashiru Rajab, Sumayya Chindo, Emmanuel Olashore
Abstract:
INTRODUCTION: Kano is the most populous state in Nigeria and one of the two states with the highest TB burden in the country. The state notifies an average of more than 8,000 TB cases quarterly and had the highest yearly notification of all the states in Nigeria from 2020 to 2022. The contribution of the state TB program to the national TB notification varied between 9% and 10% quarterly from the first quarter of 2022 to the second quarter of 2023. The Kano State TB Situation Room is an innovative platform for timely data collection, collation and analysis for informed decision-making in the health system. During the second National TB Testing Week (NTBTW) in 2023, the Kano TB program aimed at early TB detection, prevention and treatment. The State TB Situation Room provided an avenue for state-level coordination and surveillance through real-time data reporting, review, analysis and use during the NTBTW. OBJECTIVES: To assess the role of an innovative information and communication technology platform for real-time TB reporting during the second National TB Testing Week in Nigeria, 2023, and to showcase the NTBTW data cascade analysis using the TSR as an innovative ICT platform. METHODOLOGY: The State TB program deployed a real-time virtual dashboard for NTBTW reporting, analysis and feedback. A data room team was set up that received real-time data using a Google link. The data received were analyzed using the Power BI analytic tool with a statistical alpha level of significance of <0.05. RESULTS: At the end of the week-long activity, and using the real-time dashboard with onsite mentorship of the field workers, the state TB program screened a total of 52,054 of the 72,112 individuals eligible for screening (72% screening rate). A total of 9,910 presumptive TB clients were identified and evaluated for TB, leading to the diagnosis of 445 TB patients (5% yield from presumptives) and the placement of 435 of them on treatment (98% enrolment).
CONCLUSION: The TB Situation Room (TSR) has been a great asset to the Kano State TB Control Program in meeting the growing demand for timely data reporting in TB and other global health responses. The use of real-time surveillance data during the 2023 NTBTW has in no small measure improved the TB response and feedback in Kano State. Scaling up this intervention to other disease areas, states and nations is a positive step in the right direction towards global TB eradication.
Keywords: tuberculosis (tb), national tb testing week (ntbtw), tb situation room (tsr), information communication technology (ict)
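The cascade indicators in the results section follow directly from the raw counts; a minimal sketch using only the figures quoted in the abstract:

```python
# Counts reported for the 2023 NTBTW in Kano State
eligible, screened = 72_112, 52_054
presumptive, diagnosed, enrolled = 9_910, 445, 435

screening_rate = screened / eligible      # reported as 72%
yield_rate = diagnosed / presumptive      # reported as ~5% (4.5% unrounded)
enrolment_rate = enrolled / diagnosed     # reported as 98%
print(f"{screening_rate:.0%}, {yield_rate:.1%}, {enrolment_rate:.0%}")
# 72%, 4.5%, 98%
```

The 5% yield quoted in the abstract is the rounded value of 445/9,910 ≈ 4.5%; the other two percentages match the counts exactly after rounding.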
Procedia PDF Downloads 71
908 Applicability and Reusability of Fly Ash and Base Treated Fly Ash for Adsorption of Catechol from Aqueous Solution: Equilibrium, Kinetics, Thermodynamics and Modeling
Authors: S. Agarwal, A. Rani
Abstract:
Catechol is a natural polyphenolic compound that widely exists in higher plants such as teas, vegetables, fruits, tobaccos, and some traditional Chinese medicines. Fly ash-based zeolites are capable of adsorbing a wide range of pollutants, but the process of zeolite synthesis is time-consuming and requires technical setups by industry, and the market cost of zeolites is quite high, restricting their use by small-scale industries for the removal of phenolic compounds. The present research proposes a simple alkaline treatment of FA to produce an effective adsorbent for catechol removal from wastewater. The effects of experimental parameters such as pH, temperature, initial concentration and adsorbent dose on the removal of catechol were studied in a batch reactor. For this purpose, the adsorbent materials were mixed with aqueous catechol solutions with initial concentrations ranging from 50 to 200 mg/L and then shaken continuously in a thermostatic orbital incubator shaker at 30 ± 0.1 °C for 24 h. The samples were withdrawn from the shaker at predetermined time intervals and separated by centrifugation (MBL-20 centrifuge) at 2000 rpm for 4 min to yield a clear supernatant for analysis of the equilibrium solute concentrations. The concentrations were measured with a double-beam UV/visible spectrophotometer (model Spectrscan UV 2600/02) at a wavelength of 275 nm for catechol. In the present study, the use of a low-cost adsorbent (BTFA) derived from coal fly ash (FA) has been investigated as a substitute for expensive methods for the sequestration of catechol. The FA and BTFA adsorbents were well characterized by XRF, FE-SEM with EDX, FTIR, and surface area and porosity measurements, which establish the chemical constituents, functional groups and morphology of the adsorbents. The catechol adsorption capacities of the synthesized BTFA and the native material were determined. The adsorption increased slightly with an increase in pH value.
The monolayer adsorption capacities of FA and BTFA for catechol were 100 mg g⁻¹ and 333.33 mg g⁻¹, respectively, and maximum adsorption occurred within 60 minutes for both adsorbents used in this test. The equilibrium data are best fitted by the Freundlich isotherm on the basis of error analysis (RMSE, SSE, and χ²). Adsorption was found to be spontaneous and exothermic on the basis of the thermodynamic parameters (ΔG°, ΔS°, and ΔH°). The pseudo-second-order kinetic model better fitted the data for both FA and BTFA. BTFA showed larger adsorption capacity, higher separation selectivity, and better recyclability than FA. These findings indicate that BTFA could be employed as an effective and inexpensive adsorbent for the removal of catechol from wastewater.
Keywords: catechol, fly ash, isotherms, kinetics, thermodynamic parameters
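The Freundlich fit and error analysis described above can be sketched in a few lines. The equilibrium data below are hypothetical placeholders, not the study's measurements; the linearized form log qe = log Kf + (1/n) log Ce is a standard way to estimate the Freundlich constants before computing the SSE, RMSE and χ² error statistics:

```python
import math

def fit_freundlich(Ce, qe):
    """Least-squares fit of log qe = log Kf + (1/n) log Ce
    (linearized Freundlich isotherm). Returns (Kf, n)."""
    x = [math.log10(c) for c in Ce]
    y = [math.log10(q) for q in qe]
    m = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)  # = 1/n
    intercept = (sy - slope * sx) / m                  # = log10 Kf
    return 10 ** intercept, 1.0 / slope

def error_metrics(q_obs, q_pred):
    """SSE, RMSE and chi-square used to judge isotherm fits."""
    sse = sum((o - p) ** 2 for o, p in zip(q_obs, q_pred))
    rmse = math.sqrt(sse / len(q_obs))
    chi2 = sum((o - p) ** 2 / p for o, p in zip(q_obs, q_pred))
    return sse, rmse, chi2

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) --
# placeholders for illustration, NOT the study's measurements
Ce = [5.0, 12.0, 30.0, 65.0]
qe = [40.0, 70.0, 120.0, 190.0]
Kf, n = fit_freundlich(Ce, qe)
qe_pred = [Kf * c ** (1.0 / n) for c in Ce]
sse, rmse, chi2 = error_metrics(qe, qe_pred)
```

In practice the same error metrics would be computed for each candidate isotherm (Langmuir, Freundlich, etc.) and the model with the smallest errors retained, which is how the Freundlich model was selected in the abstract.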
Procedia PDF Downloads 125
907 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon
Authors: Jeffrey A. Amelse
Abstract:
Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting renewable energy, to focus on their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact; it will just cost money, scale-up is a huge challenge, and it will not be a complete solution. CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet, bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from continued use of fossil fuels; they will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂, but trees are a temporary solution: when they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering the biomass carbon. Sequestering tree leaves is proposed as a solution.
Unlike wood, leaves have a short Carbon Cycle time constant: they renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves could get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, to discourage instead of encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂; the latter can take hundreds to thousands of years. The first step in anaerobic decomposition is hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors of anaerobic bacteria.
Keywords: carbon dioxide, net zero, sequestration, biomass, leaves
Procedia PDF Downloads 129
906 System Transformation: Transitioning towards Low Carbon, Resource Efficient, and Circular Economy for Global Sustainability
Authors: Anthony Halog
Abstract:
In the coming decades, the world that we know today will be drastically transformed. Population and economic growth, particularly in developing countries, are radically changing the demand for food and natural resources. Due to the transformations caused by these megatrends, especially economic growth, which is rapidly expanding the middle class and changing consumption patterns worldwide, demand for food, water, energy and other resources is expected to increase by approximately 40 percent in the coming decades. To fulfill this demand in a sustainable and efficient manner while avoiding food and water scarcity as well as environmental catastrophes in the near future, some industries, particularly those involved in food and energy production, have to drastically change their current production systems towards a circular and green economy. In Australia, the agri-food industry plays a very important role in the scenario described above. It is one of the major food exporters in the world, supplying fast-growing international markets in Asia and the Middle East. Though the Australian food supply chains are economically and technologically developed, they have been facing enduring challenges regarding international competitiveness and the environmental burdens caused by their production processes. An integrated framework for sustainability assessment is needed to precisely identify inefficiencies and environmental impacts created during food production processes. This research proposes a combination of industrial ecology and systems science based methods and tools, intending to develop a novel and useful methodological framework for life cycle sustainability analysis of the agri-food industry. The presentation highlights the circular economy paradigm, aiming to implement sustainable industrial processes to transform the current industrial model of agri-food supply chains.
The results are expected to support government policy makers, business decision makers and other stakeholders involved in agri-food-energy production system in pursuit of green and circular economy. The framework will assist future life cycle and integrated sustainability analysis and eco-redesign of food and other industrial systems.Keywords: circular economy, eco-efficiency, agri-food systems, green economy, life cycle sustainability assessment
Procedia PDF Downloads 281