Search results for: forming tools
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4728

1398 Effect of Education Based on the Health Belief Model on Preventive Behaviors of Exposure to Secondhand Smoke among Women

Authors: Arezoo Fallahi

Abstract:

Introduction: Exposure to second-hand smoke is an important global health problem that threatens the health of people, especially children and women. The aim of this study was to determine the effect of education based on the Health Belief Model on preventive behaviors against exposure to second-hand smoke in women. Materials and Methods: This experimental study was performed in 2022 in Sanandaj, west of Iran. Seventy-four people were selected by simple random sampling and divided into an intervention group (37 people) and a control group (37 people). Data collection tools included demographic characteristics and a second-hand smoke exposure questionnaire based on the Health Belief Model. The training in the intervention group was conducted in three one-hour sessions in the comprehensive health service centers in the form of lectures, pamphlets, and group discussions. Data were analyzed using SPSS software version 21 and statistical tests such as correlation, paired t-test, and independent t-test. Results: The intervention and control groups were homogeneous before education and similar in terms of mean scores on the Health Belief Model. However, after the educational intervention, the scores increased, including mean perceived sensitivity (from 17.62±2.86 to 19.75±1.23), perceived severity (from 28.40±4.45 to 31.64±2), perceived benefits (from 27.27±4.89 to 31.94±2.17), practice (from 32.64±4.68 to 36.91±2.32), perceived barriers (from 26.62±5.16 to 31.29±3.34), external cues to action (from 17.70±3.99 to 22.89±1.67), internal cues to action (from 16.59±2.95 to 18.75±1.03), and self-efficacy (from 19.83±3.99 to 23.37±1.43) (P < 0.05). Conclusion: The educational intervention designed based on the Health Belief Model was effective in promoting women's preventive behaviors against exposure to second-hand smoke.

Keywords: education, women, exposure to secondhand smoke, health belief model

Procedia PDF Downloads 51
1397 Towards Resilient and Sustainable Integrated Agro-ecosystems Through Appropriate Climate-smart Farming Practices in Morocco Rainfed Agriculture

Authors: Abdelali Laamari, Morad Faiz, Ali Amamou and Mohamed Elkoudrim

Abstract:

This research seeks to develop multi-disciplinary, multi-criteria, and multi-institutional approaches that consider the three main pillars of sustainability (environmental, economic, and social) in decision making about the adoption of improved technologies in the targeted case-study region in Morocco. The study aims to combine sound research and innovation with extensive skills in applied research and policy evaluation. The intention is to provide new, simple, and transferable tools and agricultural practices that will enable the uptake of sustainability and improve the resilience of agro-ecosystems. The study will establish the state of the art of climate change impacts and identify the core bottlenecks and the impact of climate change on crop and livestock productivity in the targeted value chains in Morocco. Studies conducted during 2021-2022 showed that most farmers have been using direct seeding since 2010 and that the system can be improved by adopting new fertilizers and wheat varieties. The alley-cropping technology is based on Atriplex plants or olive trees. The introduction of new varieties of oat and quinoa has improved biomass and grain production in dry seasons. The research also targets other issues, such as social enterprises, to diversify women's income sources and create new job opportunities through diversified end uses of durum wheat and barley grains. Women's local knowledge of the different end uses of durum and barley grains is rich and can improve their added value when the grains are processed into couscous, pasta, or other products.

Keywords: agriculture, climate, production system, integration

Procedia PDF Downloads 63
1396 Bamboo: A Trendy and New Alternative to Wood

Authors: R. T. Aggangan, R. J. Cabangon

Abstract:

Bamboo has received worldwide attention over the last 20 to 30 years due to its numerous uses, and it is regarded as the closest material that can substitute for wood. In the domestic market, high-quality bamboo products are sold in high-end markets, while lower-quality products are generally sold to medium- and low-income consumers. The global market in 2006 stood at about 7 billion US dollars and was projected to increase to US$17 billion between 2015 and 2020. The Philippines has been actively producing and processing bamboo products for the furniture, handicraft, and construction industries. It was, however, in 2010 that the Philippine bamboo industry was formalized by virtue of Executive Order 879, which made Philippine bamboo industry development a priority program of the government and created the Philippine Bamboo Industry Development Council (PBIDC) to provide the overall policy and program directions for all stakeholders. At present, the most extensive use of bamboo is the manufacture of engineered bamboo for school desks for all public schools, as mandated by EO 879. Engineered bamboo products are also used in high-end construction and furniture as well as in handicrafts. Development of cheap adhesives, preservatives, and finishing chemicals from local plant species; development of economical methods of drying and preservation; product development and processing of lesser-used bamboo species; and development of processing tools, equipment, and machinery are the strategies that will be employed to reduce prices and mainstream engineered bamboo products in the local and foreign markets. In addition, processing wastes from bamboo can be recycled into fuel products such as charcoal, which are already in use. The more exciting possibility, however, is the production of bamboo pellets as a substitute for wood pellets for heating, cooking, and generating electricity.

Keywords: bamboo charcoal and light distillates, engineered bamboo, furniture and handicraft industries, housing and construction, pellets

Procedia PDF Downloads 232
1395 Inter-Annual Variations of Sea Surface Temperature in the Arabian Sea

Authors: K. S. Sreejith, C. Shaji

Abstract:

Though both the Arabian Sea and its counterpart, the Bay of Bengal, are forced primarily by the semi-annually reversing monsoons, the spatio-temporal variation of surface waters is much stronger in the Arabian Sea than in the Bay of Bengal. This study focuses on the inter-annual variability of Sea Surface Temperature (SST) in the Arabian Sea by analysing the ERSST dataset, which covers 152 years of SST (January 1854 to December 2002) based on ICOADS in-situ observations. To capture the dominant SST oscillations and to understand inter-annual SST variations in various local regions of the Arabian Sea, wavelet analysis was performed on this long SST time series. This tool is advantageous over other signal-analysis tools such as Fourier analysis because it unfolds a time series (signal) in both the frequency and time domains, making it easier to determine the dominant modes of variability and how those modes vary in time. The analysis revealed that pentadal SST oscillations predominate in most of the analysed local regions of the Arabian Sea. From the time information of the wavelet analysis, it was interpreted that cold and warm events of large amplitude occurred during the periods 1870-1890, 1890-1910, 1930-1950, 1980-1990, and 1990-2005. SST oscillations with periods of ~2-4 years were found to be significant in the central and eastern regions of the Arabian Sea, indicating that inter-annual SST variation in the Indian Ocean is affected by El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events.
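The kind of scale-resolved analysis described above can be sketched with a stdlib-only Morlet wavelet power estimate. The synthetic monthly SST series, the w0=6 mother wavelet, and the scale-to-period conversion below are assumptions for illustration, not the study's actual processing chain:

```python
import math

def morlet_power(x, period, w0=6.0):
    """Mean wavelet power of series x (unit time step) at one Fourier period,
    using a Morlet mother wavelet truncated at +/-4 scales. Stdlib-only; the
    scale-to-period conversion uses the standard w0-based relation."""
    s = period * (w0 + math.sqrt(2.0 + w0 * w0)) / (4.0 * math.pi)
    half = int(4 * s)
    norm = math.pi ** -0.25 / math.sqrt(s)
    total, count = 0.0, 0
    for t in range(half, len(x) - half):
        re = im = 0.0
        for k in range(-half, half + 1):
            u = k / s
            g = norm * math.exp(-0.5 * u * u)  # Gaussian envelope
            re += x[t + k] * g * math.cos(w0 * u)
            im -= x[t + k] * g * math.sin(w0 * u)
        total += re * re + im * im
        count += 1
    return total / count

# synthetic monthly SST anomaly: a pentadal (60-month) oscillation plus a
# weaker annual cycle
sst = [math.sin(2 * math.pi * t / 60) + 0.3 * math.sin(2 * math.pi * t / 12)
       for t in range(600)]
p60, p12 = morlet_power(sst, 60), morlet_power(sst, 12)
print(p60 > p12)  # True: the pentadal band carries the dominant power
```

Comparing the power at several periods in this way is what lets the wavelet approach rank pentadal against ~2-4 year oscillations at each location.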

Keywords: Arabian Sea, ICOADS, inter-annual variation, pentadal oscillation, SST, wavelet analysis

Procedia PDF Downloads 265
1394 Looking beyond Lynch's Image of a City

Authors: Sandhya Rao

Abstract:

Kevin Lynch's theory of imageability lets one explore a city in terms of five elements: nodes, paths, edges, landmarks, and districts. What happens when we try to record the same data in an Indian context? What happens when we apply the same theory of imageability to the complex, shifting urban patterns of Indian cities, and how can we as urban designers demonstrate our role in the image-building process of these cities? The organizational patterns formed through mental images of an Indian city are often diverse and intangible. They are also multi-layered and temporary in terms of the spirit of the place. The pattern of images formed is loaded with associative meaning and intrinsically linked with the history and socio-cultural dominance of the place. The embedded memory of a place in one's mind often plays an even more important role while formulating these images. Thus, while deriving an image of a city, one is often confused or finds the result chaotic. The images formed are, due to their complexity, difficult to represent using a single medium. Under such a scenario, it is difficult both to derive an output of the image constructed and to make design interventions that enhance the legibility of a place. However, there can be a combination of tools and methods that allows one to record the key elements of a place through time, space, and one's interaction with the place. There also has to be a clear understanding of the participant groups of a place and their time and period of engagement with it. How to translate the result obtained into a design intervention is the main aim of the research. Could a multi-faceted cognitive mapping be an answer to this, or could it be a very transient mapping method which changes over time, place, and person? How does context influence the process of image building in one's mind? These are the key questions that this research aims to answer.

Keywords: imageability, organizational patterns, legibility, cognitive mapping

Procedia PDF Downloads 297
1393 Risk Tolerance in Youth With Emerging Mood Disorders

Authors: Ange Weinrabe, James Tran, Ian B. Hickie

Abstract:

Risk-taking behaviour is common during youth. Between adolescence and early adulthood, young people (aged 15-25 years) are more vulnerable to mood disorders such as anxiety and depression. What impact does an emerging mood disorder have on decision-making at critical decision points in young people's lives? In this article, we explore the impact of risk and ambiguity on youth decision-making in a clinical setting using a well-known economic experiment. At two time points, separated by six to eight weeks, we measured risky and ambiguous choices concurrently with findings from three psychological questionnaires, the 10-item Kessler Psychological Distress Scale (K10), the 17-item Quick Inventory of Depressive Symptomatology Adolescent Version (QIDS-A17), and the 12-item Somatic and Psychological Health Report (SPHERE-12), for young help-seekers aged 16-25 (n=30, mean age 19.22 years, 19 males). When first arriving for care, 50% (n=15) of participants experienced severe anxiety (K10 ≥ 30) and were severely depressed (QIDS-A17 ≥ 16). In Session 2, taking attrition (n=5) into account, 44% (n=11) remained severe across the full battery of questionnaires. Multiple regression analyses of the pooled sample of observations (N=55) across both sessions showed that participants who scored as severely anxious avoided making risky decisions. We suggest there is a weak, marginally significant (p=0.09) relation between risk aversion and severe anxiety scores as measured by the K10. Our findings may support the use of novel tools for evaluating youth experiencing an emerging mood disorder and the cognitive capacities influencing their decision-making.

Keywords: anxiety, decision-making, risk, adolescence

Procedia PDF Downloads 99
1392 A Case Study of Business Analytic Use in European Football: Analysis and Implications

Authors: M. C. Schloesser

Abstract:

The purpose of this paper is to explore the use and impact of business analytics in European football. Despite good evidence from other major sports leagues, research on this topic in Europe is currently very scarce. This research relies on expert interviews on the use and objectives of business analytics. Along with revenue data from Manchester City FC over 16 seasons, from 2004/05 to 2019/20, we conducted a time series analysis to detect structural breakpoints in the different revenue streams, i.e., sponsorship and ticketing, after analytical tools were implemented. We find not only that business analytics has indeed been applied at Manchester City FC, with revenue increase as the main objective of its utilization, but also that business analytics is indeed a good means of increasing revenues if applied sufficiently. We can thereby support findings from other sports leagues. Consequently, professional sports organizations are advised to apply business analytics if they aim to increase revenues. This research has shown that analytical practices do, in fact, support revenue growth and help organizations work more efficiently. As knowledge of analytical practices is highly confidential and not publicly available, we had to select one club as a case study, which can be considered a research limitation; other practitioners should explore other clubs or leagues. Further, there are other factors that can lead to increased revenues that need to be considered. Additionally, sports organizations need resources to be able to apply and utilize business analytics, so the findings might only apply to the top teams of the European football leagues. Nonetheless, this paper combines insights and results on the usage, objectives, and impact of business analytics in European professional football and thereby fills a current research gap.
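A structural break in a revenue series of the kind analysed here can be located with a simple least-squares split. This is a minimal stdlib sketch under stated assumptions: the two-segment constant-mean model and the illustrative revenue numbers are ours, not the study's actual method or data:

```python
def find_break(series):
    """Index that best splits a series into two constant-mean segments by
    minimizing total squared error - a minimal structural-break detector."""
    best_i, best_sse = None, float("inf")
    for i in range(2, len(series) - 1):
        left, right = series[:i], series[i:]
        m_l, m_r = sum(left) / len(left), sum(right) / len(right)
        sse = sum((v - m_l) ** 2 for v in left) + sum((v - m_r) ** 2 for v in right)
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i

# illustrative seasonal revenue series with a level shift after season 8
revenue = [100, 103, 98, 101, 99, 102, 100, 97,
           150, 154, 149, 152, 151, 148, 153, 150]
print(find_break(revenue))  # 8: seasons 0-7 vs seasons 8-15
```

The detected index would then be compared against the season in which the analytical tools were introduced.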

Keywords: business analytics, expert interviews, revenue management, time series analysis

Procedia PDF Downloads 58
1391 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland

Authors: Raptis Sotirios

Abstract:

Health and social care (HSc) services planning and scheduling face unprecedented challenges due to pandemic pressure and also suffer from unplanned spending, negatively impacted by the global financial crisis. Data-driven methods can help improve policies and plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses packing services together using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (class) using ML methods such as CART, random forests (RF), and logistic regression (LGR). The Chi-squared and Student's t significance tests are used on data spanning 39 years for which HSc services data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that take as the null hypothesis that the target service's demands are statistically dependent on other demands; this linkage can be confirmed or rejected by the data. Complementarily, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. Statistical tests confirm the ML couplings, making the prediction statistically meaningful and showing that a target service can be matched reliably to other services, while ML shows that these relationships can also be linear. Zero padding was used for missing years' records and illustrated such relationships better, both for limited-year groups and over the entire span: full-span visualizations show long-term behavior, while limited-year groups show how well patient numbers relate over short periods or change over time.
The prediction performance of the associations is measured using Receiver Operating Characteristic (ROC) AUC and ACC metrics as well as the Chi-squared and Student's t statistical tests. Co-plots and comparison tables for RF, CART, and LGR, as well as p-values and Information Exchange (IE), are provided, showing the specific behavior of the ML methods and of the statistical tests under different learning ratios. The impact of k-NN, cross-correlation, and C-Means initial groupings is also studied over limited years and over the entire span. CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC=0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding, data irregularities, or outliers. On average, 3 linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and treatment of old people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be packed well into plans of limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
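The demand-coupling step can be illustrated with a minimal chi-squared test of independence on binned (high/low) demand classes. This stdlib-only sketch uses synthetic data and the conventional reading in which a small p-value rejects independence, so it illustrates the statistical machinery rather than the paper's actual pipeline:

```python
import math
import random

def chi2_independence(xs, ys):
    """Chi-squared test of independence for two binary (low/high) demand
    classes. Returns (statistic, p_value) for the 2x2 table (df = 1)."""
    n = [[0, 0], [0, 0]]
    for a, b in zip(xs, ys):
        n[a][b] += 1
    total = len(xs)
    row = [sum(r) for r in n]
    col = [n[0][c] + n[1][c] for c in range(2)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total  # expected count under independence
            stat += (n[i][j] - expected) ** 2 / expected
    p = math.erfc(math.sqrt(stat / 2.0))  # chi2 survival function for df = 1
    return stat, p

random.seed(0)
# synthetic target-service demand class and a strongly coupled service demand
target = [random.randint(0, 1) for _ in range(200)]
coupled = [t if random.random() < 0.9 else 1 - t for t in target]
stat, p = chi2_independence(target, coupled)
print(p < 0.05)  # True: dependence detected, so the two services could be grouped
```

Once such an association is established, a linear model (LGR in the paper's terms) can be fitted over the associated demands to predict the target class.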

Keywords: class, cohorts, data frames, grouping, prediction, probability, services

Procedia PDF Downloads 213
1390 Challenges That People with Autism and Caregivers Face in Public Environments

Authors: Andrei Pomana, Graham Brewer

Abstract:

Autism is a lifelong developmental disorder that affects verbal and non-verbal communication, behaviour, and sensory processing. As a result, people on the autism spectrum have a difficult time when confronted with environments that have high levels of sensory stimulation, which is often compounded by an inability to properly communicate their wants and needs to caregivers. The capacity of people with autism to integrate depends on their ability to at least tolerate highly stimulating public environments for short periods of time. The overall challenges that people on the spectrum and their caregivers face need to be established in order to properly create and assess methods to mitigate the effects of high-stimulus public spaces. This paper aims to identify the challenges that people on the autism spectrum and their caregivers face in typical public environments. Nine experienced autism therapists participated in a semi-structured interview regarding these challenges. The qualitative data show that the unpredictability of events and the high sensory stimulation present in public environments, especially auditory stimulation, are the two biggest contributors to the difficulties that people on the spectrum face. If the stimuli are not removed within a short period of time, uncontrollable behaviours or 'meltdowns' can occur, which leave the person incapacitated and unable to respond to any outside input. Possible solutions to increase integration in public spaces for people with autism revolve around removing unwanted sensory stimuli, creating personalized barriers for certain stimuli, equipping people with autism with better tools to communicate their needs or to orient themselves to a safe location, and providing a predictable pattern of events to prepare individuals for tasks ahead of time.

Keywords: autism, built environment, meltdown, public environment, sensory processing disorders

Procedia PDF Downloads 140
1389 Early Detection of Neuropathy in Leprosy: Comparing Clinical Tests with Nerve Conduction Study

Authors: Suchana Marahatta, Sabina Bhattarai, Bishnu Hari Paudel, Dilip Thakur

Abstract:

Background: Every year thousands of patients develop nerve damage and disabilities as a result of leprosy, which can be prevented by early detection and treatment. Early detection and treatment of nerve function impairment is therefore of paramount importance in leprosy. Objectives: To assess the electrophysiological pattern of the peripheral nerves in leprosy patients and to compare it with clinical assessment tools. Materials and Methods: In this comparative cross-sectional study, 74 newly diagnosed leprosy patients without reaction were enrolled. They underwent thorough evaluation for peripheral nerve function impairment using clinical tests [i.e., nerve palpation (NP), monofilament (MF) testing, and voluntary muscle testing (VMT)] and nerve conduction study (NCS). Clinical findings were compared with those of NCS using SPSS version 11.5. Results: NCS was impaired in 43.24% of leprosy patients at baseline. Sensory NCS was impaired in more patients (32.4%) than motor NCS (20.3%). NP, MF, and VMT were impaired in 58.1%, 25.7%, and 9.4% of the patients, respectively. The maximum concordance of monofilament testing and sensory NCS was found for the sural nerve (14.7%). Likewise, the concordance of motor NP and motor NCS was maximal for the ulnar nerve (14.9%). When individual NCS parameters were considered, amplitude was the most frequently affected parameter for both sensory and motor NCS; it was impaired in 100% of cases with abnormal NCS findings. Conclusion: Since there was no acceptable concordance between NCS findings and clinical findings, NCS should be considered whenever feasible for early detection of neuropathy in leprosy. The amplitudes of both the sensory nerve action potential (SNAP) and the compound muscle action potential (CMAP) could be important determinants of abnormal NCS if supported by further studies.

Keywords: leprosy, nerve function impairment, neuropathy, nerve conduction study

Procedia PDF Downloads 305
1388 Mnemotopic Perspectives: Communication Design as Stabilizer for the Memory of Places

Authors: C. Galasso

Abstract:

The ancestral relationship between humans and the geographical environment has long been at the center of an interdisciplinary dialogue, one of whose main research nodes is the relationship between memory and places. Given its deep complexity, this symbiotic connection continues to seek a proper definition, one that appears increasingly negotiated among different disciplines. Numerous fields of knowledge are involved, from anthropology to the semiotics of space, from photography to architecture, up to subjects traditionally far from these reasonings. This is the case of Design of Communication, a young discipline, now confident in itself and its objectives, aimed at finding and investigating original forms of visualization and representation, between sedimented knowledge and new technologies. In particular, Design of Communication for the Territory offers an alternative perspective to the debate, encouraging the reactivation and reconstruction of the memory of places. Recognizing mnemotopes as cultural objects of vertical interpretation of the memory-place relationship, design can become a real mediator of the territorial fixation of memories, making them increasingly accessible and perceptible and contributing to building a topography of memory. According to a mnemotopic vision, Communication Design can support the passage from a memory in which the observer participates only as an individual to a collective form of memory. A mnemotopic form of Communication Design can, through geolocation and map-based content systems, make chronology a topography rooted in the territory and practicable; it can help in understanding how the perception of the memory of places changes over time and in considering how to insert them into the contemporary world. Mnemotopes can be materialized in different formats of translation, editing, and narration and then involved in complex systems of communication.
The memory of places, therefore, if stabilized by the tools offered by Communication Design, can make ruins and territorial stratifications visible, illuminating them with new communicative interests that can be shared and participated in.

Keywords: memory of places, design of communication, territory, mnemotope, topography of memory

Procedia PDF Downloads 120
1387 Challenges and Future Prospects of Teaching English in Secondary Schools of Jharkhand Board: An Extensive Survey of the Present Status

Authors: Neha Toppo

Abstract:

Plans and programs for successful secondary education are incomplete without the inclusion of English teaching as an important area. Even sixteen years after the formation of Jharkhand as a separate state, students are still struggling to obtain a quality education in English. This paper intends to give an account of the present condition of English teaching in Jharkhand board secondary-level schools through a discussion of various issues in English language teaching and of the language needs and learning challenges of its students. The study analyzes whether the learning environment, teaching methods and materials, teaching resources, and goals of the language curriculum serve the students of the board adequately or need to be re-examined, and it provides suggestions for improvement. Immediate attention must be drawn to this problem for the benefit of those students who, despite their knowledge and talent, lag behind in numerous fields only due to a lack of proficiency in English. The data and discussion provided are based on a survey comprising semi-structured interviews with teachers, students, and administrators in several schools in both rural and urban areas. Questionnaires, observation, and testing were used as important tools. The survey was conducted in Ranchi district, as it covers a large geographical area that includes a number of villages as well as several towns. The district is home primarily to tribal communities, along with people of different classes, including immigrants from within and outside Jharkhand, across social and economic strata. The observations make it clear that English language teaching under the state board does not match its context, and the whole language teaching system should be re-examined to establish a learner-oriented environment.

Keywords: material, method, secondary level, teaching resources

Procedia PDF Downloads 548
1386 Correlation Results Based on Magnetic Susceptibility Measurements by In-Situ and Ex-Situ Measurements as Indicators of Environmental Changes Due to the Fertilizer Industry

Authors: Nurin Amalina Widityani, Adinda Syifa Azhari, Twin Aji Kusumagiani, Eleonora Agustine

Abstract:

Fertilizer industry activities contribute to environmental changes, which have become a pressing problem in this era of globalization. Criteria for identifying changes in the environment can be drawn from the aspects of physics, chemistry, and biology. One aspect that can be assessed quickly and efficiently to describe environmental change is physical: the magnetic susceptibility (χ). The rock magnetism method can be used as a proxy indicator of environmental change, seen in the value of magnetic susceptibility; it is based on susceptibility studies that measure and classify the degree of pollutant elements causing changes in the environment. This research was conducted in the area around the fertilizer plant, with five coring points on each track, each cored to a depth of 15 cm. Magnetic susceptibility was measured both in-situ and ex-situ. In-situ measurements were carried out directly with an SM30 meter placed on the soil surface at each measurement point. Ex-situ measurements were performed in the laboratory with a Bartington MS2B susceptibility meter on coring samples taken every 5 cm. The in-situ measurements show that the surface susceptibility varies, with the lowest value (-0.81) at the second and fifth points and the highest value (0.345) at the third point. The ex-situ measurements reveal the variation of magnetic susceptibility with depth at each coring point. At a depth of 0-5 cm, the highest χLF = 494.8 (×10⁻⁸ m³/kg) is at the third point, while the lowest χLF = 187.1 (×10⁻⁸ m³/kg) is at the first. At a depth of 6-10 cm, the highest χLF, 832.7 (×10⁻⁸ m³/kg), is at the second point, while the lowest, 211 (×10⁻⁸ m³/kg), is at the first. At a depth of 11-15 cm, the highest χLF = 857.7 (×10⁻⁸ m³/kg) is at the second point, whereas the lowest χLF = 83.3 (×10⁻⁸ m³/kg) is at the fifth. Based on both the in-situ and ex-situ measurements, the highest surface magnetic susceptibility values are at the third point.

Keywords: magnetic susceptibility, fertilizer plant, Bartington MS2B, SM30

Procedia PDF Downloads 328
1385 Integration of Technology through Instructional Systems Design

Authors: C. Salis, D. Zedda, M. F. Wilson

Abstract:

The IDEA project was conceived for teachers who are interested in enhancing their capacity to effectively implement the use of specific technologies in their teaching practice. Participating teachers are coached and supported as they explore technologies applied to the educational context. They access tools such as the technological platform developed by our team. Among the platform functionalities, teachers access an instructional systems design (ISD) tool (learning designer) that was adapted to the needs of our project. The tool is accessible from computers or mobile devices and used in association with other technologies to create new, meaningful learning environments. The objective of instructional systems design is to guarantee the quality and effectiveness of education and to enhance learning. This goal involves both teachers who want to become more efficient in transferring knowledge or skills and students as the final recipients of their teaching. The use of Bloom's taxonomy enables teachers to classify learning objectives into levels of complexity and specificity, thus making it possible to highlight the kind of knowledge teachers would like their students to reach. The fact that the instructional design features can be visualized through the IDEA platform is a guarantee for those looking for specific educational materials to use in their lessons. Despite the benefits offered, a number of teachers are reluctant to use ISD because the preparatory work of thoroughly analyzing the teaching/learning objectives and planning learning material, assessment activities, etc., is long and felt to be time-consuming. This drawback is minimized by using a learning designer, as the tool facilitates the reuse of didactic content, giving a clear view of the processes of analysis, planning, and production of educational or testing materials uploaded to our platform. In this paper, we present the feedback of the teachers who used our tool in their teaching.

Keywords: educational benefits, educational quality, educational technology, ISD tool

Procedia PDF Downloads 174
1384 Fuzzy Climate Control System for Hydroponic Green Forage Production

Authors: Germán Díaz Flórez, Carlos Alberto Olvera Olvera, Domingo José Gómez Meléndez, Francisco Eneldo López Monteagudo

Abstract:

In recent decades, population growth has exerted great pressure on natural resources. Two of the scarcest and most difficult resources to obtain, arable land and water, are closely interrelated in satisfying the demand for food production. In Mexico, the agricultural sector accounts for more than 70% of water consumption. Maximizing the efficiency of current production systems is therefore inescapable, and it is essential to use techniques and tools that enable significant savings of water, labor, and fertilizer. In this study, we present a production module for hydroponic green forage (HGF), a viable alternative for producing livestock feed in semi-arid and arid zones. In addition to the forage production module, the equipment has a climate and irrigation control system powered by photovoltaics. Climate control, irrigation, and power management are based on fuzzy control techniques. Fuzzy control provides an accurate method for designing controllers for nonlinear dynamic physical phenomena such as temperature and humidity, as well as for lighting level, aeration, and irrigation control, using heuristic information. This work first addresses the production of hydroponic green forage, suitable weather conditions, and fertigation, and subsequently presents the design of the production module and the controller. A simulation of the behavior of the production module and the results of actual operation of the equipment are presented, demonstrating the simple design, flexibility, robustness, and low cost that this equipment represents for the primary sector.
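The kind of fuzzy climate control described here can be illustrated with a minimal sketch; the membership ranges, rule base, and output levels below are illustrative assumptions, not values from the authors' controller:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c, humidity_pct):
    """Toy two-input Mamdani-style fuzzy controller returning a fan duty cycle in [0, 1].

    Membership ranges and rules are illustrative assumptions only.
    """
    hot = tri(temp_c, 22.0, 30.0, 38.0)
    dry = tri(humidity_pct, 20.0, 40.0, 60.0)
    humid = tri(humidity_pct, 50.0, 75.0, 100.0)
    # Rule base: IF hot AND humid THEN fast; IF hot AND dry THEN medium.
    w_fast = min(hot, humid)
    w_med = min(hot, dry)
    # Weighted-average (centroid-like) defuzzification over singleton outputs.
    total = w_fast + w_med
    if total == 0.0:
        return 0.0
    return (w_fast * 1.0 + w_med * 0.5) / total
```

In a real controller of this kind, the same pattern extends to more inputs (lighting, aeration) and more rules, with the heuristic knowledge of the grower encoded in the rule base.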

Keywords: fuzzy, climate control system, hydroponic green forage, forage production module

Procedia PDF Downloads 384
1383 The Impact of an Improved Strategic Partnership Programme on Organisational Performance and Growth of Firms in the Internet Protocol Television and Hybrid Fibre-Coaxial Broadband Industry

Authors: Collen T. Masilo, Brane Semolic, Pieter Steyn

Abstract:

The Internet Protocol Television (IPTV) and Hybrid Fibre-Coaxial (HFC) broadband industrial landscape is rapidly changing, and organisations within the industry need to stay competitive by exploring new business models so that they can offer new services and products to customers. The business challenge in this sector is meeting or exceeding high customer expectations across multiple content delivery modes. The increasing challenges in the IPTV and HFC broadband sector encourage service providers to form strategic partnerships with key suppliers, marketing partners, advertisers, and technology partners. The need to form enterprise collaborative networks poses a challenge for any organisation in this sector: selecting the right strategic partners who will ensure that the organisation's services and products are marketed in new markets, that customers are efficiently supported by meeting and exceeding their expectations, and that the organisation is represented in a positive manner, contributing to improved organisational performance. Companies in the IPTV and HFC broadband sector tend to form informal partnerships with suppliers, vendors, system integrators, and technology partners. Generally, partnerships are formed without thorough analysis of the real reason a company is forming collaborations, without proper evaluation of prospective partners using specific selection criteria, and with ineffective performance monitoring of partners to ensure that a firm gains real long-term benefits from its partners and gains competitive advantage. Similar tendencies are illustrated in the research case study, which is based on Skyline Communications, a global leader in end-to-end, multi-vendor network management and operational support systems (OSS) solutions.
The organisation's flagship product is the DataMiner network management platform, used by many operators across multiple industries; it can be described as a smart system that intelligently manages complex technology ecosystems for its customers in the IPTV and HFC broadband industry. The approach of the research is to develop the most efficient business model that can be deployed to improve a strategic partnership programme in order to significantly improve the performance and growth of organisations participating in a collaborative network in the IPTV and HFC broadband sector. This involves proposing and implementing a new strategic partnership model and its main features within the industry, which should bring significant benefits for all involved companies in achieving added value and an optimal growth strategy. The proposed business model has been developed based on research into existing relationships, value chains, and business requirements in this sector and validated at Skyline Communications. The outputs of the business model have been demonstrated and evaluated in the research business case study of the IPTV and HFC broadband service provider Skyline Communications.

Keywords: growth, partnership, selection criteria, value chain

Procedia PDF Downloads 114
1382 InAs/GaSb Superlattice Photodiode Array ns-Response

Authors: Utpal Das, Sona Das

Abstract:

InAs/GaSb type-II superlattice (T2SL) mid-wave infrared (MWIR) focal plane arrays (FPAs) have recently seen rapid development. However, in small-pixel-size, large-format FPAs, the occurrence of high mesa-sidewall surface leakage current is a major constraint necessitating proper surface passivation. A simple pixel isolation technique in InAs/GaSb T2SL detector arrays without conventional mesa etching has been proposed, isolating the pixels by forming a more resistive, higher-band-gap material from the SL in the inter-pixel region. Here, a single-step femtosecond (fs) laser anneal of the inter-pixel T2SL regions has been used to increase the band gap between the pixels by QW intermixing and hence increase inter-pixel isolation. The p-i-n photodiode structure used here consists of a 506 nm, (10 monolayer {ML}) InAs:Si (1x10¹⁸cm⁻³)/(10 ML) GaSb SL as the bottom n-contact layer grown on an n-type GaSb substrate. The undoped absorber layer consists of 1.3 µm of (10 ML) InAs/(10 ML) GaSb SL. The top p-contact layer is a 63 nm, (10 ML) InAs:Be (1x10¹⁸cm⁻³)/(10 ML) GaSb T2SL. In order to improve carrier transport, a 126 nm graded-doped (10 ML) InAs/(10 ML) GaSb SL layer was added between the absorber and each contact layer. A 775 nm, 150 fs laser at a fluence of ~6 mJ/cm² is used to expose the array, where the pixel regions are masked by a Ti(200 nm)-Au(300 nm) cap. In the inter-pixel regions, the p+ layer has been reactive-ion etched (RIE) using CH₄+H₂ chemistry and removed before fs-laser exposure. The fs-laser-anneal isolation improvement in 200-400 μm pixels, due to spatially selective quantum well intermixing yielding a blue shift of ~70 meV in the inter-pixel regions, is confirmed by FTIR measurements. Dark currents are measured between two adjacent pixels with the Ti(200 nm)-Au(300 nm) caps used as contacts. The T2SL quality in the active photodiode regions masked by the Ti-Au cap is hardly affected and retains the original quality of the detector.
Although fs-laser anneal of p+-only-etched p-i-n T2SL diodes shows a reduction in the reverse dark current, no significant improvement is noticeable in fully RIE-etched mesa structures. Hence, for fabrication of a 128x128 array with 8 μm square pixels on a 10 µm pitch, SU8 polymer isolation after RIE pixel delineation has been used. X-n+ row contacts and Y-p+ column contacts have been used to measure the optical response of the individual pixels. The photo-response of these 8 μm and other 200 μm pixels under 2 ns optical pulse excitation from an optical parametric oscillator (OPO) shows peak responsivities of ~0.03 A/W and 0.2 mA/W, respectively, at λ~3.7 μm. The temporal response of this detector array has a fast component of ~10 ns followed by a typical slow decay with ringing, attributed to impedance mismatch of the connecting coaxial cables. In conclusion, response times of a few ns have been measured in 8 µm pixels of a 128x128 array. Although fs-laser anneal has been found useful in increasing the inter-pixel isolation in InAs/GaSb T2SL arrays by QW intermixing, it has not been found suitable for passivation of fully RIE-etched mesa structures with vertical walls on InAs/GaSb T2SL.
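Responsivity figures like those above relate to external quantum efficiency through the standard photodetector identity η = R·h·c/(q·λ); a quick sketch using the abstract's ~0.03 A/W at 3.7 μm purely as an illustrative input:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
Q = 1.602176634e-19  # elementary charge (C)

def quantum_efficiency(responsivity_a_per_w, wavelength_m):
    """External quantum efficiency from responsivity: eta = R * h * c / (q * lambda).

    A generic photodetector relation; the inputs below are taken from the
    abstract only as an illustration, not as a claimed analysis of the device.
    """
    return responsivity_a_per_w * H * C / (Q * wavelength_m)

eta = quantum_efficiency(0.03, 3.7e-6)  # roughly 1% at 3.7 um for 0.03 A/W
```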

Keywords: band-gap blue-shift, fs-laser anneal, InAs/GaSb T2SL, inter-pixel isolation, ns response, photodiode array

Procedia PDF Downloads 139
1381 Important Factors Affecting the Effectiveness of Quality Control Circles

Authors: Sogol Zarafshan

Abstract:

The present study aimed to identify important factors affecting the effectiveness of quality control circles in a hospital and to rank them using a combination of fuzzy VIKOR and Grey Relational Analysis (GRA). The study population consisted of five academic members and five experts in the field of nursing working in a hospital, who were selected using a purposive sampling method. In addition, a sample of 107 nurses was selected through simple random sampling using their employee codes and a random-number table. The required data were collected using a researcher-made questionnaire consisting of 12 factors. The validity of this questionnaire was confirmed through the opinions of the experts and academic members who participated in the study, as well as through confirmatory factor analysis; its reliability was also verified (α = 0.796). The collected data were analyzed using SPSS 22.0 and LISREL 8.8, as well as the VIKOR-GRA and IPA methods. The ranking of factors affecting the effectiveness of quality control circles showed that the highest and lowest ranks were related to 'Managers' and supervisors' support' and 'Group leadership', respectively. The highest hospital performance was for factors such as 'Clear goals and objectives' and 'Group cohesiveness and homogeneity', and the lowest for 'Reward system' and 'Feedback system'. The results showed that although 'Training the members', 'Using the right tools', and 'Reward system' were factors of great importance, the organization's performance on these factors was poor. Therefore, these factors should receive more attention from the studied hospital's managers and should be improved as soon as possible.
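The Grey Relational Analysis step can be illustrated with a generic textbook formulation (not the paper's exact fuzzy VIKOR-GRA hybrid): normalize the decision matrix, measure each alternative's deviation from an ideal reference, and average the grey relational coefficients into a grade.

```python
def grey_relational_grades(matrix, zeta=0.5):
    """Grey Relational Analysis over a decision matrix (rows = alternatives,
    columns = benefit criteria, larger is better).

    A minimal textbook sketch; zeta is the usual distinguishing coefficient.
    """
    n_rows, n_cols = len(matrix), len(matrix[0])
    # Normalize each benefit criterion to [0, 1].
    norm = []
    for j in range(n_cols):
        col = [matrix[i][j] for i in range(n_rows)]
        lo, hi = min(col), max(col)
        norm.append([(v - lo) / (hi - lo) if hi > lo else 1.0 for v in col])
    # Deviation of each alternative from the ideal reference (all ones).
    delta = [[1.0 - norm[j][i] for j in range(n_cols)] for i in range(n_rows)]
    d_min = min(min(row) for row in delta)
    d_max = max(max(row) for row in delta)
    grades = []
    for row in delta:
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / n_cols)
    return grades
```

Alternatives are then ranked by descending grade; in the study's setting the "alternatives" would be the 12 candidate factors scored against the criteria.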

Keywords: quality control circles, fuzzy VIKOR, grey relational analysis, importance-performance analysis

Procedia PDF Downloads 118
1380 An Analysis of the Causes of SMEs Failure in Developing Countries: The Case of South Africa

Authors: Paul Saah, Charles Mbohwa, Nelson Sizwe Madonsela

Abstract:

In the context of developing countries, this study explores a crucial component of economic development by examining the reasons behind the failure of small and medium-sized enterprises (SMEs). SMEs are acknowledged as essential drivers of economic expansion, job creation, and poverty alleviation in emerging countries. This research uses South Africa as a case study to evaluate the reasons why SMEs fail in developing nations. The study employs a quantitative research methodology to investigate the complex causes of SME failure using statistical tools and reliability tests. To ensure the viability of data collection, a sample of 400 small business owners was chosen using a non-probability sampling technique. A closed-ended questionnaire was the primary instrument used to obtain detailed information from the participants. Data were analysed and interpreted using computer software packages such as the Statistical Package for the Social Sciences (SPSS). According to the findings, the main reasons why SMEs fail in developing nations are a lack of strategic business planning, a lack of funding, poor management, a lack of innovation, a lack of business research, and low levels of education and training. The results of this study show that SMEs can be sustainable and successful as long as they understand and incorporate the identified determinants of small business success into their daily operations: the more SMEs in developing countries implement these determinant factors in their business operations, the more likely the businesses are to succeed, and vice versa.

Keywords: failure, developing countries, SMEs, economic development, South Africa

Procedia PDF Downloads 58
1379 Development of Programmed Cell Death Protein 1 Pathway-Associated Prognostic Biomarkers for Bladder Cancer Using Transcriptomic Databases

Authors: Shu-Pin Huang, Pai-Chi Teng, Hao-Han Chang, Chia-Hsin Liu, Yung-Lun Lin, Shu-Chi Wang, Hsin-Chih Yeh, Chih-Pin Chuu, Jiun-Hung Geng, Li-Hsin Chang, Wei-Chung Cheng, Chia-Yang Li

Abstract:

The emergence of immune checkpoint inhibitors (ICIs) targeting proteins such as PD-1 and PD-L1 has changed the treatment paradigm of bladder cancer. However, not all patients benefit from ICIs, with some experiencing early death, so there is a significant need for biomarkers associated with the PD-1 pathway in bladder cancer. Current biomarkers focus on tumor PD-L1 expression, but a more comprehensive understanding of PD-1-related biology is needed. Our study has developed a seven-gene risk score panel, employing a comprehensive bioinformatics strategy, which could serve as a potential prognostic and predictive biomarker for bladder cancer. This panel incorporates the FYN, GRAP2, TRIB3, MAP3K8, AKT3, CD274, and CD80 genes. Additionally, we examined the relationship between this panel and immune cell function, utilizing validated tools such as ESTIMATE, TIDE, and CIBERSORT. Our seven-gene panel has been found to be significantly associated with bladder cancer survival in two independent cohorts. The panel was also significantly correlated with tumor-infiltrating lymphocytes, immune scores, and tumor purity, factors previously reported to have clinical implications for ICIs. The findings suggest the potential of a PD-1-pathway-based transcriptomic panel as a prognostic and predictive biomarker in bladder cancer, which could help optimize treatment strategies and improve patient outcomes.
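Gene-panel risk scores of this kind are typically a weighted sum of expression values, with patients stratified by a cutoff. The sketch below uses the abstract's gene list, but the coefficients, expression values, and cutoff are entirely hypothetical placeholders, not the study's fitted model:

```python
def risk_score(expression, weights):
    """Linear prognostic risk score: weighted sum of per-gene expression values.

    The gene list comes from the abstract; all numeric values below are
    hypothetical placeholders for illustration only.
    """
    return sum(weights[g] * expression[g] for g in weights)

genes = ["FYN", "GRAP2", "TRIB3", "MAP3K8", "AKT3", "CD274", "CD80"]
weights = {g: w for g, w in zip(genes, [0.4, -0.2, 0.3, 0.1, -0.1, 0.25, -0.15])}
patient = {g: 1.0 for g in genes}  # toy normalized expression values
score = risk_score(patient, weights)
label = "high risk" if score > 0.5 else "low risk"  # hypothetical cutoff
```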

Keywords: bladder cancer, programmed cell death protein 1, prognostic biomarker, immune checkpoint inhibitors, predictive biomarker

Procedia PDF Downloads 61
1378 Diagnostic and Prognostic Use of Kinetics of Microrna and Cardiac Biomarker in Acute Myocardial Infarction

Authors: V. Kuzhandai Velu, R. Ramesh

Abstract:

Background and objectives: Acute myocardial infarction (AMI) is the most common cause of mortality and morbidity. Over the last decade, microRNAs (miRs) have emerged as potential markers for detecting AMI. The current study evaluates the kinetics and importance of miRs in the differential diagnosis of ST-segment elevation MI (STEMI) and non-STEMI (NSTEMI), their correlation with conventional biomarkers, and their ability to predict the immediate outcome of AMI in terms of arrhythmias and left ventricular (LV) dysfunction. Materials and Method: A total of 100 AMI patients were recruited for the study. Routine cardiac biomarkers and miRNA levels were measured at diagnosis and serially at admission, 6, 12, 24, and 72 hours. The baseline biochemical parameters were analyzed. The expression of miRs was compared between STEMI and NSTEMI at different time intervals. miR-1, miR-133, miR-208, and miR-499 levels were measured using RT-PCR, and their diagnostic utility was analyzed with statistical tools such as ROC curves, odds ratios, and likelihood ratios. Results: miR-1, miR-133, and miR-499 showed peak concentrations at 6 hours, whereas miR-208 showed highly significant differences at all time intervals. miR-133 demonstrated the maximum area under the curve at different time intervals in the differential diagnosis of STEMI and NSTEMI, followed by miR-499 and miR-208. Evaluation of miRs for predicting arrhythmia and LV dysfunction using the admission sample demonstrated that miR-1 (OR = 8.64; LR = 1.76) and miR-208 (OR = 26.25; LR = 5.96) showed the maximum odds ratio and likelihood ratio, respectively. Conclusion: Circulating miRNAs showed highly significant differences between STEMI and NSTEMI in AMI patients, peaking much earlier than the conventional biomarkers. miR-133, miR-208, and miR-499 can be used in the differential diagnosis of STEMI and NSTEMI, whereas miR-1 and miR-208 could be used in the prediction of arrhythmia and LV dysfunction, respectively.
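Odds ratios and positive likelihood ratios of the kind reported here are derived from a standard 2x2 diagnostic table; a generic sketch with toy counts (not the study's data):

```python
def odds_ratio(tp, fp, fn, tn):
    """Diagnostic odds ratio from a 2x2 table (test result vs. outcome)."""
    return (tp * tn) / (fp * fn)

def positive_likelihood_ratio(tp, fp, fn, tn):
    """LR+ = sensitivity / (1 - specificity)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity / (1.0 - specificity)
```

For example, with 80 true positives, 10 false positives, 20 false negatives, and 90 true negatives, the odds ratio is 36 and LR+ is 8.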

Keywords: myocardial infarction, cardiac biomarkers, microRNA, arrhythmia, left ventricular dysfunction

Procedia PDF Downloads 112
1377 Simulating Human Behavior in (Un)Built Environments: Using an Actor Profiling Method

Authors: Hadas Sopher, Davide Schaumann, Yehuda E. Kalay

Abstract:

This paper addresses the shortcomings of architectural computation tools in representing human behavior in built environments prior to construction and occupancy. Evaluating whether a design fits the needs of its future users is currently done solely post-construction, or is based on the knowledge and intuition of the designer. This issue is of high importance when designing complex buildings such as hospitals, where the quality of treatment as well as patient and staff satisfaction are of major concern. Existing computational pre-occupancy human behavior evaluation methods are geared mainly toward testing ergonomic issues, such as wheelchair accessibility, emergency egress, etc. As such, they rely on Agent-Based Modeling (ABM) techniques, which emphasize the individual user. Yet we know that most human activities are social and involve a number of actors working together, which ABM methods cannot handle. Therefore, we present an event-based model that manages the interaction between multiple Actors, Spaces, and Activities to describe dynamically how people use spaces. This approach requires expanding the computational representation of Actors beyond their physical description to include psychological, social, cultural, and other parameters. The model presented in this paper includes cognitive abilities and rules that describe the response of actors to their physical and social surroundings, based on the actors' internal status. The model has been applied in a simulation of hospital wards and showed adaptability to a wide variety of situated behaviors and interactions.
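The multi-actor, event-based idea can be sketched minimally: an event names the actors and space it requires and fires only when all of those actors are available, a coordination pattern that per-agent ABM rules do not express directly. This is a conceptual illustration under simplified assumptions, not the authors' hospital model:

```python
class Actor:
    """Minimal actor with an internal status that gates event participation."""
    def __init__(self, name, busy=False):
        self.name = name
        self.busy = busy

def run_events(events):
    """Process (time, activity, space, actors) events in time order.

    An event fires only when every actor it names is free; a full model
    would also update actor status, cognition, and space occupancy.
    """
    log = []
    for time, activity, space, actors in sorted(events, key=lambda e: e[0]):
        if all(not a.busy for a in actors):
            log.append((time, activity, space, [a.name for a in actors]))
    return log
```

For example, a ward round needing both a nurse and a doctor is skipped while the doctor is busy, whereas a single-actor triage event proceeds.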

Keywords: agent based modeling, architectural design evaluation, event modeling, human behavior simulation, spatial cognition

Procedia PDF Downloads 241
1376 Qualitative Case Studies in Reading Specialist Education

Authors: Carol Leroy

Abstract:

This presentation focuses on the analysis of qualitative case studies in the graduate education of reading specialists. It describes the development and application of an integrated conceptual framework for reading specialist education, drawing on Robert Stake's work on case study research, Kenneth Zeichner's work on professional learning, and various tools for reading assessment (e.g., the Qualitative Reading Inventory). Social constructivist theory is used to provide intersecting links between the various influences on the processes used to assess and teach reading within the case study framework. Illustrative examples show the application of the framework in reading specialist education in a teaching clinic at a large urban university. Central to the education of reading specialists in this teaching clinic is the collection, analysis, and interpretation of data for the design and implementation of reading and writing programs for struggling readers and writers. The case study process involves the integrated interpretation of data, which is central to qualitative case study inquiry. An emerging theme in this approach to graduate education is the ambiguity and uncertainty that governs work with the adults and children who attend the clinic for assistance. Tensions and contradictions are explored insofar as they reveal overlapping and intersecting frameworks for case study analysis in the area of literacy education. An additional theme is the interplay of multiple layers of data, with a resulting depth that goes beyond the practical needs of the client toward the deeper pedagogical growth of the reading specialist. The presentation makes a case for the value of qualitative case studies in reading specialist education. Further, the use of social constructivism as a unifying paradigm lends robustness to the conceptual framework as a tool for understanding the pedagogy involved.

Keywords: assessment, case study, professional education, reading

Procedia PDF Downloads 440
1375 Analysis of Knowledge Circulation in Digital Learning Environments: A Case Study of the MOOC 'Communication des Organisations'

Authors: Hasna Mekkaoui Alaoui, Mariem Mekkaoui Alaoui

Abstract:

In a context marked by a growing and pressing demand for online training within Moroccan universities, massive open online courses (MOOCs) are undergoing constant evolution, amplified by the widespread use of digital technology and accentuated by the coronavirus pandemic. However, despite their growing popularity and expansion, these courses still lack tools enabling teachers and researchers to carry out a fine-grained analysis of the learning processes taking place within them. Moreover, the circulation and sharing of knowledge within these environments is becoming increasingly important. The crucial aspect of traceability emerges here, as MOOCs record and generate traces ranging from the most minute to the most visible. This leads us to consider traceability as a valuable approach in the field of educational research, where the trace is envisaged as a research tool in its own right. In this exploratory research project, we look at aspects of community knowledge sharing based on traces observed in the 'Communication des organisations' MOOC. Focusing in particular on the mediating trace and its impact in identifying knowledge circulation processes in this learning space, we have mobilized the traces of video capsules as an index of knowledge circulation in the MOOC device. Our study uses a methodological approach based on thematic analysis; although the results show that learners reproduce knowledge from the different video vignettes in almost identical ways, they do not limit themselves to the knowledge provided to them. This research offers concrete perspectives for improving the dynamics of online devices, with a potentially positive impact on the quality of online university teaching.

Keywords: circulation, index, digital environments, mediation, trace

Procedia PDF Downloads 49
1374 Chemical Synthesis and Microwave Sintering of SnO2-Based Nanoparticles for Varistor Films

Authors: Glauco M. M. M. Lustosa, João Paulo C. Costa, Leinig Antônio Perazolli, Maria Aparecida Zaghete

Abstract:

SnO2 has electrical conductivity due to excess electrons and structural defects, its electrical behavior being highly dependent on sintering temperature and chemical composition. The addition of metal modifiers into the crystalline structure can improve and control the behavior of some semiconductor oxides, which can therefore be developed for different applications such as varistors (ceramics with non-ohmic current-voltage behavior, i.e., conductive during normal operation and resistive during overvoltage). The polymeric precursor method, based on the complexation reaction between a metal ion and a polycarboxylic acid subsequently polymerized with ethylene glycol, was used to obtain ceramic nanopowders. The immobilization of the metal reduces its segregation during the decomposition of the polyester, resulting in a crystalline oxide with high chemical homogeneity. The preparation of films from ceramic nanoparticles using the electrophoretic deposition (EPD) method brings prospects for a new generation of smaller devices with easy technology integration. EPD allows control of time and current, and therefore of the thickness, surface roughness, and density of the film, quickly and at low production cost. The sintering process is key to controlling the size and grain boundary density of the film. In this step, there is diffusion of metals that promotes densification and the control or modification of intrinsic defects, which form and modify the potential barrier at the grain boundary. The use of a microwave oven for sintering is advantageous due to the fast and homogeneous heating rate, promoting diffusion and densification without irregular grain growth. In this research, a comparative study of sintering temperatures was carried out using zinc as a modifier agent to verify its influence on the sintering step, aiming to promote densification and grain growth, which influence the potential barrier formation and hence the electrical behavior.
SnO2 nanoparticles were obtained with 1 mol% ZnO + 0.05 mol% Nb2O5 (SZN) and deposited as a film through EPD (voltage 2 kV, time 10 min) on a Si/Pt substrate. Sintering was performed in a microwave oven at 800, 900, and 1000 °C. For complete coverage of the substrate by nanoparticles with low surface roughness and uniform thickness, 0.02 g of solid iodine was added to the alcoholic SnO2 suspension to increase the particle surface charge. A magnet was also used in the EPD system, which improved the deposition rate and formed a compact film. Using a high-resolution scanning electron microscope (SEM-FEG), nanoparticles with an average size between 10 and 20 nm were observed; after sintering, the average size was 150 to 200 nm and the film thickness 5 µm. It was also verified that 1000 °C was the most efficient sintering temperature, and the best sintering time was determined to be 40 minutes. After sintering, the films were covered with a layer of Cr³⁺ ions by EPD and then thermally treated again. The electrical characterizations (nonlinear coefficient of 11.4, rupture voltage of ~60 V, and leakage current of 4.8x10⁻⁶ A) allow the new methodology to be considered suitable for preparing SnO2-based varistors for the development of low-voltage electrical protection devices.
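A nonlinear coefficient such as the reported 11.4 follows the standard varistor relation I ∝ V^α; a sketch of how α is extracted from two points on the I-V curve (the sample numbers below are chosen to reproduce α = 11.4 for illustration, not measured data):

```python
import math

def nonlinear_coefficient(v1, i1, v2, i2):
    """Varistor nonlinearity alpha from two (V, I) points, using I ~ V**alpha.

    alpha = log10(I2/I1) / log10(V2/V1); often taken over one decade of
    current, where it reduces to 1 / log10(V2/V1).
    """
    return math.log10(i2 / i1) / math.log10(v2 / v1)
```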

Keywords: chemical synthesis, electrophoretic deposition, microwave sintering, tin dioxide

Procedia PDF Downloads 253
1373 Coupled Hydro-Geomechanical Modeling of Oil Reservoir Considering Non-Newtonian Fluid through a Fracture

Authors: Juan Huang, Hugo Ninanya

Abstract:

Oil has been used for many years as a source of energy and as a feedstock for materials such as asphalt and rubber, which is why new technologies have been implemented over time. However, research must continue to grow due to the new challenges engineers face every day, such as unconventional reservoirs. Various numerical methodologies have been applied in petroleum engineering as tools to optimize the production of reservoirs before drilling a wellbore, although not all of them have the same efficiency when studying fracture propagation. Analytical methods, such as those based on linear elastic fracture mechanics, fail to give reasonable predictions when simulating fracture propagation in ductile materials, whereas numerical methods based on the cohesive zone method (CZM) allow the elastoplastic behavior of a reservoir to be represented through a constitutive model; therefore, predictions in terms of displacements and pressure are more reliable. In this work, a coupled hydro-geomechanical model of horizontal wells in fractured rock was developed using ABAQUS; both the extended finite element method and cohesive elements were used to represent predefined fractures in a 2-D model. A power law representing the rheological behavior of the fluid (shear-thinning, power index < 1) through fractures, together with a leak-off rate permeating into the matrix, was considered. Results are shown in terms of the aperture and length of the fracture, the pressure within the fracture, and fluid loss. A higher infiltration rate into the matrix was observed as the power index decreases. A sensitivity analysis is finally performed to identify the most influential factor on fluid loss.
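The shear-thinning power law mentioned here (the Ostwald-de Waele model) gives an apparent viscosity that falls with shear rate when the power index n < 1; a minimal sketch:

```python
def apparent_viscosity(k, n, shear_rate):
    """Power-law (Ostwald-de Waele) fluid: mu_app = k * gamma**(n - 1).

    k is the consistency index, n the power (flow behavior) index, and
    gamma the shear rate; for shear-thinning fluids n < 1, so apparent
    viscosity decreases as the shear rate rises.
    """
    return k * shear_rate ** (n - 1.0)
```

This is why lower n values let the fluid thin out at the high shear rates near the fracture walls, consistent with the higher matrix infiltration the abstract reports for smaller power indices.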

Keywords: fracture, hydro-geomechanical model, non-Newtonian fluid, numerical analysis, sensitivity analysis

Procedia PDF Downloads 189
1372 Creating Systems Change: Implementing Cross-Sector Initiatives within the Justice System to Support Ontarians with Mental Health and Addictions Needs

Authors: Tania Breton, Dorina Simeonov, Shauna MacEachern

Abstract:

Ontario’s 10-Year Mental Health and Addictions Strategy has included the establishment of 18 Service Collaboratives across the province: cross-sector tables in specific regions coming together to explore mental health and addiction system needs and adopting an intervention to address those needs. The process is community-led and supported by implementation teams from the Centre for Addiction and Mental Health (CAMH), using the framework of implementation science (IS) to enable evidence-based and sustained change. These justice initiatives focus on the intersection of the justice system and the mental health and addiction systems. In this presentation, we will share the learnings, achievements, and challenges of implementing innovative practices for the mental health and addictions needs of Ontarians within the justice system. Specifically, we will focus on key points across the justice system, from early intervention and trauma-informed, culturally appropriate services to post-sentence support and community reintegration. Our approach to this work involves external implementation support from the CAMH team, including coaching, knowledge exchange, evaluation, Aboriginal engagement, and health equity expertise. Agencies supported the implementation of tools and processes that changed practice at the local level. These practices are being scaled up across Ontario; community agencies have come together in an unprecedented collaboration, and there is a shared vision of the issues overlapping the mental health, addictions, and justice systems. Working with ministry partners has allowed space for innovation and created an environment where better approaches can be nurtured and spread.

Keywords: implementation, innovation, early identification, mental health and addictions, prevention, systems

Procedia PDF Downloads 350
1371 Influence of Internal Topologies on Components Produced by Selective Laser Melting: Numerical Analysis

Authors: C. Malça, P. Gonçalves, N. Alves, A. Mateus

Abstract:

Regardless of the manufacturing process used, subtractive or additive, and of the material, purpose, and application, produced components are conventionally solid masses with more or less complex shapes depending on the production technology selected. Aspects such as reducing the weight of components, associated with the low volume of material required and almost non-existent material waste, speed and flexibility of production and, primarily, high mechanical strength combined with high structural performance, are competitive advantages in any industrial sector, from automotive, molds, aviation, aerospace, construction, pharmaceuticals, and medicine to, more recently, human tissue engineering. Such features, properties, and functionalities are attained in metal components produced from metal powders using the additive Rapid Prototyping technique commonly known as Selective Laser Melting (SLM), with optimized internal topologies and varying densities. In order to produce components with high strength and high structural and functional performance, regardless of the type of application, three different internal topologies were developed and analyzed using numerical computational tools. The developed topologies were numerically submitted to mechanical compression and four-point bending tests. Finite element analysis results demonstrate how different internal topologies can contribute to improved mechanical properties, even with a high degree of porosity relative to fully dense components. The results are very promising, not only from the point of view of mechanical resistance, but especially through the achievement of considerable variation in density without loss of high structural and functional performance.

Keywords: additive manufacturing, internal topologies, porosity, rapid prototyping, selective laser melting

Procedia PDF Downloads 320
1370 A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel

Authors: F. M. Pisano, M. Ciminello

Abstract:

Aerospace, mechanical, and civil engineering infrastructures can benefit from damage detection and identification strategies in terms of maintenance cost reduction and operational life improvement, as well as for safety purposes. The challenge is to detect so-called "barely visible impact damage" (BVID), due to low/medium-energy impacts, which can progressively compromise structural integrity. The occurrence of any local change in material properties that can degrade structural performance is monitored using Structural Health Monitoring (SHM) systems, which compare the states of the structure before and after damage occurs. SHM searches for any "anomalous" response in measurements collected by sensor networks and analyzed with appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables, and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts to be elaborated for information about the current health condition of the structure under investigation. Visual Analytics can support the processing of monitored measurements by offering data navigation and exploration tools that leverage the native human capability of understanding images faster than texts and tables. Herein, the enrichment of an SHM system by integration of a Visual Analytics component is investigated. Analytical dashboards have been created by combining worksheets, providing structural analysts with a useful Visual Analytics tool for exploring the health conditions of a structure examined by a Principal Component Analysis based algorithm.
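The PCA-based detection step mentioned at the end of the abstract can be illustrated with a minimal novelty-detection sketch (an assumption about the general approach, not the authors' actual tool): baseline readings from the healthy structure define a principal subspace, and new readings with a large reconstruction residual (the Q statistic) are flagged as potential damage.

```python
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(size=(200, 8))       # healthy-state sensor snapshots
mean = baseline.mean(axis=0)
X = baseline - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt[:3]                        # retain 3 principal components

def q_statistic(sample: np.ndarray) -> float:
    """Squared reconstruction residual of a sample in the PCA subspace."""
    centered = sample - mean
    projected = components.T @ (components @ centered)
    return float(np.sum((centered - projected) ** 2))

# Empirical alarm threshold from the healthy baseline.
threshold = np.percentile([q_statistic(s) for s in baseline], 99)

# A simulated damage signature: a systematic offset on all sensors.
anomalous = baseline[0] + 5.0
print(q_statistic(anomalous) > threshold)
```

In a dashboard such as the one described, the per-sensor residuals behind this scalar statistic are what would be mapped back onto the panel geometry to suggest outlier coordinates.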

Keywords: interactive dashboards, optical fibers, structural health monitoring, visual analytics

Procedia PDF Downloads 112
1369 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface

Authors: Renata Gerhardt, Detlev Belder

Abstract:

Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis given that it is non-destructive and molecules can be identified via the fingerprint region of their spectra. In this work, possibilities for integrating Raman spectroscopy as a detection method for chip-based chromatography, making use of a droplet interface, are investigated. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of analytes at low concentrations in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules and provides no structural information. Another often-applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio; additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as an end-of-the-line detection method because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, the Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization, and detection, on a single device.
We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics which operates with segmented rather than continuous flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and an oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs for coupling chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phases are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, exploiting the surface-enhancement effect and improving detection sensitivity. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface-enhanced Raman spectroscopy as the means of detection.
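The flow-rate balancing discussed above can be framed with a simple back-of-the-envelope relation: the droplet generation frequency follows from the dispersed-phase (eluent) flow rate and the droplet volume. The numbers below are illustrative assumptions, not values from the study.

```python
# Droplets generated per second = Q_aqueous / V_droplet,
# after converting units consistently.
def droplet_frequency_hz(aqueous_flow_ul_per_min: float,
                         droplet_volume_nl: float) -> float:
    q_nl_per_s = aqueous_flow_ul_per_min * 1000.0 / 60.0  # uL/min -> nL/s
    return q_nl_per_s / droplet_volume_nl

# e.g. 1 uL/min of eluent segmented into 2 nL droplets:
print(f"{droplet_frequency_hz(1.0, 2.0):.1f} droplets/s")
```

Such an estimate shows why the eluent and oil flow rates must be tuned together: the eluent rate is fixed by the chromatographic separation, so the droplet size and spacing are controlled mainly through the oil phase.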

Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS

Procedia PDF Downloads 227