Search results for: external innovation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3861

381 Traumatic Brain Injury Induced Lipid Profiling of Lipids in Mice Serum Using UHPLC-Q-TOF-MS

Authors: Seema Dhariwal, Kiran Maan, Ruchi Baghel, Apoorva Sharma, Poonam Rana

Abstract:

Introduction: Traumatic brain injury (TBI) is defined as a temporary or permanent alteration in brain function and pathology caused by an external mechanical force. It represents the leading cause of mortality and morbidity among children and young adults. Various rodent models of TBI have been developed in the laboratory to mimic the injury scenario. Blast overpressure injury, typically following accidents or explosive devices, is common among civilians and military personnel, while the lateral controlled cortical impact (CCI) model mimics blunt, penetrating injury. Method: In the present study, we developed two different mild TBI models using blast and CCI injury. In the blast model, helium gas was used to create an overpressure of 130 kPa (±5) via a shock tube, and CCI injury was induced with an impact depth of 1.5 mm, creating diffuse and focal injury, respectively. C57BL/6J male mice (10-12 weeks) were divided into three groups: (1) control, (2) blast-treated, and (3) CCI-treated, and were exposed to the respective injury models. Serum was collected on Day 1 and Day 7, followed by biphasic extraction using MTBE/methanol/water. Prepared samples were separated on a Charged Surface Hybrid (CSH) C18 column and acquired on a UHPLC-Q-TOF-MS using an ESI probe with in-house optimized parameters and methods. The MS peak list was generated using MarkerView™. Data were normalized, Pareto-scaled, and log-transformed, followed by multivariate and univariate analysis in MetaboAnalyst. Result and discussion: Untargeted profiling of lipids generated extensive data features, which were annotated through LIPID MAPS® based on their m/z and further confirmed from their fragmentation patterns with LipidBlast. In total, 269 features were annotated in positive and 182 in negative ionization mode. PCA and PLS-DA score plots showed clear segregation of the injury groups from controls. Among the various lipids altered in mild blast and CCI, five lipids (glycerophospholipids {PC 30:2, PE O-33:3, PG 28:3;O3 and PS 36:1} and the fatty acyl {FA 21:3;O2}) were significantly altered in both injury groups at Day 1 and Day 7 and also had VIP scores >1. Pathway analysis with BioPAN also showed hampered synthesis of glycerolipids and glycerophospholipids, which coincides with earlier reports. This could be a direct result of alteration in the acetylcholine signaling pathway in response to TBI. Understanding the role of specific classes of lipid metabolism, regulation, and transport could benefit TBI research, since it could provide new targets and help determine the best therapeutic intervention. This study demonstrates potential lipid biomarkers that can be used for injury severity diagnosis and identification irrespective of injury type (diffuse or focal).
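For readers less familiar with the preprocessing chain mentioned above (normalization, log transformation, Pareto scaling, then PCA/PLS-DA score plots), a minimal sketch on synthetic data is shown below; the matrix dimensions, sample counts, and intensity values are assumptions, not the study's dataset.

```python
# Hedged sketch of log transformation, Pareto scaling and PCA on a synthetic
# lipid-feature matrix; not the authors' data or their exact processing order.
import numpy as np
from sklearn.decomposition import PCA

def pareto_scale(X):
    """Mean-centre each feature and divide by the square root of its standard deviation."""
    centred = X - X.mean(axis=0)
    return centred / np.sqrt(X.std(axis=0, ddof=1))

# rows = serum samples (control, blast, CCI at Day 1/Day 7), columns = annotated lipid features
X = np.random.lognormal(mean=2.0, sigma=0.5, size=(18, 269))  # placeholder intensities

X = np.log10(X)       # log transformation
X = pareto_scale(X)   # Pareto scaling

scores = PCA(n_components=2).fit_transform(X)  # coordinates for a PCA score plot
print(scores.shape)   # (18, 2)
```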

Keywords: LipidBlast, lipidomic biomarker, LIPID MAPS®, TBI

Procedia PDF Downloads 113
380 Mentoring of Health Professionals to Ensure Better Child-Birth and Newborn Care in Bihar, India: An Intervention Study

Authors: Aboli Gore, Aritra Das, Sunil Sonthalia, Tanmay Mahapatra, Sridhar Srikantiah, Hemant Shah

Abstract:

AMANAT is an initiative, taken in collaboration with the Government of Bihar, aimed at improving the quality of maternal and neonatal care services at Bihar’s public health facilities – those offering either Basic Emergency Obstetric and Neonatal care (BEmONC) or Comprehensive Emergency Obstetric and Neonatal care (CEmONC) services. The effectiveness of this program is evaluated by conducting cross-sectional assessments at the concerned facilities prior to (baseline) and following completion of (endline) the intervention. The Direct Observation of Delivery (DOD) methodology is employed for carrying out the baseline and endline assessments – through which key obstetric and neonatal care practices among the health care providers (especially the nurses) are assessed quantitatively by specially trained nursing professionals. Assessment of vitals prior to delivery improved during all three phases of BEmONC and all four phases of CEmONC training, with statistically significant improvement noted in: i) pulse measurement in BEmONC phases 2 (9% to 68%), 3 (4% to 57%) & 4 (14% to 59%) and CEmONC phases 2 (7% to 72%) and 3 (0% to 64%); ii) blood pressure measurement in BEmONC phases 2 (27% to 84%), 3 (21% to 76%) & 4 (36% to 71%) and CEmONC phases 2 (23% to 76%) and 3 (2% to 70%); iii) fetal heart rate measurement in BEmONC phases 2 (10% to 72%), 3 (11% to 77%) & 4 (13% to 64%) and CEmONC phases 1 (24% to 38%), 2 (14% to 82%) and 3 (1% to 73%); and iv) abdominal examination in BEmONC phases 2 (14% to 59%), 3 (3% to 59%) & 4 (6% to 56%) and CEmONC phases 1 (0% to 24%), 2 (7% to 62%) & 3 (0% to 62%). Regarding infection control, wearing of apron, mask and cap by the delivery conductors improved significantly in all BEmONC phases. Similarly, the practice of handwashing improved in all BEmONC and CEmONC phases. Even on disaggregation, handwashing showed significant improvement in all phases except CEmONC phase 4. Not only did positive practices related to handwashing improve, but negative practices such as turning off the tap with bare hands also declined significantly in the aforementioned phases. A significant decline was also noted in negative maternal care practices such as application of fundal pressure for hastening the delivery process and administration of oxytocin prior to delivery. One of the notable achievements of AMANAT is an improvement in active management of the third stage of labor (AMTSL). Overall AMTSL (including administration of oxytocin or other uterotonics in the proper dose, route and time, along with controlled cord traction and uterine massage) improved in all phases of BEmONC and CEmONC mentoring. Another key area of improvement, across phases, was proper cutting/clamping of the umbilical cord. AMANAT mentoring also led to improvement in important immediate newborn care practices such as initiation of skin-to-skin care and timely initiation of breastfeeding. The next phase of the mentoring program seeks to institutionalize mentoring across the state, which could potentially perpetuate improvement with minimal external intervention.
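As an illustration of how a single baseline-to-endline change of this kind could be tested for statistical significance, the sketch below applies a two-proportion z-test to one reported improvement (pulse measurement, 9% to 68%). The number of deliveries observed in each round is an assumption, and the study's actual statistical procedure may differ.

```python
# Illustrative two-proportion z-test on an assumed number of observed deliveries;
# not the authors' analysis or sample sizes.
from statsmodels.stats.proportion import proportions_ztest

n_baseline, n_endline = 200, 200                            # assumed deliveries observed per round
counts = [int(0.09 * n_baseline), int(0.68 * n_endline)]    # deliveries where pulse was measured
nobs = [n_baseline, n_endline]

z_stat, p_value = proportions_ztest(counts, nobs)
print(f"z = {z_stat:.2f}, p = {p_value:.3g}")
```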

Keywords: capacity building, nurse-mentoring, quality of care, pregnancy, newborn care

Procedia PDF Downloads 161
379 Exploring Communities of Practice through Public Health Walks for Nurse Education

Authors: Jacqueline P. Davies

Abstract:

Introduction: Student nurses must develop skills in observation, communication and reflection as well as public health knowledge from their first year of training. This paper will explain a method developed for students to collect their own findings about public health in urban areas. These areas are rich in the history of old public health that informs the content of many traditional public health walks, but they are also locations where new public health concerns about chronic disease are concentrated. The learning method explained in this paper enables students to collect their own data and write original work as first year students. Examples of their findings will be given. Methodology: In small groups, health care students are instructed to walk in neighbourhoods near to the hospitals they will soon attend as apprentice nurses. On their walks, they wander slowly, engage in conversations, and enter places open to the public. As they drift, they observe with all five senses in the real three-dimensional world to collect data for their reflective accounts of old and new public health. They are encouraged to stop for refreshments and taste, as well as look, hear, smell, and touch while on their walk. They reflect as a group and later develop an individual reflective account in which they write up their deep reflections about what they observed on their walk. In preparation for their walk, they are encouraged to look at studies of quality of life and other neighbourhood statistics as well as undertaking a risk assessment for their walk. Findings: Reflecting on their walks, students apply theoretical concepts around social determinants of health and health inequalities to develop their understanding of communities in the neighbourhoods visited. They write about the treasured historical architecture made of stone, bronze and marble which has outlived those who built it, but also about how the streets are used now. The students develop their observations into thematic analyses such as: what we drink, as illustrated by the empty coke can tossed into a now disused drinking fountain; the shift in home-life balance, illustrated by streets where families once lived over the shop which are now walked by commuters weaving around each other as they talk on their mobile phones; and security on the street, with CCTV cameras placed at regular intervals, signs warning trespassers and barbed wire, but little evidence of local people watching the street. Conclusion: In evaluations of their first year, students have reported the health walk as one of their best experiences. The innovative approach was commended by the UK governing body of nurse education and it received a quality award from the nurse education funding body. This approach to education allows students to develop skills in the real world and write original work.

Keywords: education, innovation, nursing, urban

Procedia PDF Downloads 287
378 Using Teachers' Perceptions of Science Outreach Activities to Design an 'Optimum' Model of Science Outreach

Authors: Victoria Brennan, Andrea Mallaburn, Linda Seton

Abstract:

Science outreach programmes connect school pupils with external agencies to provide activities and experiences that enhance their exposure to science. It can be argued that these programmes not only aim to support teachers with curriculum engagement and promote scientific literacy but also provide pivotal opportunities to spark scientific interest in students. In turn, a further objective of these programmes is to increase awareness of career opportunities within this field. Although outreach work is often described as a fun and satisfying venture, many researchers express caution about how successful these processes are at increasing post-16 engagement in science. When researching the impact of outreach programmes, it is often student feedback on the activities, or enrolment numbers in particular post-16 science courses, that is generated and analysed. Although this is informative, the longevity of a programme’s impact could be better informed by teachers’ perceptions, evidence of which is far more limited in the literature. In addition, there are strong suggestions that teachers can have an indirect impact on a student’s own self-concept. These themes shape the focus and importance of this ongoing research project, which presents the rationale that teachers are under-used resources in the design of science outreach programmes. Therefore, the end result of the research will be an ‘optimum’ model of outreach, which should be of interest to wider stakeholders such as universities and private or government organisations that design science outreach programmes in the hope of recruiting future scientists. During phase one, questionnaires (n=52) and interviews (n=8) have generated both quantitative and qualitative data. These have been analysed using the Wilcoxon non-parametric test to compare teachers’ perceptions of science outreach interventions, and thematic analysis for open-ended questions. Both of these research activities provide an opportunity for a cross-section of teacher opinions of science outreach to be obtained across all educational levels. Therefore, an early draft of the ‘optimum’ model of science outreach delivery was generated using both the wealth of literature and primary data. This final (ongoing) phase aims to refine this model using teacher focus groups to provide constructive feedback about the proposed model. The analysis uses principles of modified Grounded Theory to ensure that focus group data are used to further strengthen the model. Therefore, this research uses a pragmatist approach, as it aims to focus on the strengths of the different paradigms encountered to ensure the data collected will provide the most suitable information to create an improved model of sustainable outreach. The results discussed will focus on this ‘optimum’ model and teachers’ perceptions of benefits and drawbacks when it comes to engaging with science outreach work. Although the model is still a ‘work in progress’, it provides insight both into how teachers feel outreach delivery can be a sustainable intervention tool within the classroom and into what providers of such programmes should consider when designing science outreach activities.
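A minimal sketch of the kind of non-parametric comparison mentioned above is given below, using the Wilcoxon signed-rank test on paired teacher ratings of two outreach formats. The ratings and the pairing are invented for illustration; the study's questionnaire items and test configuration may differ.

```python
# Illustrative Wilcoxon signed-rank test on invented paired teacher ratings;
# not the study's data or exact analysis.
from scipy.stats import wilcoxon

ratings_format_a = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]   # e.g., perceived impact of in-school workshops
ratings_format_b = [3, 4, 3, 2, 2, 4, 3, 3, 3, 4]   # e.g., perceived impact of campus visits

stat, p = wilcoxon(ratings_format_a, ratings_format_b)
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
```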

Keywords: educational partnerships, science education, science outreach, teachers

Procedia PDF Downloads 128
377 Examination of Porcine Gastric Biomechanics in the Antrum Region

Authors: Sif J. Friis, Mette Poulsen, Torben Strom Hansen, Peter Herskind, Jens V. Nygaard

Abstract:

Gastric biomechanics governs a large range of scientific and engineering fields, from gastric health issues to interaction mechanisms between external devices and the tissue. Determination of the mechanical properties of the stomach is, thus, crucial both for understanding gastric pathologies and for the development of medical concepts and device designs. Although the field of gastric biomechanics is emerging, advances within medical devices interacting with gastric tissue could greatly benefit from an increased understanding of tissue anisotropy and heterogeneity. Thus, in this study, uniaxial tensile tests of gastric tissue were executed in order to study biomechanical properties within the same individual as well as across individuals. With biomechanical tests in the strain domain, tissue from the antrum region of six porcine stomachs was tested using eight samples from each stomach (n = 48). The samples were cut so that they followed dominant fiber orientations. Accordingly, from each stomach, four samples were longitudinally oriented, and four samples were circumferentially oriented. A step-wise stress relaxation test with five incremental steps up to 25% strain, with 200 s rest periods for each step, was performed, followed by a 25% strain ramp test with three different strain rates. Theoretical analysis of the data provided stress-strain/time curves as well as 20 material parameters (e.g., stiffness coefficients, dissipative energy densities, and relaxation time coefficients) used for statistical comparisons between samples from the same stomach as well as between stomachs. Results showed that, for the 20 material parameters, heterogeneity across individuals, when extracting samples from the same area, was of the same order of variation as that of samples within the same stomach. For samples from the same stomach, the mean deviation percentage for all 20 parameters was 21% and 18% for longitudinal and circumferential orientations, compared to 25% and 19%, respectively, for samples across individuals. This observation was also supported by a nonparametric one-way ANOVA, whose results showed that the 20 material parameters from each of the six stomachs came from the same distribution (P > 0.05). Direction-dependency was also examined, and it was found that the maximum stress for longitudinal samples was significantly higher than for circumferential samples. However, there were no significant differences in the 20 material parameters, with the exception of the equilibrium stiffness coefficient (P = 0.0039) and two other stiffness coefficients found from the relaxation tests (P = 0.0065, 0.0374). Nor did the stomach tissue show any significant differences between the three strain rates used in the ramp test. Heterogeneity within the same region has not been examined before; yet the importance of the sampling area has been demonstrated in this study. All material parameters found are essential for understanding the passive mechanics of the stomach and may be used for mathematical and computational modeling. Additionally, an extension of the protocol used may be relevant for compiling a comparative study between the human stomach and the pig stomach.
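The nonparametric one-way ANOVA referred to above is commonly implemented as a Kruskal-Wallis test; a minimal sketch on invented values of a single material parameter (eight samples per stomach, three of the six stomachs shown) follows. The values are placeholders, not the measured parameters.

```python
# Illustrative Kruskal-Wallis test across stomachs for one material parameter;
# sample values are invented placeholders.
from scipy.stats import kruskal

stomach_1 = [12.1, 14.3, 11.8, 13.0, 12.6, 14.0, 13.2, 12.9]   # eight samples per stomach
stomach_2 = [13.5, 12.2, 14.1, 13.8, 12.7, 13.3, 14.4, 12.5]
stomach_3 = [11.9, 13.1, 12.8, 14.2, 13.6, 12.4, 13.0, 13.9]

H, p = kruskal(stomach_1, stomach_2, stomach_3)
print(f"H = {H:.2f}, p = {p:.3f}")   # p > 0.05 means a common distribution cannot be rejected
```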

Keywords: antrum region, gastric biomechanics, loading-unloading, stress relaxation, uniaxial tensile testing

Procedia PDF Downloads 432
376 Structural Balance and Creative Tensions in New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

New Product Development involves team members coming together and working in teams to come up with innovative solutions to problems, resulting in new products. Thus, a core attribute of a successful NPD team is its creativity and innovation. Teams need to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the user’s needs. They also need to be very efficient in their teamwork as they work through the various stages of the development of these ideas, resulting in a POC (proof-of-concept) implementation or a prototype of the product. There are two distinctive traits that the teams need to have: one is ideational creativity, and the other is effective and efficient teamworking. Each of these traits causes multiple types of tensions in teams, and these tensions are reflected in the team dynamics. Ideational conflicts arising out of debates and deliberations increase the collective knowledge and affect team creativity positively. However, the same trait of challenging each other’s viewpoints might lead team members to be disruptive, resulting in interpersonal tensions, which in turn lead to less than efficient teamwork. Teams that foster and effectively manage these creative tensions are successful, and teams that are not able to manage these tensions show poor team performance. In this paper, we explore these tensions as they manifest in the team communication social network and propose a Creative Tension Balance index, along the lines of the Degree of Balance in social networks, that has the potential to highlight the successful (and unsuccessful) NPD teams. Team communication reflects the team dynamics among team members and is the data set for analysis. The emails between the members of the NPD teams are processed through a semantic analysis algorithm (LSA) to analyze the content of communication, and through a semantic similarity analysis to arrive at a social network graph that depicts the communication amongst team members based on its content. This social network is subjected to traditional social network analysis methods to arrive at established metrics and structural balance analysis metrics. Traditional structural balance is extended to include team interaction pattern metrics to arrive at a creative tension balance metric that effectively captures the creative tensions and tension balance in teams. This CTB (Creative Tension Balance) metric captures the signatures of successful and unsuccessful (dissonant) NPD teams. The dataset for this research study includes 23 NPD teams spread over multiple semesters; the CTB metric is computed for each team and used to identify the most successful and unsuccessful teams by classifying them into low, medium and high performing teams. The results are correlated with team reflections (for team dynamics and interaction patterns), team self-evaluation feedback surveys (for teamwork metrics) and team performance through a comprehensive team grade (for high and low performing team signatures).
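A minimal sketch of the processing chain described above (emails to LSA, then semantic similarity, then a communication graph) is given below. The example emails, the member-to-document mapping, and the similarity threshold are assumptions for illustration, not the study's pipeline or data.

```python
# Hedged sketch: TF-IDF + truncated SVD (LSA) -> cosine similarity -> graph.
# Emails, sender mapping and threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
import networkx as nx

emails = [
    "draft of the concept sketch for the prototype housing",
    "feedback on the housing sketch and the sensor layout",
    "budget update and sprint schedule for next week",
]
senders = ["alice", "bob", "carol"]  # one aggregated document per team member (assumption)

tfidf = TfidfVectorizer().fit_transform(emails)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)  # LSA embedding
sim = cosine_similarity(lsa)                                             # semantic similarity

G = nx.Graph()
G.add_nodes_from(senders)
for i in range(len(senders)):
    for j in range(i + 1, len(senders)):
        if sim[i, j] > 0.3:                       # assumed similarity threshold for an edge
            G.add_edge(senders[i], senders[j], weight=float(sim[i, j]))
print(G.edges(data=True))                          # graph ready for structural balance analysis
```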

Keywords: team dynamics, social network analysis, new product development teamwork, structural balance, NPD teams

Procedia PDF Downloads 79
375 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times

Authors: John Dimopoulos

Abstract:

This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented with the critical speculations of the post-war era, took shape. The new era which we are now living in, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems, progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress are some of the most prevalent. Humans often have no other option than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields, such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts. These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as apparent in the cases of Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the wider public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.

Keywords: design, hypermodernity, object-oriented ontology, weapon-being

Procedia PDF Downloads 152
374 Ethical, Legal and Societal Aspects of Unmanned Aircraft in Defence

Authors: Henning Lahmann, Benjamyn I. Scott, Bart Custers

Abstract:

Suboptimal adoption of AI in defence organisations carries risks for the protection of the freedom, safety, and security of society. Despite the vast opportunities that defence AI technology presents, there are also a variety of ethical, legal, and societal concerns. To ensure the successful use of AI technology by the military, ethical, legal, and societal aspects (ELSA) need to be considered, and the concerns they raise continuously addressed at all levels. This includes ELSA considerations during the design, manufacturing and maintenance of AI-based systems, as well as their utilisation via appropriate military doctrine and training. This raises the question of how defence organisations can remain strategically competitive and at the edge of military innovation while respecting the values of their citizens. This paper will explain the set-up and share preliminary results of a 4-year research project commissioned by the National Research Council in the Netherlands on the ethical, legal, and societal aspects of AI in defence. The project plans to develop a future-proof, independent, and consultative ecosystem for the responsible use of AI in the defence domain. In order to achieve this, the lab shall devise a context-dependent methodology that focuses on the ‘analysis’, ‘design’ and ‘evaluation’ of ELSA of AI-based applications within the military context, which include, inter alia, unmanned aircraft. This is bolstered as the Lab also recognises and complements the existing methods with regard to human-machine teaming, explainable algorithms, and value-sensitive design. Such methods will be modified for the military context and applied to pertinent case studies. These case studies include, among others, the application of autonomous robots (including semi-autonomous ones) and AI-based methods against cognitive warfare. As the perception of the application of AI in the military context, by both society and defence personnel, is important, the Lab will study how these perceptions evolve and vary in different contexts. Furthermore, the Lab will monitor – as they may influence people’s perception – developments in the global technological, military and societal spheres. Although the emphasis of the research project is on different forms of AI in defence, it focuses on several case studies. One of these case studies is on unmanned aircraft, which will also be the focus of the paper. Hence, ethical, legal, and societal aspects of unmanned aircraft in the defence domain will be discussed in detail, including but not limited to privacy issues. Typical other issues concern security (for people, objects, data or other aircraft), privacy (sensitive data, hindrance, annoyance, data collection, function creep), chilling effects, PlayStation mentality, and PTSD.

Keywords: autonomous weapon systems, unmanned aircraft, human-machine teaming, meaningful human control, value-sensitive design

Procedia PDF Downloads 93
373 Analyzing Consumer Preferences and Brand Differentiation in the Notebook Market via Social Media Insights and Expert Evaluations

Authors: Mohammadreza Bakhtiari, Mehrdad Maghsoudi, Hamidreza Bakhtiari

Abstract:

This study investigates consumer behavior in the notebook computer market by integrating social media sentiment analysis with expert evaluations. The rapid evolution of the notebook industry has intensified competition among manufacturers, necessitating a deeper understanding of consumer priorities. Social media platforms, particularly Twitter, have become valuable sources for capturing real-time user feedback. In this research, sentiment analysis was performed on Twitter data gathered in the last two years, focusing on seven major notebook brands. The PyABSA framework was utilized to extract sentiments associated with various notebook components, including performance, design, battery life, and price. Expert evaluations, conducted using fuzzy logic, were incorporated to assess the impact of these sentiments on purchase behavior. To provide actionable insights, the TOPSIS method was employed to prioritize notebook features based on a combination of consumer sentiments and expert opinions. The findings consistently highlight price, display quality, and core performance components, such as RAM and CPU, as top priorities across brands. However, lower-priority features, such as webcams and cooling fans, present opportunities for manufacturers to innovate and differentiate their products. The analysis also reveals subtle but significant brand-specific variations, offering targeted insights for marketing and product development strategies. For example, Lenovo's strong performance in display quality points to a competitive edge, while Microsoft's lower ranking in battery life indicates a potential area for R&D investment. This hybrid methodology demonstrates the value of combining big data analytics with expert evaluations, offering a comprehensive framework for understanding consumer behavior in the notebook market. The study emphasizes the importance of aligning product development and marketing strategies with evolving consumer preferences, ensuring competitiveness in a dynamic market. It also underscores the potential for innovation in seemingly less important features, providing companies with opportunities to create unique selling points. By bridging the gap between consumer expectations and product offerings, this research equips manufacturers with the tools needed to remain agile in responding to market trends and enhancing customer satisfaction.
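For readers unfamiliar with the prioritisation step mentioned above, the sketch below shows how TOPSIS ranks alternatives from a weighted decision matrix. The feature list, scores, and weights are illustrative assumptions; they are not the study's sentiment or expert data.

```python
# Minimal TOPSIS sketch on assumed data: ranking notebook features by combining
# hypothetical sentiment scores and expert importance weights.
import numpy as np

features = ["price", "display", "CPU/RAM", "battery", "webcam", "cooling fan"]
# rows = features, columns = criteria (consumer sentiment, expert importance) - assumed values
X = np.array([
    [0.85, 0.90],
    [0.80, 0.85],
    [0.75, 0.88],
    [0.60, 0.70],
    [0.35, 0.40],
    [0.30, 0.45],
])
weights = np.array([0.6, 0.4])                      # assumed criterion weights

norm = X / np.sqrt((X ** 2).sum(axis=0))            # vector normalisation
v = norm * weights                                  # weighted normalised matrix
ideal, anti_ideal = v.max(axis=0), v.min(axis=0)    # both criteria treated as benefits
d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))     # distance to the ideal solution
d_neg = np.sqrt(((v - anti_ideal) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)                 # relative closeness, higher = better

for f, c in sorted(zip(features, closeness), key=lambda t: -t[1]):
    print(f"{f}: {c:.3f}")
```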

Keywords: consumer behavior, customer preferences, laptop industry, notebook computers, social media analytics, TOPSIS

Procedia PDF Downloads 24
372 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students

Authors: Majed A. Alsalem

Abstract:

In our contemporary world, literacy is an essential skill that enables students to increase their efficiency in managing the many assignments they receive that require understanding and knowledge of the world around them. In addition, literacy enhances student participation in society, improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and sharing of knowledge. Therefore, literacy needs to be studied and understood in its full range of contexts. It should be seen as a set of social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers will be the central focus of this study. Teachers’ main job is to increase students’ performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them to their instruction. Literacy from a wider view means acquisition of adequate and relevant reading skills that enable progression in one’s career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy has changed the traditional meaning of literacy, which is the ability to read and write. New literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term new literacy has received a lot of attention in the education field over the last few years. New literacy provides multiple ways of engagement, especially to those with disabilities and other diverse learning needs. For example, using a number of online tools in the classroom provides students with disabilities new ways to engage with the content, take in information, and express their understanding of this content. This study will provide teachers with the highest quality of training sessions to meet the needs of DHH students so as to increase their literacy levels. This study will build a platform between regular instructional designs and digital materials that students can interact with. The intervention applied in this study will be to train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM). Based on the power analysis conducted for this study, 98 teachers need to be included. Teachers will be chosen randomly to increase internal and external validity and to provide a representative sample of the population that this study aims to measure, providing a base for further studies. This study is still in progress, and the initial results are promising, showing how students have engaged with digital books.
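As a hedged illustration of how a required sample size like the 98 teachers cited above might be derived, the sketch below runs an a priori power analysis for a two-group comparison; the effect size, significance level, and power are assumptions, not values reported by the study.

```python
# Illustrative a priori power analysis for a two-group comparison; the effect size,
# alpha and power below are assumptions chosen only to show the mechanics.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.57, alpha=0.05, power=0.80,
                                    alternative="two-sided")
print(round(n_per_group))   # about 49 per group, i.e. roughly 98 teachers in total
```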

Keywords: deaf and hard of hearing, digital books, literacy, technology

Procedia PDF Downloads 489
371 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data

Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour

Abstract:

Background: Accelerometers are widely used to measure physical activity behavior, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study without relying on parameters derived from external populations offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), where this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model - HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with time estimates for each child’s sedentary (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children’s physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active long duration states (LAS, comparable to LPA), and two short-duration high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA and HIS overlapped with 60% of MVPA. We also looked at the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R²(HIS) = 0.50, R²(MVPA) = 0.28), cognitive ability (R²(HIS) = 0.31, R²(MVPA) = 0.15), and age (R²(HIS) = 0.15, R²(MVPA) = 0.09), indicating increased sensitivity to key attributes associated with a child’s mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behavior in diverse populations, compared to the current cut points approach. This, in turn, supports research that is more inclusive across diverse populations.
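A hedged, simplified stand-in for this segmentation step is sketched below using a plain Gaussian hidden Markov model from hmmlearn; a true hidden semi-Markov model additionally models state durations and would require a dedicated HSMM implementation. The signal is synthetic, and only the six-state configuration follows the description above.

```python
# Illustrative stand-in: unsupervised segmentation of a synthetic acceleration-magnitude
# trace into up to six intensity states with a Gaussian HMM (not the study's HSMM).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# synthetic 1-D acceleration magnitude trace, e.g. one recording at 1 sample per second
signal = np.concatenate([
    rng.normal(0.05, 0.02, 3000),   # mostly inactive
    rng.normal(0.30, 0.05, 2000),   # lightly active
    rng.normal(0.90, 0.10, 500),    # short high-intensity bursts
]).reshape(-1, 1)

model = GaussianHMM(n_components=6, covariance_type="diag", n_iter=50, random_state=0)
model.fit(signal)
states = model.predict(signal)                                # per-sample intensity state
minutes_per_state = np.bincount(states, minlength=6) / 60.0   # time spent in each state
print(minutes_per_state)
```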

Keywords: physical activity, machine learning, under 5s, disability, accelerometer

Procedia PDF Downloads 210
370 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies

Authors: Pragati Sirohi, Vivek Singh Rana

Abstract:

A brand is a name, term, sign, symbol or design, or a combination of these, intended to identify the goods or services of one seller or a group of sellers and to differentiate them from those of competitors. Perception influences purchase decisions, so building that perception is critical. The FMCG industry is a low margin business. Volumes hold the key to success in this industry. Therefore, the industry has a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing brands. Brand loyalty is fickle. Companies know this, and that is why they relentlessly work towards brand building. The purpose of the study is a comparison between Indian and multinational companies in the FMCG sector in India. It has been hypothesized that after liberalization, Indian companies have taken up the challenge of globalization and some of them are giving stiff competition to MNCs. There is a stronger brand image of MNCs compared to Indian companies, and advertisement expenditures of MNCs are proportionately higher compared to their Indian counterparts. The operational area of the study is the country as a whole. Continuous time series data are available from 1996-2014 for the eight selected companies. The selection of these companies is done on the basis of their large market share, brand equity and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). Estimation of the brand values of the selected FMCG companies is attempted, where brand value is taken as the excess of market capitalization over the net worth of a company. Brand value indices are calculated. The correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. Major results indicate that although MNCs enjoy a stronger brand image, some Indian companies compete well, with ITC the outstanding leader in terms of market capitalization and brand value. Dabur and Tata Global Beverages Ltd are competing equally well on these values. Advertisement expenditures are the highest for HUL, followed by ITC, Colgate and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditures play a role in the brand building process, there are many other factors which affect the process. Also, brand values are decreasing over the years for FMCG companies in India, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they have to consistently put proactive and relentless effort into their brand building process. Brands need focus and consistency. Brand longevity without innovation leads to brand respect but does not create brand value.
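The brand-value estimation described above (brand value taken as the excess of market capitalization over net worth, indexed and correlated with advertising spend) can be sketched as follows; the company figures are placeholders, not the Prowess data used in the study.

```python
# Sketch of the brand-value calculation and its correlation with advertising spend;
# all figures are invented placeholders for illustration only.
import pandas as pd

df = pd.DataFrame({
    "company": ["ITC", "HUL", "Dabur", "Tata Global Beverages"],
    "market_cap": [3200.0, 2900.0, 800.0, 450.0],   # assumed, INR billion
    "net_worth": [420.0, 70.0, 50.0, 65.0],         # assumed, INR billion
    "ad_spend": [9.0, 45.0, 7.0, 3.5],              # assumed, INR billion
})

df["brand_value"] = df["market_cap"] - df["net_worth"]            # excess of market cap over net worth
df["brand_value_index"] = df["brand_value"] / df["brand_value"].max() * 100

print(df[["company", "brand_value", "brand_value_index"]])
print("Correlation (brand value vs. ad spend):",
      round(df["brand_value"].corr(df["ad_spend"]), 3))
```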

Keywords: brand value, FMCG, market capitalization, net worth

Procedia PDF Downloads 356
369 A Sustainability Benchmarking Framework Based on the Life Cycle Sustainability Assessment: The Case of the Italian Ceramic District

Authors: A. M. Ferrari, L. Volpi, M. Pini, C. Siligardi, F. E. Garcia Muina, D. Settembre Blundo

Abstract:

A long tradition of ceramic manufacturing since the 18th century, primarily due to the availability of raw materials and an efficient transport system, led to the birth and development of the Italian ceramic tile district, which nowadays represents a reference point for this sector even at a global level. This economic growth has been coupled with attention to environmental sustainability issues through various initiatives undertaken over the years at the level of the production sector, such as certification activities and sustainability policies. In this way, starting from an evaluation of sustainability in all its aspects, the present work aims to develop a benchmarking framework helping both producers and consumers. In the present study, through the Life Cycle Sustainability Assessment (LCSA) framework, sustainability has been assessed in all its dimensions: environmental with the Life Cycle Assessment (LCA), economic with the Life Cycle Costing (LCC) and social with the Social Life Cycle Assessment (S-LCA). The annual district production of stoneware tiles during the 2016 reference year has been taken as the reference flow for all three assessments, and the system boundaries cover the entire life cycle of the tiles, except for the LCC, for which only the production costs have been considered at the moment. In addition, a preliminary method for the evaluation of local and indoor emissions has been introduced in order to assess the impact of atmospheric emissions on both workers and people living in the area surrounding the factories. The Life Cycle Assessment results, obtained with a modified IMPACT 2002+ assessment method, highlight that the manufacturing process is responsible for the main impact, especially because of atmospheric emissions at a local scale, followed by the distribution to end users, the installation and the ordinary maintenance of the tiles. With regard to the economic evaluation, both the internal and external costs have been considered. For the LCC, primary data from the analysis of the financial statements of Italian ceramic companies show that the highest cost items refer to expenses for goods and services and costs of human resources. The analysis of externalities with the EPS 2015dx method attributes the main damages to the distribution and installation of the tiles. The social dimension has been investigated with a preliminary approach using the Social Hotspots Database, and the results indicate that the most affected damage categories are health and safety, and labor rights and decent work. This study shows the potential of the LCSA framework applied to an industrial sector; in particular, it can be a useful tool for building a comprehensive benchmark for the sustainability of the ceramic industry, and it can help companies to actively integrate sustainability principles into their business models.

Keywords: benchmarking, Italian ceramic industry, life cycle sustainability assessment, porcelain stoneware tiles

Procedia PDF Downloads 127
368 Additive Manufacturing with Ceramic Filler

Authors: Irsa Wolfram, Boruch Lorenz

Abstract:

Innovative solutions with additive manufacturing applying material extrusion for functional parts necessitate innovative filaments with consistent quality. Uniform homogeneity and a consistent dispersion of particles embedded in filaments generally require multiple cycles of extrusion or well-prepared primal matter by injection molding, kneader machines, or mixing equipment. These technologies require dedicated equipment that is rarely at the disposal of production laboratories unfamiliar with research in polymer materials. This stands in contrast to laboratories that investigate complex material topics and technology science to leverage the potential of 3-D printing. Consequently, scientific studies in labs are often constrained to the compositions and concentrations of fillers offered on the market. Therefore, we introduce a prototypal laboratory methodology scalable to tailored primal matter for extruding ceramic composite filaments with fused filament fabrication (FFF) technology. A desktop single-screw extruder serves as the core device for the experiments. Custom-made filaments encapsulate the ceramic fillers and use polylactide (PLA), a thermoplastic polyester, as primal matter, which is processed in the melting area of the extruder, preserving the defined concentration of the fillers. Validated results demonstrate that this approach enables continuously produced and uniform composite filaments with consistent homogeneity. The filament is 3-D printable with controllable dimensions, which is a prerequisite for any scalable application. Additionally, digital microscopy confirms the steady dispersion of the ceramic particles in the composite filament. This permits a 2D reconstruction of the planar distribution of the embedded ceramic particles in the PLA matrices. The innovation of the introduced method lies in the smart simplicity of preparing the composite primal matter. It circumvents the inconvenience of numerous extrusion operations and expensive laboratory equipment. Nevertheless, it delivers consistent filaments of controlled, predictable, and reproducible filler concentration, which is the prerequisite for any industrial application. The introduced prototypal laboratory methodology seems capable of handling other polymer matrices and suitable for further particle types beyond ceramic fillers. This opens a roadmap for supplementary laboratory development of specialised composite filaments, providing value for industries and societies. This low-threshold entry to the sophisticated preparation of composite filaments - enabling businesses to create their own dedicated filaments - will support the mutual efforts towards establishing 3-D printing for new functional devices.

Keywords: additive manufacturing, ceramic composites, complex filament, industrial application

Procedia PDF Downloads 106
367 The Lonely Entrepreneur: Antecedents and Effects of Social Isolation on Entrepreneurial Intention and Output

Authors: Susie Pryor, Palak Sadhwani

Abstract:

The purpose of this research is to provide the foundations for a broad research agenda examining the role loneliness plays in entrepreneurship. While qualitative research in entrepreneurship incidentally captures the existence of loneliness as a part of the lived reality of entrepreneurs, to the authors’ knowledge, no academic work has to date explored this construct in this context. Moreover, many individuals reporting high levels of loneliness (women, ethnic minorities, immigrants, low income, low education) reflect those who are currently driving small business growth in the United States. Loneliness is a persistent state of emotional distress which results from feelings of estrangement and rejection or develops in the absence of social relationships and interactions. Empirical work finds links between loneliness and depression, suicide and suicide ideation, anxiety, hostility and passiveness, lack of communication and adaptability, shyness, poor social skills and unrealistic social perceptions, self-doubts, fear of rejection, and negative self-evaluation. Lonely individuals have been found to exhibit lower levels of self-esteem, higher levels of introversion, lower affiliative tendencies, less assertiveness, higher sensitivity to rejection, a heightened external locus of control, intensified feelings of regret and guilt over past events, and rigid and overly idealistic goals concerning the future. These characteristics are likely to impact entrepreneurs and their work. Research identifies some key dangers of loneliness. Loneliness damages human love and intimacy, can disturb and distract individuals from channeling creative and effective energies in a meaningful way, may result in the formation of premature, poorly thought out and at times even irresponsible decisions, and can produce hardened and desensitized individuals, with compromised health and quality of life concerns. The current study utilizes meta-analysis and text analytics to distinguish loneliness from other related constructs (e.g., social isolation) and to categorize antecedents and effects of loneliness across subpopulations. This work has the potential to materially contribute to the field of entrepreneurship by cleanly defining constructs and providing foundational background for future research. It offers a richer understanding of the evolution of loneliness and related constructs over the life cycle of entrepreneurial start-up and development. Further, it suggests preliminary avenues for exploration and methods of discovery that will result in knowledge useful to the field of entrepreneurship. It is useful both to entrepreneurs and those who work with them, as well as to academics interested in the topics of loneliness and entrepreneurship. The study adopts a grounded theory approach.

Keywords: entrepreneurship, grounded theory, loneliness, meta-analysis

Procedia PDF Downloads 112
366 Carbon Capture and Storage by Continuous Production of CO₂ Hydrates Using a Network Mixing Technology

Authors: João Costa, Francisco Albuquerque, Ricardo J. Santos, Madalena M. Dias, José Carlos B. Lopes, Marcelo Costa

Abstract:

Nowadays, it is well recognized that carbon dioxide emissions, together with other greenhouse gases, are responsible for the dramatic climate changes that have been occurring over the past decades. Gas hydrates are currently seen as a promising and disruptive set of materials that can be used as a basis for developing new technologies for CO₂ capture and storage. Their potential as a clean and safe pathway for CCS is tremendous, since they require only water and gas to be mixed under favorable temperatures and mild high pressures. However, the hydrate formation process is highly exothermic; it releases about 2 MJ per kilogram of CO₂, and it only occurs in a narrow window of operational temperatures (0 - 10 °C) and pressures (15 to 40 bar). Efficient continuous hydrate production within a specific temperature range necessitates high heat transfer rates in mixing processes. Past technologies often struggled to meet this requirement; inadequate heat transfer rates resulted in low productivity or extended mixing/contact times, which consistently posed a limitation. Consequently, there is a need for more effective continuous hydrate production technologies in industrial applications. In this work, a network mixing continuous production technology has been shown to be viable for producing CO₂ hydrates. The structured mixer used throughout this work consists of a network of unit cells comprising mixing chambers interconnected by transport channels. These mixing features result in enhanced heat and mass transfer rates and a high interfacial surface area. The mixer capacity emerges from the fact that, under proper hydrodynamic conditions, the flow inside the mixing chambers becomes a fully chaotic, self-sustained oscillatory flow, inducing intense local laminar mixing. The device presents specific heat transfer rates ranging from 10⁷ to 10⁸ W⋅m⁻³⋅K⁻¹. A laboratory-scale pilot installation was built using a device capable of continuously capturing 1 kg⋅h⁻¹ of CO₂ in an aqueous slurry of up to 20% by mass. The strong mixing intensity has proven to be sufficient to enhance dissolution and initiate hydrate crystallization without the need for external seeding mechanisms, and to achieve CO₂ conversions of 99% at the device outlet. CO₂ dissolution experiments revealed that the overall liquid mass transfer coefficient is orders of magnitude larger than in similar devices with the same purpose, ranging from 1 000 to 12 000 h⁻¹. The present technology has shown itself to be capable of continuously producing CO₂ hydrates. Furthermore, the modular characteristics of the technology, where scalability is straightforward, underline the potential development of a modular hydrate-based CO₂ capture process for large-scale applications.
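As a quick sanity check on the figures quoted above, the heat that must be removed to capture 1 kg⋅h⁻¹ of CO₂ as hydrate, at roughly 2 MJ released per kilogram, works out to about 0.56 kW:

```python
# Back-of-the-envelope cooling duty implied by the abstract's own figures
# (1 kg/h of CO2 captured, ~2 MJ released per kg of CO2 converted to hydrate).
co2_rate_kg_per_h = 1.0
hydrate_heat_mj_per_kg = 2.0

heat_duty_w = co2_rate_kg_per_h * hydrate_heat_mj_per_kg * 1e6 / 3600.0
print(f"Required heat removal ≈ {heat_duty_w:.0f} W")   # ≈ 556 W
```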

Keywords: network, mixing, hydrates, continuous process, carbon dioxide

Procedia PDF Downloads 52
365 Monitoring of Indoor Air Quality in Museums

Authors: Olympia Nisiforou

Abstract:

The cultural heritage of each country represents a unique and irreplaceable witness of the past. Nevertheless, on many occasions, such heritage is extremely vulnerable to natural disasters and reckless behaviors. Even if such exhibits are now located in museums, they still receive insufficient protection due to improper environmental conditions. These external changes can negatively affect the condition of the exhibits and contribute to inefficient maintenance over time. Hence, it is imperative to develop an innovative, low-cost system to monitor indoor air quality systematically, since conventional methods are quite expensive and time-consuming. The present study gives an insight into the indoor air quality of the National Byzantine Museum of Cyprus. In particular, systematic measurements of particulate matter, bio-aerosols, the concentrations of targeted chemical pollutants (including volatile organic compounds (VOCs)), temperature, relative humidity, and lighting conditions, as well as microbial counts, have been performed using conventional techniques. Measurements showed that most of the monitored physicochemical parameters did not vary significantly across the various sampling locations. Seasonal fluctuations of ammonia were observed, showing higher concentrations in summer and lower in winter. It was found that the outdoor environment does not significantly affect indoor air quality in terms of VOCs and nitrogen oxides (NOx). A cutting-edge portable Gas Chromatography-Mass Spectrometry (GC-MS) system (TORION T-9) was used to identify and measure the concentrations of specific volatile and semi-volatile organic compounds. A large number of different VOCs and SVOCs were found, such as benzene, toluene, xylene, ethanol, hexadecane, and acetic acid, as well as some more complex compounds such as 3-ethyl-2,4-dimethyl-isopropyl alcohol, 4,4'-biphenylene-bis-(3-aminobenzoate) and trifluoro-2,2-dimethylpropyl ester. Apart from the permanent indoor/outdoor sources (i.e., wooden frames, painted exhibits, carpets, the ventilation system and outdoor air) of the above organic compounds, the concentrations of some of them within the areas of the museum were found to increase when large groups of visitors were simultaneously present at a specific place within the museum. A high presence of particulate matter (PM), fungi and bacteria was found in the museum areas where carpets were present, but low colony counts were found in rooms where artworks are exhibited. The measurements mentioned above were used to validate an innovative low-cost air-quality monitoring system that has been developed within the present work. The developed system is able to monitor the average concentrations (on a bidaily basis) of several pollutants and presents several innovative features, including prompt alerting in case the average concentrations of monitored pollutants exceed the limit values defined by the user.

Keywords: exhibitions, indoor air quality, VOCs, pollution

Procedia PDF Downloads 123
364 SME Internationalisation and Its Financing: An Exploratory Study That Analyses Government Support and Funding Mechanisms for Irish and Scottish International SMEs

Authors: L. Spencer, S. O’ Donohoe

Abstract:

Much of the research to date on internationalisation relates to large firms, with much less known about how small and medium-sized enterprises (SMEs) engage in internationalisation. Given the crucial role of SMEs in contributing to economic growth, there is now an emphasis on the need for SMEs to internationalise. Yet little is known about how SMEs undertake and finance such expansion and whether or not internationalisation actually hinders or helps them in securing finance. The purpose of this research is to explore the internationalisation process for SMEs, the sources of funding used in financing this expansion and the support received from state agencies in assisting their overseas expansion. A conceptual framework has been devised which marries the two strands of literature together (internationalisation and financing the firm). The exploratory nature of this research dictates that the most appropriate methodology was to use semi-structured interviews with SME owners, bank representatives and support agencies. In essence, a triangulated approach to the research problem facilitates assessment of the perceptions and experiences of firms, the state and the financial institutions. Our sample is drawn from SMEs operating in Ireland and Scotland, two small but very open economies where SMEs are the dominant form of organisation. The sample includes a range of industry sectors. Key findings to date suggest some SMEs are born global; others are born-again global, whilst a significant cohort can be classed as traditional internationalisers. Unsurprisingly, there is a strong industry effect, with firms in the high-tech sector more likely to be faster internationalisers in contrast to those in the traditional manufacturing sectors. Owner-managers' own funds are deemed key to financing initial internationalisation, lending support to the financial growth life cycle model, albeit more important for the faster internationalisers than for the slower cohort, who are more likely to deploy external sources, especially bank finance. Retained earnings remain the predominant source of on-going financing for internationalising firms, but trade credit is often used and invoice discounting is utilised quite frequently. In terms of lending, asset-based lending backed by personal guarantees appears paramount for securing bank finance. Whilst a lack of diversified sources of funding for internationalising SMEs was found in both jurisdictions, there appears to be no evidence to suggest that internationalisation impedes firms in securing finance. Finally, state supports were cited as important to the internationalisation process; in particular, those provided by Enterprise Ireland were deemed very valuable. Considering the paucity of studies to date on SME internationalisation, and in particular the funding mechanisms deployed, this study seeks to contribute to the body of knowledge in both the international business and finance disciplines.

Keywords: funding, government support, international pathways, modes of entry

Procedia PDF Downloads 245
363 An Integrated Approach to Handle Sour Gas Transportation Problems and Pipeline Failures

Authors: Venkata Madhusudana Rao Kapavarapu

Abstract:

The Intermediate Slug Catcher (ISC) facility was built to process nominally 234 MSCFD of export gas from the booster station on a day-to-day basis and to receive liquid slugs up to 1600 m³ (10,000 BBLS) in volume when the incoming 24” gas pipelines are pigged following upsets or production of non-dew-pointed gas from gathering centers. The maximum slug sizes expected are 812 m³ (5,100 BBLS) in winter and 542 m³ (3,400 BBLS) in summer after operating for a month or more at 100 MMSCFD of wet gas, being 60 MMSCFD of treated gas from the booster station combined with 40 MMSCFD of untreated gas from the gathering center. The water content is approximately 60% but may be higher if the line is not pigged for an extended period, owing to the relative volatility of the condensate compared to water. In addition to its primary function as a slug catcher, the ISC facility will receive pigged liquids from the upstream and downstream segments of the 14” condensate pipeline, returned liquids from the AGRP, pigged through the 8” pipeline, and blown-down fluids from the 14” condensate pipeline prior to maintenance. These fluids will be received in the condensate flash vessel or the condensate separator, depending on the specific operation, for the separation of water and condensate and settlement of solids scraped from the pipelines. Condensate meeting the colour and 200 ppm water specifications will be dispatched to the AGRP through the 14” pipeline, while off-spec material will be returned to BS-171 via the existing 10” condensate pipeline. When they are not in operation, the existing 24” export gas pipeline and the 10” condensate pipeline will be maintained under export gas pressure, ready for operation. The gas manifold area contains the interconnecting piping and valves needed to align the slug catcher with either of the 24” export gas pipelines from the booster station and to direct the gas to the downstream segment of either of these pipelines. The manifold enables the slug catcher to be bypassed if it needs to be maintained or if through-pigging of the gas pipelines is to be performed. All gas, whether bypassing the slug catcher or returning to the gas pipelines from it, passes through black powder filters to reduce the level of particulates in the stream. These items are connected to the closed drain vessel to drain the collected liquid. Condensate from the booster station is transported to the AGRP through the 14” condensate pipeline. The existing 10” condensate pipeline will be used as a standby and for utility functions such as returning condensate from the AGRP to the ISC or booster station, or transporting off-spec fluids from the ISC back to the booster station. The manifold contains block valves that allow the two condensate export lines to be segmented at the ISC, thus facilitating bi-directional flow independently in the upstream and downstream segments, which ensures complete pipeline and facility integrity. Pipeline failures will be attended to with the latest technologies, such as remote techno-plug techniques, and repair activities will be carried out as needed. Pipeline integrity will be evaluated with in-line inspection (ILI) pigging to assess the pipeline condition.

Keywords: integrity, oil & gas, innovation, new technology

Procedia PDF Downloads 72
362 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

Artificial vision (computer vision) is a branch of artificial intelligence that allows the acquisition, processing, and analysis of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, as these processes can be realized through counting algorithms and the positioning and recognition of objects measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators on them for moving pieces from one location to another in their production. These devices must be programmed beforehand for good performance and must follow a programmed logic routine. Nowadays, production, quality, and the fast execution of the different stages and processes in the production chain of any product or service are the main targets of every industry. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each one with a different color, and to link it with a group of conveyor systems that sort the figures into cubicles, which also differ from one another by color. This project is based on artificial vision; therefore, the methodology needed to develop it must be strict, and it is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator, which is linked with OpenCV libraries. Together, these tools are used to implement the program that identifies colors and shapes directly from the camera on the computer. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's web camera or a different specialized camera. 1.3 The recognition of RGB colors is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes it is necessary to segment the images: the first step is converting the image from RGB to grayscale to work with the dark tones of the image; the image is then binarized, which leaves the figure in white on a black background. Finally, the contours of the figure in the image are found and the number of edges is counted to identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which through the actuators classify the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics on which any process can be performed. With the program developed for this project, any type of assembly line can be optimized, because images from the environment can be obtained and the process becomes more accurate.
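
The following minimal sketch (an illustration of the methodology above, not the authors' Qt Creator code; the threshold parameters and camera index are assumptions) shows how steps 1.2-1.4 can be realized with OpenCV in Python:

```python
# Illustrative sketch: acquire a frame, segment it, count contour vertices to name the figure,
# and read its dominant primary colour. Threshold parameters and camera index are assumed.
import cv2

def classify_frame(frame):
    # Step 1.4: RGB -> grayscale -> binary image (dark figure becomes white on a black background)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Find the largest external contour and count its approximated vertices
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
    shape = "triangle" if len(approx) == 3 else "square" if len(approx) == 4 else "circle"

    # Step 1.3: compare mean channel intensities inside the figure (OpenCV stores pixels as BGR)
    mask = binary * 0
    cv2.drawContours(mask, [contour], -1, 255, -1)
    b, g, r = cv2.mean(frame, mask=mask)[:3]
    color = ("red", "green", "blue")[[r, g, b].index(max(r, g, b))]
    return shape, color

# Step 1.2: image acquisition from the computer's web camera
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print(classify_frame(frame))  # e.g. ('triangle', 'red') -> drives the corresponding actuator
```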

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 378
361 The Impact of Encapsulated Raspberry Juice on the Surface Colour of Enriched White Chocolate

Authors: Ivana Loncarevic, Biljana Pajin, Jovana Petrovic, Aleksandar Fistes, Vesna Tumbas Saponjac, Danica Zaric

Abstract:

Chocolate is a complex rheological system usually defined as a suspension consisting of non-fat particles dispersed in cocoa butter as a continuous fat phase. Dark chocolate possesses polyphenols as major constituents whose dietary consumption has been associated with beneficial effects. Milk chocolate is formulated with a lower percentage of cocoa bean liquor than dark chocolate and often contains lower amounts of polyphenols, while in white chocolate the fat-free cocoa solids are left out completely. Following the current trend of development of functional foods, there is an idea to create enriched white chocolate with the addition of encapsulated bioactive compounds from berry fruits. The aim of this study was to examine the surface colour of enriched white chocolate with the addition of 6, 8, and 10% of raspberry juice encapsulated in maltodextrins, in order to preserve the stability, bioactivity, and bioavailability of the active ingredients. The surface colour of the samples was measured by a MINOLTA Chroma Meter CR-400 (Minolta Co., Ltd., Osaka, Japan) using D65 lighting, a 2° standard observer angle and an 8-mm aperture in the measuring head. The following CIELab colour coordinates were determined: L* (lightness), a* (redness to greenness) and b* (yellowness to blueness). The addition of the raspberry encapsulate led to the creation of a new type of enriched chocolate. The raspberry encapsulate changed the values of the lightness (L*), a* (red tone) and b* (yellow tone) measured on the surface of the enriched chocolate in accordance with the applied concentrations. White chocolate has the significantly (p < 0.05) highest L* (74.6) and b* (20.31) values of all samples, indicating the bright surface of the white chocolate as well as a high share of a yellow tone. At the same time, white chocolate has a negative a* value (-1.00) on its surface, which indicates green tones. The raspberry juice encapsulate has the darkest surface, with the significantly (p < 0.05) lowest L* value (42.75), and increasing its concentration in the enriched chocolates decreases their L* values. The chocolate with 6% of encapsulate has a significantly (p < 0.05) higher L* value (60.56) than the enriched chocolates with 8% of encapsulate (53.57) and 10% of encapsulate (51.01). The a* value measured on the surface of white chocolate is negative (-1.00), tending towards green tones. The raspberry juice encapsulate increases the red tone in the enriched chocolates in accordance with the added amounts (23.22, 30.85, and 33.32 in the enriched chocolates with 6, 8, and 10% encapsulated raspberry juice, respectively). The presence of yellow tones in the enriched chocolates significantly (p < 0.05) decreases with the addition of the encapsulate (b* value 5.21), from 10.01 in the enriched chocolate with the minimal amount of raspberry juice encapsulate to 8.91 in the chocolate with the maximum concentration of raspberry juice encapsulate. The addition of encapsulated raspberry juice to white chocolate led to the creation of a new type of enriched chocolate with an attractive colour. The research in this paper was conducted within the project titled ‘Development of innovative chocolate products fortified with bioactive compounds’ (Innovation Fund Project ID 50051).
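
For illustration only (this value is not reported by the authors), the reported coordinates can be combined into a CIE76 colour difference; between the white chocolate surface (L* = 74.6, a* = -1.00, b* = 20.31) and the 6% sample (L* = 60.56, a* = 23.22, b* = 10.01) this gives roughly:

```latex
\Delta E^{*}_{ab} = \sqrt{(74.6 - 60.56)^2 + (-1.00 - 23.22)^2 + (20.31 - 10.01)^2} \approx 29.8
```

a difference well above the commonly cited just-noticeable threshold of about 2.3, consistent with the visibly distinct colour of the enriched chocolate.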

Keywords: color, encapsulated raspberry juice, polyphenols, white chocolate

Procedia PDF Downloads 183
360 Self-Evaluation of the Foundation English Language Programme at the Center for Preparatory Studies Offered at the Sultan Qaboos University, Oman: Process and Findings

Authors: Meenalochana Inguva

Abstract:

The context: The Center for Preparatory Studies is one of the strongest and most vibrant academic teaching units of Sultan Qaboos University (SQU). The Foundation Programme English Language (FPEL) is part of a larger foundation programme which was implemented at SQU in fall 2010. The programme has been designed to prepare the students who have been accepted to study at the university to achieve the required educational goals (the learning outcomes) that have been designed according to the Oman Academic Standards published by the Omani Authority for Academic Accreditation (OAAA) for the English language component. The curriculum: At the CPS, the English language curriculum is based on the learning outcomes drafted for each level. These learning outcomes guide the students in meeting what is expected of them by the end of each level. The six levels are progressive in nature and are seen as a continuum. The study: A periodic evaluation of language programmes is necessary to improve the quality of the programmes and to meet their set goals. An evaluation may be carried out internally or externally, depending on the purpose and context. A self-study programme was initiated at the beginning of the spring semester 2015 with a team comprising a total of 11 members who worked within their assigned course areas (level and programme specific). Only areas specific to the FPEL have been included in the study. The study was divided into smaller tasks and members focused on their assigned courses. The self-study primarily focused on analyzing the programme LOs, curriculum planning, the materials used and their relevance against the GFP exit standards. The review team also reflected on the assessment methods and procedures followed to reflect on student learning. The team paid attention to having standard criteria for assessment and transparency in procedures. Special attention was paid to the staging of LOs across levels to determine students' language and study skills ability to cope with higher-level courses. Findings: The findings showed that most of the LOs are met through the materials used for teaching. Students score low on objective tests and high on subjective tests. Motivated students take advantage of academic support activities; others do not utilize the student support activities to their advantage. Reading should get more hours. In listening, the format of the listening materials in CT 2 does not match the test format. Some of the course materials need revision, e.g., APA citation and referencing. No specific time is allotted for teaching grammar. Conclusion: The findings resulted in actions being taken to bridge the gaps. They will also help the center to be better prepared for the external review of its FPEL curriculum and will provide a useful base for preparing the self-study portfolio for the GFP standards assessment and a future audit.

Keywords: curriculum planning, learning outcomes, reflections, self-evaluation

Procedia PDF Downloads 226
359 Gas Metal Arc Welding of Clad Plates API 5L X-60/316L Applying External Magnetic Fields during Welding

Authors: Blanca A. Pichardo, Victor H. Lopez, Melchor Salazar, Rafael Garcia, Alberto Ruiz

Abstract:

Clad pipes, in comparison to plain carbon steel pipes, offer the oil and gas industry high corrosion resistance, a reduction in economic losses due to pipeline failures and maintenance, lower labor risk, and the prevention of pollution and environmental damage due to hydrocarbon spills caused by deteriorated pipelines. In this context, it is paramount to establish reliable welding procedures to join bimetallic plates or pipes. Thus, the aim of this work is to study the microstructure and mechanical behavior of clad plates welded by the gas metal arc welding (GMAW) process. A clad of 316L stainless steel was deposited onto API 5L X-60 plates by overlay welding with the GMAW process. The welding parameters were: 22.5 V, 271 A, heat input 1.25 kJ/mm, shielding gas 98% Ar + 2% O₂, reverse polarity, torch displacement speed 3.6 mm/s, feed rate 120 mm/s, electrode diameter 1.2 mm, and application of an electromagnetic field of 3.5 mT. The overlay welds were subjected to macrostructural and microstructural characterization. After manufacturing the clad plates, a single-V groove joint was machined with a 60° bevel and a 1 mm root face. GMA welding of the bimetallic plates was performed in four passes with ER316L-Si filler for the root pass and an ER70S-6 electrode for the subsequent welding passes. For joining the clad plates, an electromagnetic field was applied with two purposes: to improve the microstructural characteristics and to stabilize the electric arc during welding in order to avoid magnetic arc blow. The welds were macro- and microstructurally characterized and the mechanical properties were also evaluated. Vickers microhardness (100 g load for 10 s) measurements were made across the welded joints at three levels. The first profile, at the 316L stainless steel cladding, was quite even, with a value of approximately 230 HV. The second microhardness profile showed high values in the weld metal, ~400 HV; this was due to the formation of a martensitic microstructure by dilution of the first welding pass with the second. The third profile crossed the third and fourth welding passes, and an average value of 240 HV was measured. In the tensile tests, the yield strength was between 400 and 450 MPa, with a tensile strength of ~512 MPa. In the Charpy impact tests, the results were 86 and 96 J for specimens with the notch in the face and in the root of the weld bead, respectively. The results of the mechanical properties were in the range of the API 5L X-60 base material. The overlay welding process used for cladding is not suitable for large components; however, it guarantees a metallurgical bond, unlike the most commonly used processes such as thermal expansion. For welding bimetallic plates, control of the temperature gradients is key to avoiding distortions. Besides, the dissimilar nature of the bimetallic plates gives rise to the formation of a martensitic microstructure during welding.
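
As a rough cross-check (not stated by the authors), the reported heat input is consistent with the standard arc heat input relation HI = ηUI/v if a typical GMAW arc efficiency of about η ≈ 0.74 is assumed:

```latex
HI = \frac{\eta \, U \, I}{v} = \frac{0.74 \times 22.5\,\mathrm{V} \times 271\,\mathrm{A}}{3.6\,\mathrm{mm/s}} \approx 1.25\ \mathrm{kJ/mm}
```

With η = 1 the same parameters give about 1.69 kJ/mm, so the arc efficiency factor is the assumption in this cross-check.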

Keywords: clad pipe, dissimilar welding, gas metal arc welding, magnetic fields

Procedia PDF Downloads 152
358 Nurturing Students' Creativity through Engagement in Problem Posing and Self-Assessment of Its Development

Authors: Atara Shriki, Ilana Lavy

Abstract:

In a rapidly changing technological society, creativity is considered an engine of economic and social progress. No doubt the education system has a central role in nurturing all students' creativity; however, creativity is normally not encouraged at school. The causes of this reality are related to a variety of circumstances, among them: external pressures to cover the curriculum and succeed in standardized tests that mostly require algorithmic thinking and implementation of rules; teachers' tendency to teach similarly to the way they themselves were taught as school students; relating creativity to giftedness, and therefore avoiding nurturing all students' creativity; lack of adequate learning materials and accessible tools for following and evaluating the development of students' creativity; and more. Since success in academic studies requires, among other things, creativity, lecturers in higher education institutions should consider appropriate ways to nurture students' creative thinking and assess its development. Obviously, creativity has a multifaceted nature, numerous definitions, various perspectives for studying its essence (e.g., process, personality, environment, and product), and several approaches aimed at evaluating and assessing creative expressions (e.g., cognitive, social-personal, and psychometric). In this framework, we suggest nurturing students' creativity through engaging them in problem posing activities that are part of inquiry assignments. In order to assess the development of their creativity, we propose to employ a model that was designed for this purpose, based on the psychometric approach, viewing the posed problems as the 'creative product'. The model considers four measurable aspects (fluency, flexibility, originality, and organization) as well as a total score of creativity that reflects the relative weights of each aspect. The scores given to learners are of two types: (1) total scores, the absolute number of posed problems with respect to each of the four aspects, and a final score of creativity; (2) relative scores, in which each absolute number is transformed into a number that relates to the relative infrequency of the posed problems in the student's reference group. By converting the scores received over time into a graphical display, students can assess their progress both with respect to themselves and relative to their reference group. Course lecturers can get a picture of the strengths and weaknesses of each student as well as of the class as a whole, and can track changes that occur over time in response to the learning environment they have generated. Such tracking may assist lecturers in making pedagogical decisions about the emphasis that should be put on one or more aspects of creativity, and about the students that should be given special attention. Our experience indicates that schoolteachers and lecturers in higher education institutes find that combining learners' engagement in problem posing with self-assessment of their progress, through the graphical display of accumulating total and relative scores, has the potential to realize most learners' creative potential.
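
The two score types can be illustrated with the minimal sketch below; it is only one possible reading of the model, and the aspect weights and the infrequency transformation are assumptions for illustration, not the authors' published coefficients:

```python
# Illustrative sketch of the scoring scheme: a weighted total creativity score over the four
# aspects, and a relative score that rewards problems that are infrequent in the reference group.
from collections import Counter

ASPECTS = ("fluency", "flexibility", "originality", "organization")
WEIGHTS = {"fluency": 0.2, "flexibility": 0.25, "originality": 0.35, "organization": 0.2}  # assumed weights

def total_creativity(counts: dict) -> float:
    """counts: absolute number of posed problems credited to each aspect."""
    return sum(WEIGHTS[a] * counts[a] for a in ASPECTS)

def relative_score(student_problems, reference_group_problems) -> float:
    """Weight each posed problem by its relative infrequency in the reference group."""
    freq = Counter(reference_group_problems)
    n = len(reference_group_problems)
    return sum(1 - freq[p] / n for p in student_problems)

# Example: a student's aspect counts, and two posed problems of which one is rare in the class
counts = {"fluency": 6, "flexibility": 3, "originality": 2, "organization": 5}
print(total_creativity(counts))
print(relative_score(["p1", "p7"], ["p1"] * 10 + ["p7"]))  # rarer problems contribute more
```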

Keywords: creativity, problem posing, psychometric model, self-assessment

Procedia PDF Downloads 319
357 A Qualitative Study to Analyze Clinical Coders’ Decision Making Process of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Clinical coding is a feasible method for estimating the national prevalence of adverse drug event (ADE) admissions. However, under-coding of ADE admissions is a limitation of this method. Whilst under-coding will impact the accurate estimation of the actual burden of ADEs, the feasibility of coded data for estimating adverse drug event admissions goes much further than that of the other methods. Therefore, it is necessary to know the reasons for the under-coding in order to improve the clinical coding of ADE admissions. The ability to identify the reasons for the under-coding of ADE admissions rests on understanding the decision-making process of coding ADE admissions. Hence, the current study aimed to explore the decision-making process of clinical coders when coding cases of ADE admissions. Clinical coders at different levels of the coding job, such as trainee, intermediate and advanced level coders, were purposively selected for the interviews. Thirteen clinical coders were recruited from two Auckland region District Health Board hospitals for the interview study. Semi-structured, one-on-one, face-to-face interviews using open-ended questions were conducted with the selected clinical coders. Interviews were about 20 to 30 minutes long and were audio-recorded with the approval of the participants. The interview data were analysed using a general inductive approach. The interviews with the clinical coders revealed that the coders have targets to meet, and they sometimes hesitate to adhere to the coding standards. Coders deviate from the standard coding processes to make a decision. Coders avoid contacting the doctors to clarify small doubts, such as ADEs and the names of medications, because of the delay in getting a reply from the doctors. They prefer to do some research themselves or take help from their seniors and colleagues when making a decision, because they can thereby avoid a long wait for a reply from the doctors. Coders think of an ADE as a small thing. Lack of time for searching for information to confirm an ADE admission and inadequate communication with clinicians, along with coders' belief that an ADE is a small thing, may contribute to the under-coding of ADE admissions. These findings suggest that further work is needed on interventions to improve the clinical coding of ADE admissions. Providing education to coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, availing of pharmacists' services to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases may be useful for improving the clinical coding of ADE admissions. The findings of the research will help policymakers to make informed decisions about the improvements. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and short-cuts of clinical coders. This country-specific research, conducted in New Zealand, may also benefit other countries by providing insight into the clinical coding of ADE admissions and will offer guidance about where to focus changes and improvement initiatives.

Keywords: adverse drug events, clinical coders, decision making, hospital admissions

Procedia PDF Downloads 120
356 Audit and Assurance Program for AI-Based Technologies

Authors: Beatrice Arthur

Abstract:

The rapid development of artificial intelligence (AI) has transformed various industries, enabling faster and more accurate decision-making processes. However, with these advancements come increased risks, including data privacy issues, systemic biases, and challenges related to transparency and accountability. As AI technologies become more integrated into business processes, there is a growing need for comprehensive auditing and assurance frameworks to manage these risks and ensure ethical use. This paper provides a literature review on AI auditing and assurance programs, highlighting the importance of adapting traditional audit methodologies to the complexities of AI-driven systems. Objective: The objective of this review is to explore current AI audit practices and their role in mitigating risks, ensuring accountability, and fostering trust in AI systems. The study aims to provide a structured framework for developing audit programs tailored to AI technologies while also investigating how AI impacts governance, risk management, and regulatory compliance in various sectors. Methodology: This research synthesizes findings from academic publications and industry reports from 2014 to 2024, focusing on the intersection of AI technologies and IT assurance practices. The study employs a qualitative review of existing audit methodologies and frameworks, particularly the COBIT 2019 framework, to understand how audit processes can be aligned with AI governance and compliance standards. The review also considers real-time auditing as an emerging necessity for influencing AI system design during early development stages. Outcomes: Preliminary findings indicate that while AI auditing is still in its infancy, it is rapidly gaining traction as both a risk management strategy and a potential driver of business innovation. Auditors are increasingly being called upon to develop controls that address the ethical and operational risks posed by AI systems. The study highlights the need for continuous monitoring and adaptable audit techniques to handle the dynamic nature of AI technologies. Future Directions: Future research will explore the development of AI-specific audit tools and real-time auditing capabilities that can keep pace with evolving technologies. There is also a need for cross-industry collaboration to establish universal standards for AI auditing, particularly in high-risk sectors like healthcare and finance. Further work will involve engaging with industry practitioners and policymakers to refine the proposed governance and audit frameworks. Funding/Support Acknowledgements: This research is supported by the Information Systems Assurance Management Program at Concordia University of Edmonton.

Keywords: AI auditing, assurance, risk management, governance, COBIT 2019, transparency, accountability, machine learning, compliance

Procedia PDF Downloads 24
355 The Institutional Change Occurring in the Chinese Sport Sector: A Case Study on the Chinese Football Association Reform

Authors: Qi Peng

Abstract:

The Chinese sport sector is currently undergoing a dramatic institutional change. A sport system that was heavily dominated by the government is starting to shift towards one that is driven by the market. During the past sixty years, the Chinese Football Association (CFA), although ostensibly a 'non-governmental organization', has in fact operated under the close supervision and control of the government. The double identity of the CFA has taken most of the blame for the poor performance of the Chinese football teams, especially the men's team. In 2015, a policy initiated by the Chinese government introduced a potentially radical change to the institutional structure of the CFA by delegating the power of the government agency, the General Administration of Sport of China, to the organization (CFA) itself. Against this background, an overarching research question was brought up: will an organization that has remained institutionalized within the system change in response to an external (policy) jolt? To answer this question, three principal data collection methods were employed: document review, participant observation and semi-structured interviews. Document review provides a mapping of the structural and cultural framework in which the CFA functions during the change process. The author has had the chance to interact closely with the organization as a participant observer for a period of time long enough to collect the data, but not so long as to form a biased view of the situation. This stage enabled the author to gain an in-depth understanding of how the CFA managed to restructure its governance and legitimacy. Conducting semi-structured interviews with staff within the CFA and with staff from selected stakeholders of the CFA also provided a crucial step in gaining an insight into the factors for change as well as the implications of the change. The wide range of interviewees that have been and are to be interviewed includes: CFA members (senior officials and staff); local football association members; senior Chinese Super League football club managers; CFA Super League Co., LTD (senior officials and staff); CSL broadcasters; and Chinese Olympic Committee members. The preliminary research data show that the CFA is currently undergoing two levels of change: although the settings of the CFA have been gradually restructured (organizational framework), the organizational values and beliefs remain almost the same as those of the CFA before the reform. This means that the plan of shifting from a governmental agency to an autonomous association is an ongoing process, and that organizational core beliefs and values are more difficult to change than the structural framework. This is due to the inertia of the organizational history and the effect of institutionalization. The Chinese Football Association is looked at as a pioneering sport organization in China in undertaking the 'decoupling' road. It is believed that many other sport organizations, especially sport governing bodies, will follow the steps of the CFA in the near future. Therefore, the experience of the CFA change is worth studying.

Keywords: Chinese Football Association, organizational change, organizational culture, structural framework

Procedia PDF Downloads 344
354 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data-set and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-Lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, Sklearn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
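
The adaptive retraining idea can be illustrated with the minimal sketch below; it is not FreqAI's actual API, and the column names, indicator choices, window sizes and prediction horizon are assumptions made for the example:

```python
# Illustrative sketch: engineer simple indicator features, retrain a regressor on a sliding
# window of recent candles, and predict the short-term forward return. Not FreqAI's API.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def make_features(df: pd.DataFrame) -> pd.DataFrame:
    feats = pd.DataFrame(index=df.index)
    feats["ret_1"] = df["close"].pct_change()                          # last-candle return
    feats["sma_ratio"] = df["close"] / df["close"].rolling(20).mean()  # simple trend feature
    feats["volatility"] = feats["ret_1"].rolling(20).std()             # simple risk feature
    return feats

def adaptive_predict(df: pd.DataFrame, train_window: int = 500, horizon: int = 12) -> float:
    """Retrain on the most recent `train_window` candles; predict the return `horizon` candles ahead."""
    feats = make_features(df)
    target = df["close"].pct_change(horizon).shift(-horizon)           # future return used as the label
    data = pd.concat([feats, target.rename("target")], axis=1).dropna()
    train = data.iloc[-train_window:]                                  # sliding window of recent data
    model = GradientBoostingRegressor()
    model.fit(train[feats.columns], train["target"])
    return float(model.predict(feats.iloc[[-1]].fillna(0.0))[0])       # a probabilistic edge, not a certainty

# Usage: df is an OHLCV DataFrame with a 'close' column; enter a position only if the predicted
# return clears a risk-adjusted threshold, and retrain periodically as new candles arrive.
```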

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 89
353 What Is At Stake When Developing and Using a Rubric to Judge Chemistry Honours Dissertations for Entry into a PhD?

Authors: Moira Cordiner

Abstract:

As a result of an Australian university approving a policy to improve the quality of assessment practices, the author commenced work in 2008 as an academic developer (AD) with expertise in criterion-referenced assessment. The four-year appointment was to support 40 'champions' in their Schools. This presentation is based on the experiences of a group of Chemistry academics who worked with the AD to develop and implement an honours dissertation rubric. Honours is a research year following a three-year undergraduate degree. If the standard of the student's work is high enough (mainly the dissertation), then the student can commence a PhD. What became clear during the process was that much more was at stake than just the successful development and trial of the rubric, including academics' reputations, university rankings and research outputs. Working with the champion Head of School (HOS) and the honours coordinator, the AD helped them adapt an honours rubric that she had helped create and trial successfully for another Science discipline. A year of many meetings and complex power plays between the two academics finally resulted in a version that was critiqued by the Chemistry teaching and learning committee. Accompanying the rubric was an explanation of grading rules plus a list of supervisor expectations to explain to students how the rubric was used for grading. Further refinements were made until all staff were satisfied. It was trialled successfully in 2011, and then small changes were made. It was adapted and implemented for Medicine honours with her help in 2012. Despite coming to a consensus about the statements of quality in the rubric, a few academics found it challenging to match these to the dissertations and allocate a grade. They had had no time to undertake training to do this, or to make overt the implicit criteria and standards which some admitted they were using ('I know what a first class is'). Other factors affecting grading included: the small School, where all supervisors knew each other and the students, meant that friendships and collegiality were at stake if low grades were given; no external examiners were appointed (all were internal), with the potential for bias; supervisors' reputations were at stake if their students did not receive a good grade; the School's reputation was also at risk if insufficient honours students qualified for PhD entry; and research output was jeopardised without enough honours students to work on supervisors' projects. A further complication during the study was a restructure of the university and retrenchments, with pressure to increase research output as world rankings assumed greater importance to senior management. In conclusion, much more was at stake than developing a usable rubric. The HOS had to be seen to champion the 'new' assessment practice while balancing institutional demands for increased research output and ensuring as many honours dissertations as possible met high standards, so that eventually the percentage of PhD completions and research output rose. It is therefore in the institution's best interest for this cycle to be maintained, as it affects rankings and reputations. In this context, are rubrics redundant?

Keywords: explicit and implicit standards, judging quality, university rankings, research reputations

Procedia PDF Downloads 336
352 The Effectiveness of Using Dramatic Conventions as the Teaching Strategy on Self-Efficacy for Children With Autism Spectrum Disorder

Authors: Tso Sheng-Yang, Wang Tien-Ni

Abstract:

Introduction and Purpose: Previous researchers have documented that children with ASD (autism spectrum disorder) prefer to escape internal and external private events when they face tough conditions that they cannot control or do not like. In particular, when children with ASD need to learn challenging tasks, such as the Chinese language, their inappropriate behaviors become apparent. Recently, researchers have applied positive behavior support strategies to children with ASD to enhance their self-efficacy and thereby reduce their adverse behaviors. Thus, the purpose of this research was to design a series of lectures based on art therapy and to evaluate their effectiveness on the child's self-efficacy. Method: This research was a single-case design study that recruited a high school boy with ASD. The whole research can be separated into three conditions. First, in the baseline condition, the researcher collected the participant's self-efficacy scores in every session, before the class started and after it ended. In the intervention condition, the researcher used dramatic conventions to teach the child Chinese language twice a week. When the data were stable across three data points, the study entered the maintenance condition. In the maintenance condition, the researcher only collected the self-efficacy score, without delivering other interventions, five times a month to represent the effectiveness of maintenance. The time and frequency of data collection among the three conditions were identical. Concerning art therapy, the common approach (e.g., music, drama, or painting) is to use the art medium as the independent variable. Owing to the visual cues of the art medium, children with ASD can more easily gain joint attention with teachers. Besides, children with ASD have difficulties in understanding abstract objectives. Thus, using dramatic conventions is helpful for children with ASD to construct the environment and understand the context of Classical Chinese. Through real enactment, it can help them understand the context and construct prior knowledge. Results: Based on the 10-point Likert scale used in the research, we produced the following results. (a) In the baseline condition, the average self-efficacy score is 1.12 points, ranging from 1 to 2 points, and the level change is 0 points. (b) In the intervention condition, the average self-efficacy score is 7.66 points, ranging from 7 to 9 points, and the level change is 1 point. (c) In the maintenance condition, the average self-efficacy score is 6.66 points, ranging from 6 to 7 points, and the level change is 1 point. Concerning the immediacy of change between the baseline and intervention conditions, the difference is 5 points, and no overlaps were found between these two conditions. Conclusion: According to the results, we find that using dramatic conventions as a teaching strategy for children with ASD is effective. The results show that the self-efficacy score increases immediately when the dramatic conventions commence. Thus, we suggest that teachers can use this approach, adjusted to the student's traits, to teach children with ASD difficult tasks.

Keywords: dramatic conventions, autism spectrum disorder, self-efficacy, teaching strategy

Procedia PDF Downloads 83