Search results for: performance and quality
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21074

1004 The Recorded Interaction Task: A Validation Study of a New Observational Tool to Assess Mother-Infant Bonding

Authors: Hannah Edwards, Femke T. A. Buisman-Pijlman, Adrian Esterman, Craig Phillips, Sandra Orgeig, Andrea Gordon

Abstract:

Mother-infant bonding is a term which refers to the early emotional connectedness between a mother and her infant. Strong mother-infant bonding promotes higher quality mother and infant interactions, including prolonged breastfeeding, secure attachment, and increased sensitive parenting and maternal responsiveness. Strengthening of all such interactions leads to improved social behavior, and emotional and cognitive development throughout childhood, adolescence and adulthood. The positive outcomes observed following strong mother-infant bonding emphasize the need to screen new mothers for disrupted mother-infant bonding, and in turn the need for a robust, valid tool to assess mother-infant bonding. A recent scoping review conducted by the research team identified four tools to assess mother-infant bonding, all of which employed self-rating scales. Thus, whilst these tools demonstrated both adequate validity and reliability, they rely on self-reported information from the mother. As such, they may reflect a mother’s perception of bonding with her infant, rather than her actual behavior. Therefore, a new tool to assess mother-infant bonding has been developed. The Recorded Interaction Task (RIT) addresses shortcomings of previous tools by employing observational methods to assess bonding. The RIT focuses on the common interaction between mother and infant of changing a nappy, at the target age of 2-6 months, which is visually recorded and then later assessed. Thirteen maternal and seven infant behaviors are scored on the RIT Observation Scoring Sheet, and a final combined score of mother-infant bonding is determined. The aim of the current study was to assess the content validity and inter-rater reliability of the RIT. A panel of six experts with specialized expertise in bonding and infant behavior was consulted. Experts were provided with the RIT Observation Scoring Sheet, a visual recording of a nappy change interaction, and a feedback form. Experts scored the mother and infant interaction on the RIT Observation Scoring Sheet and completed the feedback form, which collected their opinions on the validity of each item on the RIT Observation Scoring Sheet and the RIT as a whole. Twelve of the 20 items on the RIT Observation Scoring Sheet were scored ‘Valid’ by all (n=6) or most (n=5) experts. Two items received a ‘Not valid’ score from one expert. The remainder of the items received a mixture of ‘Valid’ and ‘Potentially Valid’ scores. A few changes were made to the RIT Observation Scoring Sheet following expert feedback, including rewording of items for clarity and the exclusion of an item focusing on behavior deemed not relevant for the target infant age. The overall ICC for single-rater absolute agreement was 0.48 (95% CI 0.28 – 0.71). Experts’ (n=6) ratings were less consistent for infant behavior (ICC 0.27 (-0.01 – 0.82)) than for mother behavior (ICC 0.55 (0.28 – 0.80)). Whilst previous tools employ self-report methods to assess mother-infant bonding, the RIT utilizes observational methods. The current study highlights adequate content validity and moderate inter-rater reliability of the RIT, supporting its use in future research. A convergent validity study comparing the RIT against an existing tool is currently being undertaken to confirm these results.
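
As an illustration of the reliability statistic reported above, the following minimal Python sketch (not the authors' code; the column names and ratings are hypothetical) computes the ICC for single-rater absolute agreement with the pingouin package:

```python
# A minimal sketch of computing the ICC for single-rater absolute agreement.
# Column names ('item', 'expert', 'score') and the ratings are hypothetical.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "item":   [1, 1, 1, 2, 2, 2, 3, 3, 3],   # RIT items being scored
    "expert": ["A", "B", "C"] * 3,           # the study used six experts; three shown here
    "score":  [2, 3, 2, 1, 1, 2, 3, 3, 3],
})

icc = pg.intraclass_corr(data=ratings, targets="item",
                         raters="expert", ratings="score")
# The ICC2 row ('Single random raters') corresponds to single-rater absolute agreement
print(icc[icc["Type"] == "ICC2"][["Type", "ICC", "CI95%"]])
```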

Keywords: content validity, inter-rater reliability, mother-infant bonding, observational tool, recorded interaction task

Procedia PDF Downloads 182
1003 Identifying Risk Factors for Readmission Using Decision Tree Analysis

Authors: Sıdıka Kaya, Gülay Sain Güven, Seda Karsavuran, Onur Toka

Abstract:

This study is part of an ongoing research project supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 114K404, and participation in this conference was supported by Hacettepe University Scientific Research Coordination Unit under Project Number 10243. Evaluation of hospital readmissions is gaining importance in terms of quality and cost, and is becoming the target of national policies. In Turkey, the topic of hospital readmission is relatively new on the agenda, and very few studies have been conducted on this topic. The aim of this study was to determine 30-day readmission rates and risk factors for readmission. Whether readmission was planned, related to the prior admission, and avoidable or not was also assessed. The study was designed as a ‘prospective cohort study.’ 472 patients hospitalized in internal medicine departments of a university hospital in Turkey between February 1, 2015 and April 30, 2015 were followed up. Analyses were conducted using IBM SPSS Statistics version 22.0 and SPSS Modeler 16.0. The average age of the patients was 56, and 56% of the patients were female. Among these patients, 95 were readmitted. The overall readmission rate was calculated as 20% (95/472). However, only 31 readmissions were unplanned. The unplanned readmission rate was 6.5% (31/472). Of the 31 unplanned readmissions, 24 were related to the prior admission, and only 6 of these related readmissions were avoidable. To determine risk factors for readmission, we constructed a chi-square automatic interaction detector (CHAID) decision tree. CHAID decision trees are nonparametric procedures that make no assumptions about the underlying data. The algorithm determines how independent variables best combine to predict a binary outcome based on ‘if-then’ logic, by partitioning each independent variable into mutually exclusive subsets based on homogeneity of the data. The independent variables we included in the analysis were: clinic of the department, occupied beds/total number of beds in the clinic at the time of discharge, age, gender, marital status, educational level, distance to residence (km), number of people living with the patient, any person to help with his/her care at home after discharge (yes/no), regular source (physician) of care (yes/no), day of discharge, length of stay, ICU utilization (yes/no), total comorbidity score, means for each of the 3 dimensions of the Readiness for Hospital Discharge Scale (patient’s personal status, patient’s knowledge, and patient’s coping ability), and number of daycare admissions within 30 days of discharge. In the analysis, we included all 95 readmitted patients (46.12%) but only 111 (53.88%) of the 377 non-readmitted patients, in order to balance the data. The risk factors for readmission were found to be total comorbidity score, gender, patient’s coping ability, and patient’s knowledge. The strongest identifying factor for readmission was the comorbidity score: if a patient’s comorbidity score was higher than 1, the risk for readmission increased. The results of this study need to be validated with other datasets containing more patients. However, we believe that this study will guide further studies of readmission and that CHAID is a useful tool for identifying risk factors for readmission.
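
A minimal sketch of the split-selection logic underlying CHAID follows (the study itself used SPSS Modeler 16.0; the variable names are hypothetical): each candidate predictor is cross-tabulated against the binary readmission outcome, and the predictor with the most significant chi-square statistic is chosen for the split.

```python
# A minimal sketch of CHAID's chi-square split selection, not the SPSS implementation.
import pandas as pd
from scipy.stats import chi2_contingency

def best_chaid_split(df: pd.DataFrame, outcome: str, predictors: list[str]) -> str:
    """Return the predictor whose categories best separate the binary outcome."""
    p_values = {}
    for col in predictors:
        table = pd.crosstab(df[col], df[outcome])   # contingency table
        chi2, p, dof, _ = chi2_contingency(table)
        p_values[col] = p
    return min(p_values, key=p_values.get)          # smallest p-value wins the split

# Hypothetical usage:
# best_chaid_split(patients, "readmitted",
#                  ["comorbidity_score", "gender", "coping", "knowledge"])
```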

Keywords: decision tree, hospital, internal medicine, readmission

Procedia PDF Downloads 258
1002 Investigating the Thermal Comfort Properties of Mohair Fabrics

Authors: Adine Gericke, Jiri Militky, Mohanapriya Venkataraman

Abstract:

Mohair, obtained from the Angora goat, is a luxury fiber and recognized as one of the best quality natural fibers. Expansion of the use of mohair into technical and functional textile products necessitates a better understanding of how the use of mohair in fabrics will impact their thermo-physiological comfort-related properties. Despite its popularity, very little information is available on the quantification of the thermal and moisture management properties of mohair fabrics. This study investigated the effect of fibrous matter composition and fabric structural parameters on conductive and convective heat transfer to attain more information on the thermal comfort properties of mohair fabrics. Dry heat transfer through textiles may involve conduction through the fibrous phase, radiation through fabric interstices, and convection of air within the structure. Factors that play a major role in heat transfer by conduction are fabric areal density (g/m²) and derived quantities such as cover factor and porosity. Convective heat transfer through fabrics is found in environmental conditions where there is wind flow or the object is moving (e.g., running or walking). The thermal comfort properties of mohair fibers were objectively evaluated, firstly in comparison with other textile fibers and secondly in a variety of fabric structures. Two sample sets were developed for this purpose, with fiber content, yarn structure, and fabric design as the main variables. SEM and microscopic images were obtained to closely examine the physical structures of the fibers and fabrics. Thermal comfort properties such as thermal resistance and thermal conductivity, as well as fabric thickness, were measured on the well-known Alambeta test instrument. Clothing insulation (clo) was calculated from the above. The thermal properties of fabrics under heat convection were evaluated using a laboratory model device developed at the Technical University of Liberec (referred to as the TP2 instrument). The effects of the different variables on fabric thermal comfort properties were analyzed statistically using TIBCO Statistica software. The results showed that fabric structural properties, specifically sample thickness, played a significant role in determining the thermal comfort properties of the fabrics tested. It was found that, regarding thermal resistance related to conductive heat flow, the effect of fiber type was not always statistically significant, probably as a result of the amount of trapped air within the fabric structure. The very low thermal conductivity of air, compared to that of the fibers, had a significant influence on the total conductivity and thermal resistance of the samples. This was confirmed by the high correlation of these factors with sample thickness. Regarding convective heat flow, the most important factor influencing the ability of the fabric to allow dry heat to move through the structure was again fabric thickness. However, it would be wrong to totally disregard the effect of fiber composition on the thermal resistance of textile fabrics. In this study, the samples containing mohair or mohair/wool were consistently thicker than the others even though weaving parameters were kept constant. This can be ascribed to the physical properties of the mohair fibers, which render them exceptionally effective at trapping air among fibers (in a yarn) as well as among yarns (inside a fabric structure). The thicker structures trap more air to provide higher thermal insulation, but also prevent the free flow of air that allows thermal convection.
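
For illustration, the derived quantities mentioned above can be computed as in the following minimal Python sketch. The sample values are illustrative, not the study's measurements; the conversion 1 clo = 0.155 m²·K/W is the standard definition.

```python
# A minimal sketch of deriving thermal resistance and clothing insulation (clo)
# from Alambeta-style measurements of thickness and thermal conductivity.
# Sample values are illustrative assumptions, not the study's data.
thickness_m = 0.0012            # fabric thickness h (m)
conductivity = 0.045            # thermal conductivity lambda (W/m*K)

resistance = thickness_m / conductivity   # R = h / lambda  (m^2*K/W)
clo = resistance / 0.155                  # 1 clo = 0.155 m^2*K/W
print(f"R = {resistance:.4f} m^2*K/W, insulation = {clo:.3f} clo")
```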

Keywords: mohair fabrics, convective heat transfer, thermal comfort properties, thermal resistance

Procedia PDF Downloads 147
1001 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax and metaprogramming capabilities with high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale GreenLab model for the latter is thus presented, in which the surface areas of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data for a single individual to be available at all times, nor for the times at which data are available to be the same for all the different individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana’s growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
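
The sampler structure described above can be sketched as follows. This is a minimal Python illustration (the authors implemented theirs in Julia, around the GreenLab model) with a toy nonlinear growth curve and toy priors: individual parameters receive Metropolis updates, while the population mean and variance have conjugate Gibbs updates.

```python
# A minimal Metropolis-within-Gibbs sketch for a two-level hierarchical model.
# The growth curve f(), priors, and step size are toy assumptions, not GreenLab.
import numpy as np

rng = np.random.default_rng(0)

def f(theta, t):
    return np.exp(theta) * t / (1.0 + t)       # toy nonlinear growth curve

def log_lik(theta, t, y, sigma=0.1):
    resid = y - f(theta, t)
    return -0.5 * np.sum(resid**2) / sigma**2

def sampler(t, Y, n_iter=2000):
    n = len(Y)
    theta = np.zeros(n)                         # individual parameters theta_i
    mu, tau2 = 0.0, 1.0                         # population mean and variance
    for _ in range(n_iter):
        # Metropolis step: nonlinear model, no explicit full conditional
        for i in range(n):
            prop = theta[i] + 0.1 * rng.standard_normal()
            log_a = (log_lik(prop, t, Y[i]) - log_lik(theta[i], t, Y[i])
                     - 0.5 * ((prop - mu)**2 - (theta[i] - mu)**2) / tau2)
            if np.log(rng.uniform()) < log_a:
                theta[i] = prop
        # Gibbs steps: conjugate full conditionals for population parameters
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / n))   # flat prior on mu
        a, b = 1.0 + n / 2, 1.0 + 0.5 * np.sum((theta - mu)**2)
        tau2 = 1.0 / rng.gamma(a, 1.0 / b)      # inverse-gamma(1, 1) prior
    return theta, mu, tau2
```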

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 304
1000 The Gender Criteria of Film Criticism: Creating the ‘Big’, Avoiding the Important

Authors: Eleni Karasavvidou

Abstract:

Social and anthropological research, parallel to Gender Studies, has highlighted the relationship between social structures and symbolic forms as an important field of interaction and a record of 'social trends', since the study of representations can contribute to the understanding of the social functions and power relations they encompass. This ‘mirage,’ however, has to do not only with the representations themselves but also with the ways they are received and with the film or critical narratives that are established as dominant or alternative. Cinema and the criticism of its cultural products are no exception. Even in the rapidly changing media landscape of the 21st century, movies remain an integral and widespread part of popular culture, making films an extremely powerful means of 'legitimizing' or 'delegitimizing' visions of domination and commonsensical gender stereotypes throughout society. And yet it is film criticism, the 'language per se,' that legitimizes, reinforces, rewards and reproduces (or at least ignores) the stereotypical depictions of female roles that remain common in the realm of film images. This creates the need for academic research questioning the gender criteria of film reviews as part of the effort for an inclusive art and society. Qualitative content analysis is used to examine female roles in selected Oscar-nominated films against their reviews from leading websites and newspapers. This method was chosen because of the complex nature of the depictions in the films and the narratives they evoke. The films were divided into basic scenes depicting social functions, such as love and work relationships and positions of power and their function, which were analyzed by content analysis, with borrowings from structuralism (Genette) and the local/universal images of intercultural philology (Wierlacher). In addition to the measurement of the general ‘representation time’ by gender, other qualitative characteristics were also analyzed, such as speaking time, sayings or key actions, and the overall quality of the character's action in relation to the development of the scenario and social representations in general, as well as quantitative ones (the insufficient number of female lead roles, fewer key supporting roles, relatively few female directors and people in the production chain) and how these might affect screen representations. The quantitative analysis in this study was used to complement the qualitative content analysis. Then the focus shifted to the criteria of film criticism and to the rhetorical narratives that exclude or highlight in relation to gender identities and functions. In the criteria and language of film criticism, stereotypes are often reproduced or allegedly overturned within the framework of apolitical "identity politics," which mainly addresses the surface of a self-referential cultural-consumer product without connecting it more deeply with material and cultural life. One of the prime examples of this failure is the Bechdel Test, which tracks whether female characters speak in a film regardless of whether women's stories are represented or not in the films analyzed. If supposedly unbiased male filmmakers still fail to tell truly feminist stories, the same is the case with the criteria of criticism and the related interventions.

Keywords: representations, content analysis, reviews, sexist stereotypes

Procedia PDF Downloads 85
999 Educating through Design: Eco-Architecture as a Form of Public Awareness

Authors: Carmela Cucuzzella, Jean-Pierre Chupin

Abstract:

Eco-architecture today is increasingly assessed and judged on the basis of its environmental performance and its dedication to urgent stakes of sustainability. Architects have responded to environmental imperatives in novel ways since the 1960s. In the last two decades, however, different forms of eco-architecture practice have emerged that seem to be as dedicated to the issues of sustainability as to their ability to 'communicate' their ecological features. The hypothesis is that some contemporary eco-architecture has been developing a characteristic 'explanatory discourse', which it is possible to identify in buildings around the world. Some eco-architecture practices do not simply demonstrate their alignment with pressing ecological issues; rather, these buildings seem to be also driven by the urgent need to explain their ‘greenness’. The design aims specifically to teach visitors about the building's eco-qualities. These types of architectural practices are referred to in this paper as eco-didactic. The aim of this paper is to identify and assess this distinctive form of environmental architecture practice that aims to teach. These buildings constitute an entirely new form of design practice that places eco-messages squarely in the public realm. These eco-messages appear to have a variety of purposes: (i) to raise awareness of unsustainable quotidian habits, (ii) to become means of behavioral change, (iii) to publicly announce their responsibility through the designed eco-features, or (iv) to engage the patrons of the building in some form of sustainable interaction. To do this, a comprehensive review of Canadian eco-architecture since 1998 is conducted. The potential eco-didactic aspects are analysed through the lens of three vectors: (1) cognitive visitor experience: between the desire to inform and the poetics of form (are parts of the design dedicated to informing visitors of the environmental aspects?); (2) formal architectural qualities: between the visibility and the invisibility of environmental features (are these eco-features clearly visible to visitors?); and (3) communicative method for delivering the eco-message: this transmission of knowledge is accomplished somewhere between consensus and dissensus as a method for disseminating the eco-message (do visitors question the eco-features, or do they accept them as features that are environmental?). These architectural forms distinguish themselves in their crossing of disciplines, specifically architecture, environmental design, and art. They also differ from other architectural practices in terms of how they aim to mobilize different publics within various urban landscapes. The diversity of such buildings, from how and what they aim to communicate to the audience they wish to engage, are all key parameters for better understanding their means of knowledge transfer. Cases from major cities across Canada are analysed, aiming to illustrate this increasing worldwide phenomenon.

Keywords: eco-architecture, public awareness, community engagement, didacticism, communication

Procedia PDF Downloads 128
998 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System

Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich

Abstract:

The current trend in the automotive industry towards automated vehicles is creating new challenges related to human factors. This is because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, which can negatively impact his/her situation awareness when manual control is required, and can decrease driving skills and abilities. These new problems need to be studied in order to ensure road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which could serve as a theoretical basis for the development of mathematical and simulation models to explore different aspects of driver behaviour in different road situations. Well-known driver behaviour models describe the impact of different stages of the driver's cognitive process on driving performance but do not describe how the driver controls and adjusts his actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, will make it possible to model various aspects of the human factor in different road situations more accurately. This paper presents a conceptual model of the 'driver – highly automated vehicle' system based on P. K. Anokhin's theory of functional systems, which is a theoretical framework for describing internal processes in purposeful living systems based on such notions as the goal, and the desired and actual results of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and changes in road conditions due to the driver’s actions. This mechanism is based on the stage-by-stage evaluation of the deviations of the actual values of the driver’s action result parameters from the expected values. The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving for future improvements in driving safety, and for understanding how the driver-vehicle interface must be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.
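
A minimal sketch of the stage-by-stage deviation evaluation described above follows; the names, result parameter, and tolerance threshold are illustrative assumptions, not part of the authors' model.

```python
# A minimal sketch of the coupling mechanism: the expected result of a driver
# action is compared with the actual result, and the deviation drives the next
# corrective decision. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class StageResult:
    expected: float   # desired result parameter (e.g., lateral position, m)
    actual: float     # measured result after the action

def next_action(stage: StageResult, tolerance: float = 0.2) -> str:
    deviation = stage.actual - stage.expected
    if abs(deviation) <= tolerance:
        return "continue"                     # goal achieved, proceed to next stage
    return "correct_left" if deviation > 0 else "correct_right"

print(next_action(StageResult(expected=0.0, actual=0.35)))  # -> correct_left
```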

Keywords: automated vehicle, driver behavior, human factors, human-machine system

Procedia PDF Downloads 147
997 The Effectiveness of Prenatal Breastfeeding Education on Breastfeeding Uptake Postpartum: A Systematic Review

Authors: Jennifer Kehinde, Claire O’Donnell, Annmarie Grealish

Abstract:

Introduction: Breastfeeding has been shown to provide numerous health benefits for both infants and mothers. The decision to breastfeed is influenced by physiological, psychological, and emotional factors. However, the importance of equipping mothers with the necessary knowledge for successful breastfeeding practice cannot be ruled out. The decline in global breastfeeding rates can be linked to a lack of adequate breastfeeding education during the prenatal stage. This systematic review examined the effectiveness of prenatal breastfeeding education on breastfeeding uptake postpartum. Method: This review was undertaken and reported in conformity with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and was registered on the international prospective register of systematic reviews (PROSPERO: CRD42020213853). A PICO analysis (population, intervention, comparison, outcome) was undertaken to inform the choice of keywords in the search strategy and to formulate the review question, which was aimed at determining the effectiveness of prenatal breastfeeding educational programs in improving breastfeeding uptake following birth. Five databases, including the Cumulative Index to Nursing and Allied Health Literature, Medline, PsycINFO, and Applied Social Sciences Index and Abstracts, were systematically searched from January 2014 to July 2021 to identify eligible studies. Quality assessment and narrative synthesis were subsequently undertaken. Results: Fourteen studies were included. All 14 studies used different types of breastfeeding programs: eight used a combination of curriculum-based breastfeeding education programs, group prenatal breastfeeding counselling, and one-to-one breastfeeding educational programs, all delivered in person; four studies used web-based learning platforms to deliver breastfeeding education prenatally, delivered both online and face to face over a period of 3 weeks to 2 months with follow-up periods ranging from 3 weeks to 6 months; one study delivered a breastfeeding educational intervention using mother-to-mother breastfeeding support groups to promote exclusive breastfeeding; and one study disseminated breastfeeding education to participants based on the theory of planned behaviour. The most effective interventions were those that included both theory and hands-on demonstrations. Results showed increases in breastfeeding uptake, breastfeeding knowledge, positive attitudes to breastfeeding, and maternal breastfeeding self-efficacy among mothers who participated in breastfeeding educational programs during prenatal care. Conclusion: Prenatal breastfeeding education increases women’s knowledge of breastfeeding. Mothers who are knowledgeable about breastfeeding and hold a positive attitude towards it are more likely to initiate breastfeeding and to continue for a lengthened period. Findings demonstrate a general correlation between prenatal breastfeeding education and increased breastfeeding uptake postpartum. The high level of positive breastfeeding outcomes across the studies can be attributed to prenatal breastfeeding education. This review provides rigorous contemporary evidence that healthcare professionals and policymakers can apply when developing effective strategies to improve breastfeeding rates and ultimately improve the health outcomes of mothers and infants.

Keywords: breastfeeding, breastfeeding programs, breastfeeding self-efficacy, prenatal breastfeeding education

Procedia PDF Downloads 85
996 Comparative Study on Fire Safety Evaluation Methods for External Cladding Systems: ISO 13785-2 and BS 8414

Authors: Kyungsuk Cho, H. Y. Kim, S. U. Chae, J. H. Choi

Abstract:

Technological development has led to the construction of super-tall buildings, and insulators are increasingly used as exterior finishing materials to save energy. However, insulators are usually combustible and vulnerable to fire. Fires like those at the Wooshin Golden Suite building in Busan, Korea in 2010 and at the CCTV building in Beijing, China are major examples of fire spread accelerated by combustible insulators. The exterior finishing materials of a high-rise building are not made of insulators only; they are integrated with the building’s external cladding system. There is a limit to evaluating the fire safety of a cladding system with a single small-scale material test such as the cone calorimeter. Therefore, countries provide codes to evaluate the fire safety of exterior finishing materials using full-scale tests. This study compares two such standards, ISO 13785-2 and BS 8414, and examines their applicability to Korea. Analysis of the standards showed differences in the type, size, and duration of the fire sources, and the exterior finishing materials also differed in size. In order to confirm these differences, fire tests were conducted on identical external cladding systems to compare fire safety. Although the exterior finishing materials were identical, varying degrees of fire spread were observed, which can be attributed to the differences in the type, size, and duration of the fire sources. Therefore, it is deduced that extended studies should be conducted before the evaluation methods and standards are employed in Korea. The two standards for evaluating fire safety provided different results. The peak heat release rate was 5.5 MW in the ISO method and 3.0±0.5 MW in the BS method. The peak heat release rate in the ISO method was sustained for 15 minutes, whereas fire ignition, growth, full development, and decay evolved over 30 minutes in the BS method, where wood cribs were used as fire sources. Therefore, follow-up studies should be conducted to determine which of the two standards provides fire sources that better approximate the size of flames coming out of openings, or those spreading to the outside, when a fire occurs in a high-rise building.

Keywords: external cladding systems, fire safety evaluation, ISO 13785-2, BS 8414

Procedia PDF Downloads 242
995 Ultrasound Disintegration as a Potential Method for the Pre-Treatment of Virginia Fanpetals (Sida hermaphrodita) Biomass before Methane Fermentation Process

Authors: Marcin Dębowski, Marcin Zieliński, Mirosław Krzemieniewski

Abstract:

As methane fermentation is a complex series of successive biochemical transformations, its subsequent stages are determined, to varying extents, by physical and chemical factors. A specific state of equilibrium is established in the functioning fermentation system between environmental conditions, the rate of biochemical reactions, and the products of successive transformations. Among the physical factors that influence the effectiveness of methane fermentation transformations, key significance is ascribed to temperature and the intensity of biomass agitation. Among the chemical factors, significant are the pH value, the type and availability of the culture medium (to put it simply, the C/N ratio), and the presence of toxic substances. One of the important elements which influence the effectiveness of methane fermentation is the pre-treatment of organic substrates and the mode in which the organic matter is made available to anaerobes. Of all known and described methods for organic substrate pre-treatment before the methane fermentation process, ultrasound disintegration is one of the most interesting technologies. Investigations undertaken on the ultrasound field and the use of installations operating on existing systems result principally from the very wide and universal technological possibilities offered by the sonication process. This physical factor may induce deep physicochemical changes in ultrasonicated substrates that are highly beneficial from the viewpoint of methane fermentation processes. In this case, a special role is ascribed to the disintegration of biomass that is further subjected to methane fermentation. Once cell walls are damaged, cytoplasm and cellular enzymes are released. The released substances – either in dissolved or colloidal form – are immediately available to anaerobic bacteria for biodegradation. To ensure the maximal release of organic matter from dead biomass cells, disintegration processes aim to achieve a particle size below 50 μm. It has been demonstrated in many research works, and in systems operating at technical scale, that immediately after substrate ultrasonication the content of organic matter (characterized by the COD, BOD5 and TOC indices) increases in the dissolved phase of sedimentation water. This phenomenon points to the immediate sonolysis of solid substances contained in the biomass and to the release of cell material, and consequently to the intensification of the hydrolytic phase of fermentation. It results in a significant reduction of fermentation time and increased effectiveness of production of gaseous metabolites of anaerobic bacteria. Because disintegration of Virginia fanpetals biomass via ultrasound, applied in order to intensify its conversion, is a novel technique, it is often underestimated by operators of agricultural biogas plants. It has, however, many advantages that have a direct impact on its technological and economic superiority over the methods of biomass conversion applied thus far. As of now, ultrasound disintegrators for biomass conversion are not mass-produced but are built by specialized groups in scientific or R&D centers. Therefore, their quality and effectiveness are to a large extent determined by their manufacturers’ knowledge and skills in the fields of acoustics and electronic engineering.

Keywords: ultrasound disintegration, biomass, methane fermentation, biogas, Virginia fanpetals

Procedia PDF Downloads 369
994 PolyScan: Comprehending Human Polymicrobial Infections for Vector-Borne Disease Diagnostic Purposes

Authors: Kunal Garg, Louise Theusen Hermansan, Kanoktip Puttaraska, Oliver Hendricks, Heidi Pirttinen, Leona Gilbert

Abstract:

The Germ Theory (one infectious determinant is equal to one disease) has unarguably advanced our capability to diagnose and treat infectious diseases over the years. Nevertheless, the advent of technology, climate change, and volatile human behavior have brought about drastic changes in our environment, leading us to question the relevance of the Germ Theory in our day: will vector-borne disease (VBD) sufferers produce multiple immune responses when tested for multiple microbes? Vector-diseased patients producing multiple immune responses to different microbes would evidently suggest human polymicrobial infections (HPI). Current diagnostic tools are exceedingly unequipped with the research findings that would aid in diagnosing patients for polymicrobial infections. This shortcoming has caused misdiagnosis at very high rates, consequently diminishing patients’ quality of life due to inadequate treatment. Equipped with state-of-the-art scientific knowledge, PolyScan intends to address the pitfalls in current VBD diagnostics. PolyScan is a multiplex and multifunctional enzyme-linked immunosorbent assay (ELISA) platform that can test for numerous VBD microbes and allows simultaneous screening for multiple types of antibodies. To validate PolyScan, Lyme borreliosis (LB) and spondyloarthritis (SpA) patient groups (n = 54 each) were tested for Borrelia burgdorferi, Borrelia burgdorferi round body (RB), Borrelia afzelii, Borrelia garinii, and Ehrlichia chaffeensis against IgM and IgG antibodies. LB serum samples were obtained from Germany and SpA serum samples were obtained from Denmark under the relevant ethical approvals. The SpA group represented the chronic LB stage because reactive arthritis (an SpA subtype) in the form of Lyme arthritis is linked to LB. It was hypothesized that patients from both groups would produce multiple immune responses that, as a consequence, would evidently suggest HPI. It was also hypothesized that the proportion of multiple immune responses in the SpA patient group would be significantly larger than in the LB patient group across both antibodies. It was observed that 26% of LB patients and 57% of SpA patients produced multiple immune responses, in contrast to the 33% of LB patients and 30% of SpA patients that produced solitary immune responses, when tested against IgM. Similarly, 52% of LB patients and an astounding 73% of SpA patients produced multiple immune responses, in contrast to the 30% of LB patients and 8% of SpA patients that produced solitary immune responses, when tested against IgG. Interestingly, IgM immune dysfunction was also recorded in both patient groups. Atypically, 6% out of the 18% of LB patients unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Similarly, 12% out of the 19% of SpA patients unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Thus, the results not only supported the hypothesis but also suggested that IgM may atypically prevail longer than IgG. The PolyScan concept will aid clinicians in screening patients for early, persistent, late, polymicrobial, and immune dysfunction conditions linked to different VBDs. PolyScan provides a paradigm shift for the VBD diagnostic industry to follow that will drastically shorten patients’ time to receive adequate treatment.

Keywords: diagnostics, immune dysfunction, polymicrobial, TICK-TAG

Procedia PDF Downloads 334
993 A Novel Approach to 3D Thrust Vectoring CFD via Mesh Morphing

Authors: Umut Yıldız, Berkin Kurtuluş, Yunus Emre Muslubaş

Abstract:

Thrust vectoring, especially in military aviation, is a concept that sees much use to improve maneuverability in already agile aircraft. As this concept is fairly new and cost-intensive to design and test, computational methods are useful in easing the preliminary design process. Computational Fluid Dynamics (CFD) can be utilized in many forms to simulate nozzle flow, and various CFD studies exist on both 2D mechanical and 3D injection-based thrust vectoring; yet 3D mechanical thrust vectoring analyses, at this point in time, are lacking in variety. Additionally, the freely available test data are constrained to limited pitch angles and geometries. In this study, based on a test case provided by NASA, both steady and unsteady 3D CFD simulations are conducted to examine the aerodynamic performance of a mechanical thrust vectoring nozzle model and to validate the utilized numerical model. Steady analyses are performed to verify the flow characteristics of the nozzle at pitch angles of 0, 10, and 20 degrees, and the results are compared with experimental data. The pressure data obtained on the inner surface of the nozzle at each specified pitch angle, under flow conditions with pressure ratios of 1.5, 2, and 4, and at azimuthal angles of 0, 45, 90, 135, and 180 degrees, exhibited a high level of agreement with the corresponding experimental results. To validate the CFD model, the insights from the steady analyses are utilized, followed by unsteady analyses covering a wide range of pitch angles from 0 to 20 degrees. Throughout the simulations, a mesh morphing method using a carefully calculated mathematical shape deformation model, which reproduces the vectored nozzle shape exactly at each point of its travel, is employed to dynamically alter the divergent part of the nozzle over time within this pitch angle range. The mesh-morphed vectored nozzle shapes were compared with the drawings provided by NASA, ensuring a complete match was achieved. This computational approach allowed for the creation of a comprehensive database of results without the need to generate separate solution domains. The database contains results at every 0.01° increment of nozzle pitch angle. The unsteady analyses, generated using the morphing method, are found to be in excellent agreement with experimental data, further confirming the accuracy of the CFD model.
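
A minimal sketch of a mesh-morphing shape deformation of the kind described above follows, assuming a rigid rotation of the divergent-section nodes about a hinge line; the study's actual deformation model is not given here, and the hinge location and coordinates are illustrative.

```python
# A minimal sketch of mesh morphing for nozzle pitch: nodes downstream of a
# hinge plane are rotated to the commanded pitch angle. Illustrative only.
import numpy as np

def morph_divergent_nodes(nodes: np.ndarray, pitch_deg: float,
                          hinge_x: float) -> np.ndarray:
    """Rotate nodes with x > hinge_x about the hinge (y-axis) by pitch_deg."""
    a = np.radians(pitch_deg)
    rot = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    out = nodes.copy()
    mask = nodes[:, 0] > hinge_x                  # divergent-section nodes only
    shifted = nodes[mask] - np.array([hinge_x, 0.0, 0.0])
    out[mask] = shifted @ rot.T + np.array([hinge_x, 0.0, 0.0])
    return out

# Sweeping the pitch angle in 0.01-degree increments, as in the study's database:
# for angle in np.arange(0.0, 20.0 + 1e-9, 0.01):
#     mesh = morph_divergent_nodes(base_nodes, angle, hinge_x=0.5)
```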

Keywords: thrust vectoring, computational fluid dynamics, 3d mesh morphing, mathematical shape deformation model

Procedia PDF Downloads 85
992 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babu, Meena Murmu, Govardhan Bhat

Abstract:

Electrical discharge machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, immersed in dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse-on time, interval time, and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions, but RSM is not free from problems when it is applied to multi-factor and multi-response situations. A design of experiments (DOE) technique was therefore used to select the optimum machining conditions for machining AISI 4140 using EDM. The purpose of this paper is to determine the optimal factors of the EDM process and to investigate the feasibility of design of experiments techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of the optimized settings of key machining factors like pulse-on time, gap voltage, flushing pressure, input current, and duty cycle on material removal and surface roughness was carried out using a central composite design (CCD). The objective is to maximize the material removal rate (MRR). The central composite design data are used to develop second-order polynomial models with interaction terms. The insignificant coefficients are eliminated from these models using the Student's t-test, and the F-test is used to check goodness of fit. CCD is first used to determine the optimal factors of the EDM process for maximizing the MRR. The responses are further treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the EDM process. The results demonstrate the better performance of CCD-data-based RSM for optimizing the EDM process.
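
A minimal sketch of the RSM step described above follows: fitting a second-order polynomial with interaction terms to CCD data and searching the coded factor region for the MRR-maximizing settings. The factor names, design points, and responses are illustrative, not the study's data.

```python
# A minimal sketch of second-order RSM fitting on CCD-style data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# columns: pulse-on time, gap voltage, input current (coded units); toy design
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
              [0, 0, 0], [0, 0, 0]], dtype=float)
mrr = np.array([2.1, 3.4, 2.3, 3.9, 2.8, 4.6, 3.0, 5.1, 3.6, 3.5])  # toy MRR

# degree-2 polynomial includes squares and interaction terms
quad = PolynomialFeatures(degree=2, include_bias=True)
model = LinearRegression(fit_intercept=False).fit(quad.fit_transform(X), mrr)

# crude grid search over the coded region for the maximizing settings
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3), -1).reshape(-1, 3)
best = grid[np.argmax(model.predict(quad.transform(grid)))]
print("optimal coded settings:", best)
```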

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 343
991 Modeling of Hot Casting Technology of Beryllium Oxide Ceramics with Ultrasonic Activation

Authors: Zamira Sattinova, Tassybek Bekenov

Abstract:

The article is devoted to modeling the technology of hot casting of beryllium oxide ceramics. The stages of ultrasonic activation of the beryllium oxide slurry in the plant vessel to improve its rheological properties, and of hot casting in the moulding cavity with cooling and solidification of the casting, are described. The thermoplastic slurry (hereinafter referred to as the slurry) shows the rheology of a non-Newtonian fluid with a yield stress and plastic viscosity. Cooling and solidification of the slurry in the forming cavity occur from the liquid state, taking into account crystallization and the solid state. This work presents a method for calculating the hot casting of the slurry using the effective molecular viscosity of a viscoplastic fluid. It is shown that the slurry near the cooled wall is in a state of crystallization and plasticity, while the rest may still be in the liquid phase. A nonuniform distribution of temperature, density, and concentration of kinetically free binder takes place along the cavity section. This leads to compensation of shrinkage by the influx of slurry from the liquid zone into the crystallization and plasticity zones of the casting. In the plasticity zone, the shrinkage, determined by the concentration of kinetically free binder, is compensated under the action of the pressure gradient. The solidification mechanism, as well as the mechanical behavior of the casting mass during casting and the rheological and thermophysical properties of the thermoplastic BeO slurry under ultrasound exposure, have not been well studied. Nevertheless, experimental data allow us to conclude that the effect of ultrasonic vibrations on the slurry mass leads to a change in its structure, an increase in its technological properties, a decrease in heterogeneity, and a change in its rheological properties. In the course of experiments, the effect of ultrasonic treatment and its duration on the change in viscosity and ultimate shear stress of the slurry was studied as a function of temperature (55-75℃) and the mass fraction of the binder (10-11.7%). At the same time, changes in these properties before and after ultrasound exposure were analyzed, as well as the nature of the flow in the system under study. Experience in operating the unit with ultrasonic treatment has shown that the casting capacity of the slurry increases by an average of 15%, while the viscosity decreases by more than half. Experimental study of the physicochemical properties and phase change, with simultaneous consideration of all factors affecting product quality in the process of continuous casting, is labor-intensive. Therefore, an effective way to control the physical processes occurring in the formation of articles with predetermined properties and shapes is to simulate the process and determine its basic characteristics. The results of the calculations cover the whole stage of hot casting of the beryllium oxide slurry, taking into account the change in its state of aggregation. Ultrasonic treatment improves the rheological properties and increases the fluidity of the slurry in the forming cavity. The calculations show the influence of velocity, temperature factors, and the structural data of the cavity on the cooling-solidification process of the casting. In the calculations, conditions for molding with shrinkage of the slurry by hot casting have been found, which make it possible to obtain a solidifying product with a uniform beryllium oxide structure at the outlet of the cavity.
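
The effective-viscosity treatment of a viscoplastic slurry mentioned above can be sketched as follows, using the common Papanastasiou regularization of a Bingham-type fluid; the parameter values are illustrative assumptions, not measured BeO slurry data.

```python
# A minimal sketch of an effective-viscosity model for a viscoplastic
# (Bingham-type) slurry: below the yield stress the material is near-rigid,
# above it flow occurs with the plastic viscosity. Papanastasiou
# regularization avoids the singularity at zero shear rate. Values are toy.
import numpy as np

def effective_viscosity(gamma_dot, tau_y=40.0, mu_p=2.5, m=100.0):
    """mu_eff = mu_p + tau_y * (1 - exp(-m * gamma_dot)) / gamma_dot."""
    gamma_dot = np.maximum(gamma_dot, 1e-12)      # guard against division by zero
    return mu_p + tau_y * (1.0 - np.exp(-m * gamma_dot)) / gamma_dot

shear_rates = np.logspace(-3, 2, 6)               # 1/s
print(effective_viscosity(shear_rates))           # Pa*s, falls toward mu_p
```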

Keywords: hot casting, thermoplastic slurry molding, shrinkage, beryllium oxide

Procedia PDF Downloads 30
990 Invasive Asian Carp Fish Species: A Natural and Sustainable Source of Methionine for Organic Poultry Production

Authors: Komala Arsi, Ann M. Donoghue, Dan J. Donoghue

Abstract:

Methionine is an essential dietary amino acid necessary to promote the growth and health of poultry. Synthetic methionine is commonly used as a supplement in conventional poultry diets and is temporarily allowed in organic poultry feed for lack of natural, organically approved sources of methionine. It has been a challenge to find a natural, sustainable, and cost-effective source of methionine, which underscores the pressing need to explore potential alternatives for organic poultry production. Fish have high concentrations of methionine, but wild-caught fish are expensive and adversely impact wild fish populations. Asian carp (AC) is an invasive species, and its utilization as a natural methionine source has potential. However, to the best of our knowledge, there is no proven technology to utilize this fish as a methionine source. In this study, we co-extruded Asian carp and soybean meal to form a dry-extruded, methionine-rich AC meal. In order to formulate rations with the novel extruded carp meal, the product was tested on cecectomized roosters for its amino acid digestibility and total metabolizable energy (TMEn). Excreta were collected, and their gross energy and protein content were determined to calculate total metabolizable energy (TME). The methionine content, digestibility, and TME values were greater for the extruded AC meal than for control diets. Carp meal was subsequently tested as a methionine source in feeds formulated for broilers, and production performance (body weight gain and feed conversion ratio) was assessed in comparison with broilers fed standard commercial diets supplemented with synthetic methionine. In this study, broiler chickens were fed either a control diet with synthetic methionine or a treatment diet with extruded AC meal (8 replicates/treatment; n=30 birds/replicate) from day 1 to 42 days of age. At the end of the trial, data for body weights, feed intake, and feed conversion ratio (FCR) were analyzed using one-way ANOVA with Fisher's LSD test for multiple comparisons. Results revealed that birds on the AC diet had body weight gains and feed intake comparable to those on diets containing synthetic methionine (P > 0.05). Results from the study suggest that invasive AC-derived fish meal could be an effective and inexpensive source of sustainable natural methionine for organic poultry farmers.

Keywords: Asian carp, methionine, organic, poultry

Procedia PDF Downloads 158
989 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design

Authors: Emiliano Matta

Abstract:

Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage to the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this investigation has inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure a bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, where damping originates from the variable tangential friction force which develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. With such an assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio is independent of the amplitude of oscillation, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not need the installation of dampers. This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means of realizing systems provided with amplitude-independent damping.
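
The proposed friction law can be sketched directly from its definition above: a friction coefficient proportional to the modulus of the 3D surface gradient. The paraboloidal surface and the constants below are illustrative assumptions, not the paper's design values.

```python
# A minimal sketch of a spatially varying friction coefficient
# mu(x, y) = c * |grad z(x, y)| on a concave pendulum surface.
# For z = (x^2/a + y^2/b)/2, the gradient is (x/a, y/b), so mu vanishes
# at the apex and grows with the local slope. Constants are illustrative.
import numpy as np

def surface_z(x, y, a=4.0, b=6.0):
    return 0.5 * (x**2 / a + y**2 / b)    # concave 3D pendulum surface

def friction_coefficient(x, y, c=0.05, a=4.0, b=6.0):
    """mu(x, y) = c * sqrt((x/a)^2 + (y/b)^2)."""
    return c * np.hypot(x / a, y / b)

print(friction_coefficient(0.0, 0.0))   # 0.0 at the surface apex
print(friction_coefficient(0.5, 0.3))   # grows with distance from the apex
```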

Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers

Procedia PDF Downloads 149
988 Measuring Digital Literacy in the Chilean Workforce

Authors: Carolina Busco, Daniela Osses

Abstract:

The development of digital literacy has become a fundamental element that allows for citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. There are no methodological instruments available in Chile to measure the workforce’s digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. The dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To achieve the research objective of developing a digital literacy model of indicators and a research instrument for this purpose, along with an exploratory analysis of data using factor analysis, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also applied using six experts, incorporating their observations into the final instrument. The survey contained 49 items divided into three sets of questions: i) sociodemographic data; ii) a Likert scale of four values ranked according to the level of agreement; and iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to the 12 items with the Likert scale. The KMO measure showed a value of 0.626, indicating a medium level of correlation; Bartlett’s test yielded a significance value of less than 0.05; and Cronbach’s alpha was 0.618. Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%), ii) ability to solve everyday problems using digital tools (14.89%), iii) online tools used to stay connected with others (11.94%), and iv) residential Internet access and speed (11%). The quantitative results were discussed within six focus groups selected using heterogeneous criteria related to the most relevant variables identified in the statistical analysis: upper-class school students, middle-class university students, Ph.D. professors, low-income working women, elderly individuals, and a group of rural workers. The digital divide and its social and economic correlations are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which ultimately puts the first filter on the development of digital skills. Therefore, as expressed in the literature review, the advance of these skills differs radically when sociodemographic variables are considered. This increases socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others.
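
A minimal sketch of the exploratory factor analysis pipeline reported above (KMO, Bartlett's test, factor extraction) follows, using the Python factor_analyzer package; the input file, item layout, and rotation choice are hypothetical, not taken from the study.

```python
# A minimal sketch of an EFA pipeline: sampling adequacy, sphericity test,
# then a four-factor extraction. The data file is hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_kmo,
                                             calculate_bartlett_sphericity)

likert = pd.read_csv("survey_likert_items.csv")   # hypothetical file: 12 Likert items

chi2, p = calculate_bartlett_sphericity(likert)   # want p < 0.05
kmo_per_item, kmo_total = calculate_kmo(likert)   # the study reported KMO = 0.626

fa = FactorAnalyzer(n_factors=4, rotation="varimax")  # rotation is an assumption
fa.fit(likert)
variance = fa.get_factor_variance()               # (variance, proportional, cumulative)
print(f"KMO={kmo_total:.3f}, Bartlett p={p:.4f}")
print("cumulative variance explained:", variance[2])
```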

Keywords: digital literacy, digital society, workforce digitalization, digital skills

Procedia PDF Downloads 67
987 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand

Authors: Wen Liu, Errol Haarhoff, Lee Beattie

Abstract:

In 2010, New Zealand’s central government re-organised the local government arrangements in Auckland, New Zealand by amalgamating its previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years, as expressed in the first ever spatial plan in the region, the Auckland Plan (2012). The Auckland Plan supports implementing a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention that Auckland become a globally competitive city and achieve the vision of ‘the most liveable city in the world’. Turning that vision into reality is operationalized through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad range of literature on urban growth management, one significant issue stands out about intensification: the ‘gap’ between strategic planning and what has been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualized relies largely on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is of profound influence, yet has enjoyed limited empirical analysis. In Auckland, the establishment of the Auckland Plan set up the strategies to deliver intensification across diversified arenas. Nonetheless, planning policy itself does not necessarily achieve the envisaged objectives; delivering a planning system with a high capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery is beholden to the Unitary Plan. However, questions have been asked as to whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, which makes it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals can be achieved in practice, including delivering a ‘quality compact city’ and residential intensification. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher density development. It explores the process of plan development and the plan making and implementation frameworks of the first ever spatial plan in Auckland, so as to explicate the objectives and processes involved, and considers whether this will facilitate decision-making processes that realize the anticipated intensive urban development.

Keywords: urban intensification, sustainable development, plan making, governance and implementation

Procedia PDF Downloads 557
986 Strategies for Urban-Architectural Design for the Sustainable Recovery of the Huayla Estuary in Puerto Bolivar, Machala, Ecuador

Authors: Soledad Coronel Poma, Lorena Alvarado Rodriguez

Abstract:

The purpose of this project is to design public space through urban-architectural strategies that contribute to the sustainable recovery of the Huayla estuary and the revival of tourism in the area. The design considers sustainable and architectural ideas used in similar cases, along with national and international regulations for protecting endangered shorelines. To understand the context of this location: Puerto Bolivar is the main port of the Province of El Oro and of the south of the country, through which 90,000 national and foreign tourists pass each year. For that reason, a physical-urban, social, and environmental analysis of the area was carried out through surveys and conversations with the community. This analysis showed that around 70% of people feel unsatisfied with, and concerned about, the estuary and its surroundings. Crime, absence of green areas, poor conservation of shorelines, lack of tourists, poor commercial infrastructure, and the spread of informal commerce are the main issues to be solved. As an intervention project whose main goal is that residents and tourists have contact with native nature and enjoy local activities, three main strategies are proposed to recover the estuary and its surroundings: mobility, ecology, and urban-architectural design. First, the design of the public space is based on turning the estuary location into a linear promenade conceived as a tourist corridor, which would help to reduce pollution, increase green spaces, and improve tourism. Another strategy aims to improve the economy of the community through local activities such as fishing and sailing and the commerce of fresh seafood, both as raw products and in restaurants. Furthermore, in support of the environmental approach, some houses are rebuilt as sustainable houses using local materials and rearranged into blocks closer to the commercial area. Finally, the planning incorporates many plants, such as palms, samán trees, and mangroves, around the area to encourage people to get in touch with nature. The results of the design showed an increase in the green area per inhabitant index, from 1.69 m² to 10.48 m² per inhabitant, with 12,096 m² of green corridors and the incorporation of 5,000 m² of mangroves at the shoreline. Living zones also increased with the creation of green areas that take advantage of the existing nature and incorporate restaurants and recreational spaces. Moreover, the relocation of houses and buildings helped to free the estuary’s shoreline, so people are now in more comfortable places closer to their workplaces. Finally, dock space is increased to match the capacity of the boats and canoes, helping to organize activity on the estuary. To sum up, this project seeks to improve the estuary environment, its shoreline, and its surroundings, including the vegetation, infrastructure, and people with their local activities, achieving a better quality of life, attraction of tourism, reduction of pollution, and, finally, a fully recovered estuary as a natural ecosystem.

Keywords: recover, public space, estuary, sustainable

Procedia PDF Downloads 149
985 Changing Employment Relations Practices in Hong Kong: Cases of Two Multinational Retail Banks since 1997

Authors: Teresa Shuk-Ching Poon

Abstract:

This paper sets out to examine the changing employment relations practices in Hong Kong’s retail banking sector over a period of more than 10 years. The major objective of the research is to examine whether, and to what extent, local institutional influences have overshadowed global market forces in shaping strategic management decisions and employment relations practices in Hong Kong, with a view to drawing implications for comparative employment relations studies. In examining the changing pattern of employment relations, this paper finds the industrial relations strategic choice model (Kochan, McKersie and Cappelli, 1984) an appropriate framework for the study. Four broad aspects of employment relations are examined: work organisation and job design; staffing and labour adjustment; performance appraisal, compensation and employee development; and labour unions and employment relations. Changes in the employment relations practices of two multinational retail banks operating in Hong Kong are examined in detail. The retail banking sector in Hong Kong is chosen as a case because it is a highly competitive segment of the financial services industry that is very much susceptible to global market influences, as is well illustrated by the fact that Hong Kong was hit hard by both the Asian and the Global Financial Crises. This sector is also subject to increasing institutional influences, especially after the return of Hong Kong’s sovereignty to the People’s Republic of China (PRC) in 1997. The case study method is used as a research design able to capture the complex institutional and environmental context that is the subject-matter of the paper. The paper concludes that the operation of the retail banks in Hong Kong has been subject to both institutional and global market changes at different points in time. Information obtained from the two cases tends to support the conclusion that the relative significance of institutional as against global market factors in influencing the banks’ operations and their employment relations practices depends very much on the time at which these influences emerged and on their scale and intensity. The case study highlights the importance of placing comparative employment relations studies within a context where employment relations practices in different countries, or in different regions and cities within the same country, can be examined and compared over a longer period of time to make the comparison more meaningful.

Keywords: employment relations, institutional influences, global market forces, strategic management decisions, retail banks, Hong Kong

Procedia PDF Downloads 402
984 Correlation of Hyperlipidemia with Platelet Parameters in Blood Donors

Authors: S. Nishat Fatima Rizvi, Tulika Chandra, Abbas Ali Mahdi, Devisha Agarwal

Abstract:

Introduction: Blood components are an unexplored area prone to numerous discoveries that influence patient care, and experiments at different levels will further change the present concept of blood banking. Hyperlipidemia is a condition of elevated plasma levels of low-density lipoprotein (LDL) and decreased plasma levels of high-density lipoprotein (HDL). Studies show that platelets play a vital role in the progression of atherosclerosis and thrombosis, a major cause of death worldwide; they are activated by many triggers, such as elevated LDL in the blood, resulting in aggregation and formation of plaques. Hyperlipidemic platelets are frequently transfused to patients with various disorders. Screening random donor platelets for hyperlipidemia, and correlating the condition with other donor criteria such as a lipid-rich diet, oral contraceptive pill intake, weight, alcohol intake, smoking, a sedentary lifestyle, and a family history of heart disease, will help in deciding the exclusion criteria for donor selection. This will help to make patients safe and the donor deferral criteria more stringent, improving the quality of the blood supply. Technical evaluation and assessment will enable blood bankers to supply safe blood and improve the guidelines for blood safety. We therefore study the correlation of hyperlipidemic platelets with platelet parameters, weight, and the specific history of the donors. Methodology: This case-control study included blood samples from 100 donors; 30 samples were found to be hyperlipidemic and were included as cases, while the rest were taken as controls. Lipid profiles were measured on a fully automated analyzer (Cobas C311, Roche Diagnostics) using the TRIGL (triglycerides), LDL-C plus 2nd generation (LDL-cholesterol), CHOL2 (cholesterol, Gen. 2), and HDL-C plus 3rd generation (HDL-cholesterol) assays, and platelet parameters were analyzed on a Sysmex KX-21 automated hematology analyzer. Results: A significant association with hyperlipidemia was found among single-time donors: 80% of hyperlipidemic donors had a family history of heart disease, 66.66% had a sedentary lifestyle, 83.3% were smokers, 50% were alcohol consumers, and 63.33% had taken a lipid-rich diet, while active physical activity was found among 40% of donors. Donor samples were divided into two groups based on body weight. In group 1 (hyperlipidemic samples), platelet parameters were 75% normal and 25% abnormal for donors weighing over 70 kg, and 90% normal and 10% abnormal for donors weighing 50-70 kg. In group 2 (non-hyperlipidemic samples), platelet parameters were 95% normal and 5% abnormal for donors over 70 kg, and 66.66% normal and 33.33% abnormal for donors weighing 50-70 kg. Conclusion: The findings indicate that the hyperlipidemic status of donors may affect platelet parameters and can be flagged from donor history: weight, smoking, alcohol intake, sedentary lifestyle, physical activity, lipid-rich diet, oral contraceptive pill intake, and family history of heart disease. Further studies on a larger sample size are needed to affirm this finding.

Keywords: blood donors, hyperlipidemia, platelet, weight

Procedia PDF Downloads 315
983 Endometrial Ablation and Resection Versus Hysterectomy for Heavy Menstrual Bleeding: A Systematic Review and Meta-Analysis of Effectiveness and Complications

Authors: Iliana Georganta, Clare Deehan, Marysia Thomson, Miriam McDonald, Kerrie McNulty, Anna Strachan, Elizabeth Anderson, Alyaa Mostafa

Abstract:

Context: A meta-analysis of randomized controlled trials (RCTs) comparing hysterectomy versus endometrial ablation and resection in the management of heavy menstrual bleeding (HMB). Objective: To evaluate the clinical efficacy, satisfaction rates, and adverse events of hysterectomy compared to more minimally invasive techniques in the treatment of HMB. Evidence acquisition: A literature search was performed for all RCTs and quasi-RCTs comparing hysterectomy with endometrial ablation, endometrial resection, or both. The search had no language restrictions and was last updated in June 2020 using MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, PubMed, Google Scholar, PsycINFO, ClinicalTrials.gov, and the EU Clinical Trials Register. In addition, a manual search of the abstract databases of the European Haemophilia Conference on Women’s Health was performed, and further studies were identified from the references of acquired papers. The primary outcomes were patient-reported and objective reduction in heavy menstrual bleeding up to 2 years and after 2 years. Secondary outcomes included satisfaction rates, pain, short- and long-term adverse events, quality of life and sexual function, further surgery, duration of surgery and hospital stay, and time to return to work and normal activities. Data were analysed using RevMan software. Evidence synthesis: 12 studies and a total of 2028 women were included (hysterectomy: n = 977 women vs endometrial ablation or resection: n = 1051 women). Hysterectomy was compared with endometrial ablation only in five studies (Lin, Dickersin, Sesti, Jain, Cooper), with endometrial resection only in five studies (Gannon, Schulpher, O’Connor, Crosignani, Zupi), and with a mixture of ablation and resection in two studies (Elmantwe, Pinion). Of the 12 studies, 10 reported women’s perception of bleeding symptoms as improved. Meta-analysis showed that women in the hysterectomy group were more likely to show improvement in bleeding symptoms when compared with endometrial ablation or resection up to 2-year follow-up (RR 0.75, 95% CI 0.71 to 0.79, I² = 95%). Objective outcomes of improvement in bleeding also favoured hysterectomy. Patient satisfaction was higher after hysterectomy within the 2-year follow-up (RR: 0.90, 95% CI: 0.86 to 0.94, I²: 58%); however, there was no significant difference between the two groups at more than 2 years of follow-up. Sepsis (RR: 0.03, 95% CI: 0.002 to 0.56; 1 study), wound infection (RR: 0.05, 95% CI: 0.01 to 0.28, I²: 0%, 3 studies), and urinary tract infection (UTI) (RR: 0.20, 95% CI: 0.10 to 0.42, I²: 0%, 4 studies) all favoured hysteroscopic techniques. Fluid overload (RR: 7.80, 95% CI: 2.16 to 28.16, I²: 0%, 4 studies) and perforation (RR: 5.42, 95% CI: 1.25 to 23.45, I²: 0%, 4 studies), however, favoured hysterectomy in the short term. Conclusions: This meta-analysis demonstrates that endometrial ablation and endometrial resection are both viable options when compared with hysterectomy for the treatment of heavy menstrual bleeding. Hysteroscopic procedures had better outcomes in the short term, with fewer adverse events including wound infection, UTI, and sepsis. Hysterectomy performed better on longer-term measures such as recurrence of symptoms, overall satisfaction at two years, and the need for further treatment or surgery.
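
As a rough illustration of the pooling RevMan performs for outcomes like those above, the sketch below combines log risk ratios by inverse-variance weighting under a DerSimonian-Laird random-effects model and reports the I² heterogeneity statistic; the per-study event counts are placeholders, not data extracted from the included trials.

```python
import numpy as np

# Hedged sketch of inverse-variance meta-analysis of risk ratios with a
# DerSimonian-Laird random-effects model; events/totals are placeholders.
studies = [  # (events_a, n_a, events_b, n_b) for two hypothetical arms
    (40, 100, 55, 100),
    (30,  80, 42,  80),
    (25,  60, 33,  60),
]
log_rr, var = [], []
for ea, na, eb, nb in studies:
    log_rr.append(np.log((ea / na) / (eb / nb)))
    var.append(1/ea - 1/na + 1/eb - 1/nb)        # variance of log RR
log_rr, var = np.array(log_rr), np.array(var)

w = 1 / var                                      # fixed-effect weights
fixed = np.sum(w * log_rr) / np.sum(w)
Q = np.sum(w * (log_rr - fixed) ** 2)            # Cochran's Q heterogeneity
k = len(studies)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

w_re = 1 / (var + tau2)                          # random-effects weights
pooled = np.sum(w_re * log_rr) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
rr = np.exp(pooled)
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f}), I2 = {I2:.0f}%")
```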

Keywords: menorrhagia, hysterectomy, ablation, resection

Procedia PDF Downloads 155
982 Creation of a Trust-Wide, Cross-Speciality, Virtual Teaching Programme for Doctors, Nurses and Allied Healthcare Professionals

Authors: Nelomi Anandagoda, Leanne J. Eveson

Abstract:

During the COVID-19 pandemic, the surge in in-patient admissions across the medical directorate of a district general hospital necessitated the implementation of an incident rota. Conscious of the impact on training and professional development, the authors conceived the idea of a virtual teaching programme. The programme initially aimed to provide junior doctors, specialist nurses, pharmacists, and allied healthcare professionals from medical specialties, and those re-deployed from other specialties (e.g., ophthalmology, general practice, surgery, psychiatry), with the knowledge and skills to manage the deteriorating patient with COVID-19. The programme was later developed to incorporate the general internal medicine curriculum. To facilitate continuing medical education whilst maintaining social distancing during this period, a virtual platform was used to deliver teaching to junior doctors across two large district general hospitals and two community hospitals. Teaching sessions were recorded and uploaded to a common platform, providing a resource for participants to catch up on and re-watch teaching sessions, making strides towards reducing the disadvantage faced by less-than-full-time trainees in their professional development. This created a learning environment that is inclusive and accessible to adult learners in a self-directed manner. The negative impact of the pandemic on the well-being of healthcare professionals is well documented. To support the multi-disciplinary team (MDT), the virtual teaching programme evolved to include sessions on well-being, resilience, and work-life balance. Providing teaching for learners across the MDT has been an eye-opening experience. By challenging the concept that learners should only be taught within their own peer groups, the authors have fostered a greater appreciation of the strengths of the MDT and showcased the immense wealth of expertise available within the trust. The inclusive nature of the teaching and the ease of joining a virtual teaching session have facilitated the dissemination of knowledge across the MDT, thus improving patient care on the frontline. The weekly teaching programme has been running for over eight months, with ongoing engagement, interest, and participation. As described above, the teaching programme has evolved to accommodate the needs of its learners. It has received excellent feedback, with an appreciation of its inclusive, multi-disciplinary, and holistic nature. The COVID-19 pandemic provided a catalyst to rapidly develop novel methods of working and training, and widened access and exposure to the virtual technologies available to large organisations. By merging pedagogical expertise and technology, the authors have created an effective online learning environment. Although the authors do not propose to replace face-to-face teaching altogether, this model of virtual, multidisciplinary, cross-site teaching has proven to be a great leveller. It has made high-quality teaching accessible to learners of different confidence levels, grades, specialties, and working patterns.

Keywords: cross-site, cross-speciality, inter-disciplinary, multidisciplinary, virtual teaching

Procedia PDF Downloads 171
981 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and derive the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
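
To make the coding idea concrete, here is a minimal sketch of polynomial-coded distributed multiplication in the spirit of this line of work: X is split into m row blocks and Y into n column blocks, each worker multiplies one coded pair, and any mn finished workers suffice to recover W = XY, so the remaining workers may straggle. The block counts, evaluation points, and worker numbers are illustrative assumptions, and the sketch covers only the plain non-secure case, not the paper's PSGPD construction.

```python
import numpy as np

def encode(X, Y, m, n, points):
    """Give each worker one coded block of X and one of Y, evaluated at its point."""
    Xb = np.split(X, m, axis=0)                      # m row blocks of X
    Yb = np.split(Y, n, axis=1)                      # n column blocks of Y
    tasks = []
    for a in points:
        Xt = sum(Xb[j] * a**j for j in range(m))
        Yt = sum(Yb[k] * a**(k * m) for k in range(n))
        tasks.append((Xt, Yt))
    return tasks

def decode(results, points, m, n):
    """Recover W from any m*n worker products by polynomial interpolation."""
    V = np.vander(np.array(points), m * n, increasing=True)
    coeffs = np.linalg.solve(V, np.stack([r.ravel() for r in results]))
    blk = results[0].shape
    # the coefficient of a^(j + k*m) is exactly the block product X_j @ Y_k
    rows = [np.hstack([coeffs[j + k * m].reshape(blk) for k in range(n)])
            for j in range(m)]
    return np.vstack(rows)

m, n = 2, 2                                          # recovery threshold m*n = 4
X, Y = np.random.randn(4, 3), np.random.randn(3, 4)
points = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]              # 6 workers, 2 may straggle
tasks = encode(X, Y, m, n, points)
done = [0, 2, 3, 5]                                  # any 4 finished workers suffice
products = [tasks[i][0] @ tasks[i][1] for i in done]
W = decode(products, [points[i] for i in done], m, n)
assert np.allclose(W, X @ Y)
```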

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 125
980 Governance Models of Higher Education Institutions

Authors: Zoran Barac, Maja Martinovic

Abstract:

Higher Education Institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From the societal point of view, they are central institutions in society, involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships between the involved actors, ranging from students, faculty, and administration to the business community and corporate partners, government agencies, and the general public. HEIs are also particularly interesting as objects of governance research because of their unique public purpose and combination of stakeholders. Furthermore, they are a special type of institution from an organizational viewpoint: HEIs are often described as “loosely coupled systems” or “organized anarchies”, which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of institutions, taking each particular context into account. Many governance models of HEIs are primarily based on the balance of power among the involved actors. Besides the actors’ power and influence, leadership style and environmental contingency can shape the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models originate as outcomes of institutional and contingency adaptation. HEIs tend to fit the institutional context, comprised of formal and informal institutional rules; by fitting the institutional context, HEIs converge toward each other in terms of their structures, policies, and practices. On the other hand, the contingency framework implies that there is no governance model that is suitable for all situations. Consequently, the contingency approach begins with identifying contingency variables that might impact a particular governance model; in order to be effective, the governance model should fit these contingency variables. While the institutional context creates converging forces on HEI governance actors and approaches, contingency variables are causes of divergence in actors’ behavior and governance models. Finally, an HEI governance model is a balanced adaptation of the HEI to the institutional context and contingency variables. It also encompasses the roles, constellations, and modes of interaction of the involved actors, influenced by institutional and contingency pressures. The actors’ adaptation to the institutional context brings benefits of legitimacy and resources, while their adaptation to the contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.

Keywords: governance, governance models, higher education institutions, institutional context, situational context

Procedia PDF Downloads 337
979 Festival Gamification: Conceptualization and Scale Development

Authors: Liu Chyong-Ru, Wang Yao-Chin, Huang Wen-Shiung, Tang Wan-Ching

Abstract:

Although gamification has attracted attention and been applied in the tourism industry, limited literature can be found in the tourism academy. Therefore, to contribute knowledge on festival gamification, it becomes essential to start by establishing a Festival Gamification Scale (FGS). This study defines festival gamification as the extent to which a festival involves game elements and game mechanisms. Based on self-determination theory, this study developed an FGS through a multi-study method. In study one, five FGS dimensions were sorted through a literature review, followed by twelve in-depth interviews; a total of 296 statements were extracted from the interviews and later narrowed down to 33 items under six dimensions. In study two, 226 survey responses were collected from a cycling festival for exploratory factor analysis, resulting in twenty items under five dimensions. In study three, 253 survey responses were obtained from a marathon festival for confirmatory factor analysis, resulting in the final sixteen items under five dimensions. Results of criterion-related validity then confirmed the positive effects of these five dimensions on flow experience. In study four, to examine the model extension of the developed five-dimensional, 16-item FGS, which includes the dimensions of relatedness, mastery, competence, fun, and narratives, a cross-validation analysis was performed using 219 survey responses from a religious festival. For the tourism academy, the FGS could further be applied in other sub-fields such as destinations, theme parks, cruise trips, or resorts. The FGS serves as a starting point for examining the mechanism by which festival gamification changes tourists’ attitudes and behaviors. Future studies could follow up on the FGS by testing outcomes of festival gamification or examining moderating effects that enhance those outcomes. On the other hand, although the FGS has been tested in cycling, marathon, and religious festivals, the research settings are all in Taiwan; cultural differences in the FGS are another direction for contributing knowledge on festival gamification. This study also offers several valuable practical implications. First, the FGS could be utilized in tourist surveys to evaluate the extent of gamification of a festival. Based on the results of a performance assessment using the FGS, festival management organizations and festival planners could learn the relative scores among FGS dimensions and plan future improvements in gamifying the festival. Second, the FGS could be applied in positioning a gamified festival: festival management organizations and festival planners could first consider the features and type of their festival, and then gamify it by investing resources in key FGS dimensions.
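
As a toy illustration of the item-reduction step in study two, the sketch below runs an exploratory factor analysis on simulated survey responses and drops items with weak loadings. The simulated data, the planted five-factor structure, and the 0.40 loading cut-off are assumptions for illustration, not the authors' data or criteria.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate 226 respondents x 33 items with a planted five-factor structure
# (purely illustrative; not the study's data).
rng = np.random.default_rng(0)
n_resp, n_items, n_factors = 226, 33, 5
latent = rng.normal(size=(n_resp, n_factors))
item_factor = rng.integers(0, n_factors, size=n_items)  # each item loads on one factor
responses = 0.8 * latent[:, item_factor] + rng.normal(scale=0.6, size=(n_resp, n_items))

# Exploratory factor analysis with varimax rotation
fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(responses)
loadings = fa.components_.T                             # items x factors

# Retain items whose strongest absolute loading clears an assumed 0.40 cut-off
keep = np.abs(loadings).max(axis=1) > 0.40
print(f"retained {keep.sum()} of {n_items} items")
```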

Keywords: festival gamification, festival tourism, scale development, self-determination theory

Procedia PDF Downloads 147
978 Building on Previous Microvalving Approaches for Highly Reliable Actuation in Centrifugal Microfluidic Platforms

Authors: Ivan Maguire, Ciprian Briciu, Alan Barrett, Dara Kervick, Jens Ducrée, Fiona Regan

Abstract:

With the ever-increasing myriad of applications of which microfluidic devices are capable, reliable fluidic actuation has remained fundamental to the success of these platforms. A number of approaches can be taken to integrate liquid actuation on microfluidic platforms, and these usually fall into two primary categories: active microvalves and passive microvalves. Active microvalves are microfluidic valves that require a physical parameter change through external or separate interaction for actuation to occur. Passive microvalves are microfluidic valves that do not require external interaction for actuation, relying instead on the valve’s natural physical parameters, which can be overcome through sample interaction. The purpose of this paper is to illustrate how further improvements to past microvalve solutions can greatly enhance systematic reliability and performance, with both novel active and passive microvalves demonstrated. Covered within this scope are two alternative and novel microvalve solutions for centrifugal microfluidic platforms: a revamped pneumatic dissolvable-film active microvalve (PAM) strategy, and a spray-on, sol-gel-based hydrophobic passive microvalve (HPM) approach. Both the PAM and HPM mechanisms were demonstrated on a centrifugal microfluidic platform consisting of alternating layers of 1.5 mm poly(methyl methacrylate) (PMMA) sheets (for reagent storage) and ~150 μm pressure-sensitive adhesive (PSA) sheets (for microchannel fabrication). The PAM approach differs from previous SOLUBON™ dissolvable-film methods by introducing a more reliable and predictable liquid delivery mechanism to the microvalve site, thus significantly reducing premature activation; this approach has also shown excellent synchronicity when performed in multiplexed form. The HPM method utilises a new spray-on, low-curing-temperature (70°C) sol-gel material. The resultant double-layer coating comprises a PMMA-adherent sol-gel as the bottom layer and an ultra-hydrophobic silica nanoparticle (SNP) film as the top layer. The optimal coating was integrated into microfluidic channels of varying cross-sectional area to assess the consistency of microvalve burst frequencies. It is hoped that these microvalving solutions, which can be easily added to centrifugal microfluidic platforms, will significantly improve automation reliability.
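
For intuition on what a burst-frequency assessment measures, the back-of-envelope sketch below balances the centrifugal pressure of a liquid plug on a spinning disc against the capillary barrier of a hydrophobic rectangular channel; all material and geometry values are assumed for illustration and are not measurements from this work.

```python
import numpy as np

# Burst condition for a hydrophobic valve on a spinning disc: the valve
# opens when the centrifugal pressure rho * omega^2 * r_mean * dr exceeds
# the capillary barrier -2*sigma*cos(theta)*(1/w + 1/h) of a rectangular
# channel. All values below are assumed, not measured in this work.
rho   = 1000.0           # liquid density, kg/m^3 (water)
sigma = 0.072            # surface tension, N/m
theta = np.deg2rad(120)  # contact angle on the hydrophobic coating (assumed)
w, h  = 300e-6, 150e-6   # channel width/height, m (assumed PSA-layer scale)
r1, r2 = 20e-3, 25e-3    # inner/outer radius of the liquid column, m

dp_cap = -2 * sigma * np.cos(theta) * (1/w + 1/h)    # Pa, positive barrier
r_mean, dr = (r1 + r2) / 2, r2 - r1
omega = np.sqrt(dp_cap / (rho * r_mean * dr))        # rad/s at burst
print(f"burst frequency ~ {omega / (2 * np.pi):.1f} Hz")
```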

Keywords: centrifugal microfluidics, hydrophobic microvalves, lab-on-a-disc, pneumatic microvalves

Procedia PDF Downloads 189
977 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained with repeatability, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. The applicability of different computerized finishing technologies to crystal processing is investigated, namely milling and grinding in a CNC machining center with or without ultrasonic assistance. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation focuses on the mechanical characterization and analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear around the indentation. The mechanical impulse excitation test estimates the Young’s modulus, shear modulus, and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. The tests were designed using the Taguchi method to correlate the input parameters (feed rate, tool rotation speed, and depth of cut) with the output parameters (surface roughness and cutting forces), and ANOVA was used to optimize the process (the best roughness achievable with cutting forces that do not compromise the material structure or the tool life). This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
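
As a hypothetical illustration of a Taguchi-style analysis with the study's three input parameters, the sketch below evaluates an L9 orthogonal array with a smaller-the-better signal-to-noise ratio for surface roughness; the factor levels and roughness responses are invented placeholders, not the measured data from this work.

```python
import numpy as np

# L9(3^3) orthogonal array: each row is one run, each column a factor level.
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])
# Assumed factor levels (not the study's): feed rate (mm/min), tool
# rotation speed (rpm), depth of cut (mm).
feed, speed, depth = [100, 200, 300], [3000, 4500, 6000], [0.05, 0.10, 0.15]
# Placeholder surface roughness Ra (um) for the nine runs.
Ra = np.array([0.42, 0.51, 0.66, 0.48, 0.62, 0.39, 0.70, 0.44, 0.53])

# Smaller-the-better signal-to-noise ratio: S/N = -10 log10(mean(y^2)).
sn = -10 * np.log10(Ra**2)

# Mean S/N per level of each factor; the level with the highest mean S/N
# is the Taguchi-optimal setting for that factor.
for f, name in enumerate(["feed rate", "speed", "depth of cut"]):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"{name}: S/N by level = {np.round(means, 2)}, "
          f"best level = {int(np.argmax(means))}")
```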

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 155
976 Critical Core Skills Profiling in the Singaporean Workforce

Authors: Bi Xiao Fang, Tan Bao Zhen

Abstract:

Soft skills, core competencies, and generic competencies are interchangeable terminologies often used to represent a similar concept. In the Singapore context, such skills are currently referred to as Critical Core Skills (CCS). In 2019, SkillsFuture Singapore (SSG) reviewed the Generic Skills and Competencies (GSC) framework first introduced in 2016, culminating in the development of the Critical Core Skills (CCS) framework comprising 16 soft skills classified into three clusters. The CCS framework is part of the Skills Framework, whose stated purpose is to create a common skills language for individuals, employers, and training providers. It was also developed with the objectives of building deep skills for a lean workforce, enhancing business competitiveness, and supporting employment and employability, which further helps to facilitate skills recognition and support the design of training programs for skills and career development. According to SSG, every job role requires a set of technical skills and a set of Critical Core Skills to perform well at work, whereby technical skills refer to the skills required to perform the key tasks of the job. There has been an increasing emphasis on soft skills for the future of work: a recent study involving approximately 80 organizations across 28 sectors in Singapore revealed that more enterprises are beginning to recognize that soft skills support their employees’ performance and business competitiveness. Though CCS is of high importance for the development of the workforce’s employability, little attention has been paid to CCS use and profiling across occupations. A better understanding of how CCS is distributed across the economy would thus significantly enhance SSG’s career guidance services, as well as training providers’ services to graduates and workers, and guide organizations in their hiring for soft skills. This CCS profiling study sought to understand how CCS is demanded in different occupations. To achieve its research objectives, this study adopted a quantitative method to measure CCS use across different occupations in the Singaporean workforce. Based on the CCS framework developed by SSG, the research team adopted a formative approach to developing a CCS profiling tool that measures the importance of, and self-efficacy in, the use of CCS among the Singaporean workforce. Drawing on survey results from 2,500 participants, the study profiled respondents into seven occupation groups based on distinct patterns in the importance and confidence levels of CCS use. Each occupation group is labeled according to the most salient and demanded CCS; at the same time, the CCS in each occupation group that may need further strengthening were also identified. The profiling of CCS use has significant implications for different stakeholders; for example, employers could leverage the profiling results to hire staff with the soft skills demanded by the job.
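
The abstract does not name the profiling technique; as one plausible illustration, the sketch below clusters simulated respondents' importance and self-efficacy ratings for the 16 CCS into seven groups with k-means. Everything here (the simulated ratings, the 1-5 scale, and k-means itself) is an assumption, not the study's actual method or data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One plausible profiling approach: k-means over each respondent's 32
# scores (assumed 1-5 importance and self-efficacy ratings for 16 CCS);
# k = 7 mirrors the seven occupation groups reported.
rng = np.random.default_rng(1)
n_respondents, n_ccs = 2500, 16
scores = rng.integers(1, 6, size=(n_respondents, 2 * n_ccs)).astype(float)

X = StandardScaler().fit_transform(scores)
labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(X)

# Per-group mean profile: the highest-rated column suggests the most
# salient CCS dimension with which to label each group.
for g in range(7):
    profile = scores[labels == g].mean(axis=0)
    print(f"group {g}: highest-rated column index = {int(np.argmax(profile))}")
```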

Keywords: employability, skills profiling, skills measurement, soft skills

Procedia PDF Downloads 96
975 The Significance of Picture Mining in Fashion and Design as a New Research Method

Authors: Katsue Edo, Yu Hiroi

Abstract:

Increasing attention has been paid to using pictures and photographs in social science research since the beginning of the 21st century. Meanwhile, we have been studying the usefulness of Picture Mining, one of the new approaches to such picture-based research. Picture Mining is an explorative research analysis method that extracts useful information from pictures, photographs, and static or moving images. It is often compared with the methods of text mining. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, identified as technological and social changes (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speeds, low costs for information transfer, and the high performance and resolution of mobile phone cameras have changed people’s photographing behavior. Consequently, there is less resistance to taking and processing photographs for most people in developing countries. In these studies, this method of collecting data from respondents is often called ‘participant-generated photography’ or ‘respondent-generated visual imagery’, which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic and conceptual studies that support the significance of these methods. In recent years we have worked to conceptualize these picture-based research methods and to formalize theoretical findings (Edo et al. 2014). We have identified, inductively and through case studies, the fields in which Picture Mining is most effective: 1) research in consumer and customer lifestyles; 2) new product development; 3) research in fashion and design. Though we have found that it should be useful in these fields, we must verify these assumptions. In this study we focus on the field of fashion and design, to determine whether Picture Mining methods are really reliable in this area. To do so, we conducted empirical research on respondents’ attitudes and behavior concerning pictures and photographs. We compared picture-taking attitudes and behavior for fashion with those for meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Respondents do not often take pictures of fashion or upload such pictures online, for instance to Facebook and Instagram, compared to meals and food, because of the difficulty of taking them. We conclude that we should be more careful in analyzing pictures in the fashion area, for some bias may still exist even though the environment for pictures has changed drastically in recent years.

Keywords: empirical research, fashion and design, Picture Mining, qualitative research

Procedia PDF Downloads 363