Search results for: processing individual
6330 Interpreting Possibilities: Teaching Without Borders
Authors: Mira Kadric
Abstract:
The proposed paper deals with a newly developed approach to interpreting teaching, combining traditional didactics with a new element. The fundamental principle of the approach is taken from theatre pedagogy (Augusto Boal's Theatre of the Oppressed) and includes a discussion of social power relations. From the point of view of educational sociology, this implies strengthening students' individual potential for self-determination on a number of levels, especially in view of the present increase in social responsibility. This knowledge constitutes a starting point and basis for the process of self-determined action. This takes place in the context of a creative didactic policy which identifies didactic goals, provides clear sequences of content, specifies interdisciplinary methods and examines their practical adequacy, and ultimately serves not only individual translators and interpreters but all parties involved. The goal of the presented didactic model is to promote independent work and problem-solving strategies; this helps to develop creative potential and self-confident behaviour. It also conveys realistic knowledge of professional reality and thus also of the real socio-political and professional parameters involved. As well as providing a discussion of fundamental questions relevant to Translation and Interpreting Studies, this also serves to improve this interdisciplinary didactic approach, which simulates interpreting reality and illustrates processes and strategies which (can) take place in real life. This idea is illustrated in more detail with methods taken from the Theatre of the Oppressed created by Augusto Boal, including examples from (dialogue) interpreting teaching based on documentation from recordings made in a seminar in the 2014 summer term.
Keywords: augusto boal, didactic model, interpreting teaching, theatre of the oppressed
Procedia PDF Downloads 432
6329 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data
Authors: Sara Bonetti
Abstract:
The use of quantitative data to support policies and justify investments has become imperative in many fields, including education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy who use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed to analyze these experiences in accessing, using, and producing quantitative data. The study used semi-structured interviews to capture differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department, to allow for a broader perspective at the systems level. The study followed Núñez's multilevel model of intersectionality. The key to Núñez's model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at individual, organizational, and system levels that might intersect and affect ECE professionals' experiences with quantitative data. At the individual level, an important element identified was the participants' educational background, as it was possible to observe a relationship between that background and their positionality, both with respect to working with data and with respect to their power within an organization and at the policy table.
For example, those with a background in child development were aware of how their formal education failed to train them in the skills necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants' position within the organization, their organization's position with respect to others, and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data is collected and available. These differences were reflected in the interviewees' perceptions and expectations for the ECE workforce. For example, one interviewee pointed out that many ECE professionals happen to use data out of the necessity of the moment. This lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students' training in working with data. In conclusion, Núñez's model helped disentangle the different elements that affect ECE professionals' experiences with quantitative data. In particular, it was clear that these professionals are not being provided with the necessary support and that the field is not intentional in building data literacy skills for them, despite what is asked of them in their work.
Keywords: data literacy, early childhood professionals, intersectionality, quantitative data
Procedia PDF Downloads 253
6328 The Relation between Cognitive Fluency and Utterance Fluency in Second Language Spoken Fluency: Studying Fluency through a Psycholinguistic Lens
Authors: Tannistha Dasgupta
Abstract:
This study explores the aspects of second language (L2) spoken fluency that are related to L2 linguistic knowledge and processing skill. It draws on Levelt's 'blueprint' of the L2 speaker, which discusses the cognitive issues underlying the act of speaking. However, L2 speaking assessments have largely neglected the underlying mechanisms involved in language production; emphasis is given to the relationship between subjective ratings of L2 speech samples and objectively measured aspects of fluency. Hence, this study examines the relation between L2 linguistic knowledge and processing skill, i.e. Cognitive Fluency (CF), and objectively measurable aspects of L2 spoken fluency, i.e. Utterance Fluency (UF). The participants of the study are L2 learners of English studying at high school level in Hyderabad, India. 50 participants with an intermediate level of proficiency in English performed several lexical retrieval tasks and attention-shifting tasks to measure CF, and 8 oral tasks to measure UF. Each aspect of UF (speed, pause, and repair) was measured against the scores of CF to find out which aspects of UF are reliable indicators of CF. Quantitative analysis of the data shows that among the three aspects of UF, speed is the best predictor of CF, while pause is only weakly related to CF. The study suggests that including the speed aspect of UF could make L2 fluency assessment more reliable, valid, and objective. Thus, incorporating the assessment of psycholinguistic mechanisms into L2 spoken fluency testing could result in fairer evaluation.
Keywords: attention-shifting, cognitive fluency, lexical retrieval, utterance fluency
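As an illustration of the analysis above, the speed aspect of UF (e.g. syllables per second) can be related to a CF composite score with a simple correlation. This is a minimal sketch only; the syllable counts, timings, and CF scores below are invented, not the study's data.

```python
import math

def speech_rate(syllables, total_time_s):
    # Speed aspect of utterance fluency: syllables per second
    return syllables / total_time_s

def pearson(xs, ys):
    # Pearson correlation between two equal-length score lists
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative per-learner data: (syllables produced, speaking time in seconds)
rates = [speech_rate(s, t) for s, t in [(120, 60), (150, 60), (90, 60), (180, 60)]]
cf_scores = [55, 70, 40, 85]  # hypothetical CF composite scores
r = pearson(rates, cf_scores)
```

A strong positive `r` in such data would support speed as a predictor of CF, in line with the study's finding.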
Procedia PDF Downloads 711
6327 Digitalisation of the Railway Industry: Recent Advances in the Field of Dialogue Systems: Systematic Review
Authors: Andrei Nosov
Abstract:
This paper discusses the development directions of dialogue systems within the digitalisation of the railway industry, where technologies based on conversational AI are already applied or will potentially be applied. Conversational AI is one of the most popular natural language processing (NLP) tasks, as it has great prospects for real-world applications today. At the same time, it is a challenging task, as it involves many areas of NLP that rely on complex computations and deep insights from linguistics and psychology. In this review, we focus on dialogue systems and their implementation in the railway domain. We comprehensively review the state-of-the-art research results on dialogue systems and analyse them from three perspectives: type of problem to be solved, type of model, and type of system. In particular, from the perspective of the type of task to be solved, we discuss characteristics and applications; this helps to understand how to prioritise tasks. In terms of the type of model, we give an overview that will allow researchers to become familiar with how to apply models in dialogue systems. By analysing the types of dialogue systems, we propose an unconventional approach, in contrast to colleagues who traditionally contrast goal-oriented dialogue systems with open-domain systems: our view focuses on considering retrieval and generative approaches. Furthermore, the work comprehensively presents evaluation methods and datasets for dialogue systems in the railway domain to pave the way for future research. Finally, some possible directions for future research are identified based on recent research results.
Keywords: digitalisation, railway, dialogue systems, conversational AI, natural language processing, natural language understanding, natural language generation
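The retrieval approach mentioned above can be illustrated with a minimal bag-of-words retriever that answers a query with the response attached to the most similar pooled utterance. This is a toy sketch; the railway FAQ pairs are invented for illustration.

```python
import math
from collections import Counter

def bow(text):
    # Bag-of-words vector as a word -> count mapping
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Tiny response pool (hypothetical railway FAQ pairs)
pool = {
    "when does the next train to berlin depart": "The next Berlin service departs at 14:32 from platform 3.",
    "how do i get a refund for my ticket": "Refunds can be requested online within 30 days of purchase.",
}

def retrieve(query):
    # Return the response whose pooled question is most similar to the query
    best = max(pool, key=lambda q: cosine(bow(query), bow(q)))
    return pool[best]

answer = retrieve("next train to berlin")
```

A generative system would instead produce the response token by token with a trained language model rather than selecting it from a fixed pool.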
Procedia PDF Downloads 63
6326 Homosexuality and Culture: A Case Study Depicting the Struggles of a Married Lady
Authors: Athulya Jayakumar, M. Manjula
Abstract:
Though there has been a shift in the understanding of homosexuality from being a sin, crime, or pathology in the medical and legal perspectives, the acceptance of homosexuality still remains very limited in the Indian subcontinent. The present case study concerns a 24-year-old female who has completed a diploma in polytechnic engineering and resides in the state of Kerala. She initially presented with her husband, with complaints of lack of sexual desire and non-cooperation on the part of the index client. After the initial few sessions, the client revealed, in an individual session, her homosexual orientation, which was unknown to her family. She has had multiple short-term relationships with females and never had any heterosexual orientation/interest. During her adolescence, she wondered whether she could change herself into a male; currently, however, she accepts her gender. She never wanted a heterosexual marriage but had to succumb to the pressure of her mother as a result of a series of unexpected incidents at home and agreed to the marriage, also with a hope that she might become bisexual. The client was able to bond with the husband emotionally, but the multiple attempts at sexual intercourse, at the insistence of the husband, had always been non-pleasurable and induced a sense of disgust. Currently, and for several months, there has not been any sexual activity. She also actively avoids any chance of warm communication with him, so as to avoid the possibility of him approaching her sexually. The case study is an attempt to highlight the culture and the struggles of a homosexual individual who comes to therapy wanting to be a 'normal wife' despite having knowledge of the legal rights and scenario. There is a scarcity of Indian literature that has systematically investigated issues related to homosexuality.
Data on prevalence, emotional problems faced, and clinical services available are sparse, though such data are crucial for increasing understanding of sexual behaviour, orientation, and the difficulties faced in India.
Keywords: case study, culture, cognitive behavior therapy, female homosexuality
Procedia PDF Downloads 345
6325 Early Marriage and Women's Empowerment: The Case of Child-Bride in East Hararghe Zone of Oromia National Regional State, Ethiopia
Authors: Emad Mohammed Sani
Abstract:
Women encounter exclusion and discrimination in varying degrees, particularly those who marry as minors. The detrimental custom of marrying young is still prevalent worldwide and affects millions of people. It has become less common over time, although it is still widespread in underdeveloped nations. Oromia Regional State is the region in Ethiopia with the highest proportion of child brides. This study aimed at evaluating the effects of early marriage on its survivors' life conditions, specifically empowerment and household decision-making, in the Eastern Hararghe Zone of Oromia Region. The study employed a community-based cross-sectional design and adopted a mixed-methods approach, combining a survey, in-depth interviews and focus group discussions (FGDs), to collect, analyse and interpret data on early marriage and its effects on household decision-making processes. Narratives and analytical descriptions were integrated to substantiate and/or explain the observed quantitative results, or to generate contextual themes. According to this study, women who married at or after the age of eighteen participated more in household decision-making than child brides. Child brides were more likely to be victims of violence and other types of spousal abuse in their marriages. These differences are mostly explained by an individual's age at first marriage: delaying marriage had a large positive impact on women's empowerment at the household level, while a low age at first marriage had a considerable negative impact. In order to advance women's welfare and emancipation, we recommend that further research concentrate on the relationship between the home and the social-structural forms that appear at the individual and communal levels.
Keywords: child-bride, early marriage, women, ethiopia
Procedia PDF Downloads 66
6324 Investigating the Morphological Patterns of Lip Prints and Their Effectiveness in Individualization and Gender Determination in Pakistani Population
Authors: Makhdoom Saad Wasim Ghouri, Muneeba Butt, Mohammad Ashraf Tahir, Rashid Bhatti, Akbar Ali, Abdul Rehman, Abdul Basit, Muzzamel Rehman, Shahbaz Aslam, Farakh Mansoor, Ahmad Fayyaz, Hadia Siddiqui
Abstract:
Lip print analysis (cheiloscopy) is an emerging technique that might serve as a guardian angel in establishing personal identity. Cheiloscopy is the study of the elevations and depressions present on the external surface of the lips. In our study, 600 lip print samples were taken (300 males and 300 females). The lip prints of each individual were divided into four quadrants and the upper middle portion. For general classification, the middle part of the lower lip, almost 10 mm wide, was taken into consideration. After analysis of the lip prints, our results show that lip prints are a unique and permanent character of every individual: no two lip prints matched, even those of identical twins. Our study reveals that lip print patterns are equally distributed among the four quadrants of the lips and the upper middle portion; these distributions were statistically analyzed by applying a chi-square test, which showed significant results. In the general classification, 5 lip print types/patterns were studied: Type 1 (vertical lines), Type 2 (branched pattern), Type 3 (intersected pattern), Type 4 (reticular pattern) and Type 5 (undetermined). Type 1 and Type 2 were found to be the most frequent patterns in the female population, while Type 3 and Type 4 were most common in the male population. These results were also analyzed by applying a chi-square test and were statistically significant, supporting sex determination on the basis of lip print type. Type 5 was the least common pattern in both genders.
Keywords: cheiloscopy, distribution, quadrants, sex determination
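A chi-square comparison of the kind applied above can be sketched as follows. The observed counts and the uniform null hypothesis are illustrative, not the paper's data.

```python
def chi_square(observed, expected):
    # Chi-square statistic: sum of (O - E)^2 / E over all categories
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts of the five pattern types in one quadrant (n = 100)
observed = [30, 25, 20, 15, 10]   # Type 1 .. Type 5
expected = [20] * 5               # uniform distribution under the null hypothesis

stat = chi_square(observed, expected)

# Critical value for df = 4 at alpha = 0.05 is 9.488
significant = stat > 9.488
```

With these counts the statistic exceeds the critical value, so the null hypothesis of a uniform pattern distribution would be rejected.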
Procedia PDF Downloads 298
6323 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancement over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic estimates.
Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
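One concrete step in such a pipeline, merging features extracted from the report text with image-derived values into a single row for the classifier, might look like the following sketch. The vocabulary, report text, and image feature values are invented for illustration.

```python
import re

def extract_text_features(report, vocab):
    # Term counts over a fixed vocabulary of radiological terms
    words = re.findall(r"[a-z]+", report.lower())
    return [words.count(term) for term in vocab]

def merge_features(text_feats, image_feats):
    # Concatenate text-derived and image-derived features into one row
    return list(text_feats) + list(image_feats)

vocab = ["opacity", "effusion", "cardiomegaly"]   # assumed term list
report = "Mild cardiomegaly with small left effusion."
image_feats = [0.42, 0.13]                        # e.g. values from an image model

row = merge_features(extract_text_features(report, vocab), image_feats)
```

Rows built this way, one per study, would then be fed to a Random Forest or similar classifier for training.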
Procedia PDF Downloads 44
6322 Sentiment Analysis of Chinese Microblog Comments: Comparison between Support Vector Machine and Long Short-Term Memory
Authors: Xu Jiaqiao
Abstract:
Text sentiment analysis is an important branch of natural language processing, widely used in public opinion analysis and browsing recommendations. At present, mainstream sentiment analysis methods fall into three categories: methods based on a sentiment dictionary, on traditional machine learning, and on deep learning. This paper analyzes and compares the advantages and disadvantages of the SVM method from traditional machine learning and the Long Short-Term Memory (LSTM) method from deep learning in the field of Chinese sentiment analysis, using Chinese comments on Sina Microblog as the dataset. First, this paper classifies and labels the original comment dataset obtained by a web crawler, then applies Jieba word segmentation to the dataset and removes stop words. After that, the paper extracts text feature vectors and builds document word vectors to facilitate the training of the models. Finally, SVM and LSTM models are trained separately. The accuracy of the LSTM model is 85.80%, while the accuracy of the SVM is 91.07%; at the same time, LSTM inference needs only 2.57 seconds, whereas the SVM model needs 6.06 seconds. This paper therefore concludes that, compared with the SVM model, the LSTM model is worse in accuracy but faster in processing speed.
Keywords: sentiment analysis, support vector machine, long short-term memory, Chinese microblog comments
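The preprocessing steps described above (segmentation, stop-word removal, and document vectors) can be sketched as follows. Plain whitespace tokenisation stands in for Jieba here, and the comments and stop-word list are illustrative.

```python
def tokenize(text):
    # Stand-in for Jieba segmentation: whitespace split on English text
    return text.lower().split()

def remove_stopwords(tokens, stopwords):
    # Drop uninformative function words before vectorisation
    return [t for t in tokens if t not in stopwords]

def doc_vector(tokens, vocab):
    # Term-frequency vector over a fixed, sorted vocabulary
    return [tokens.count(w) for w in vocab]

stopwords = {"the", "is", "a"}                      # illustrative list
comments = ["the service is great", "a terrible slow service"]

cleaned = [remove_stopwords(tokenize(c), stopwords) for c in comments]
vocab = sorted({w for toks in cleaned for w in toks})
vectors = [doc_vector(toks, vocab) for toks in cleaned]
```

Vectors like these (or dense word embeddings in the LSTM case) are what the SVM and LSTM models are trained on.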
Procedia PDF Downloads 94
6321 An Evaluation of Rational Approach to Management by Objectives in Construction Contracting Organisation
Authors: Zakir H. Shaik, Punam L. Vartak
Abstract:
Management by Objectives (MBO) is a management technique in which the objectives of an organisation are conveyed to the employees to establish individual goals. These objectives and goals are then monitored and assessed jointly by management and the employee from time to time. The tool can be used for planning and monitoring as well as for performance appraisal. The success of an organisation is largely dependent on its vision. Thus, it is of paramount importance to achieve the vision through a mission which is well crafted within the organisation to address the objectives. The success of the mission depends upon how realistic and action-oriented a philosophical approach the organisation adopts, and on how the individual goals are set to track and meet the objectives. Thus, the focused and passionate efforts of the team assigned to the mission are an absolute obligation for achieving the vision of any organisation. Any construction site is generally a controlled disorder with huge investments, resources and logistics involved. Construction progresses slowly, with many isolated as well as interconnected activities. The traditional MBO approach can be unsuccessful if planning and control are unrealistic and inflexible. Moreover, the construction industry is far behind in understanding these concepts. It is important to address employee engagement in defining objectives and creating awareness to achieve the targets. Besides, the current economic environment and competitive world demand refined management tools to achieve the profit, growth and survival of a business. Therefore, a rational MBO becomes a vital part of the success of an organisation. This paper details the philosophical assumptions used to develop a grounded theory for achieving objectives through a rational MBO approach in construction contracting organisations.
The goals and objectives of construction contracting organisations can be achieved efficiently by adopting this rational MBO approach, as it is based on realistic, logical and balanced assumptions.
Keywords: growth, leadership, management by objectives (MBO), profit, rational
Procedia PDF Downloads 153
6320 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) to detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, which used natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
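The idea of deriving structural representations from an AST can be illustrated on a toy scale with Python's `ast` module (the study itself targets Java and C++ codebases). This sketch extracts root-to-leaf node-type paths, a simplified stand-in for the leaf-to-leaf path contexts Code2Vec consumes.

```python
import ast

def leaf_paths(tree):
    # Collect root-to-leaf sequences of AST node-type names
    paths = []

    def walk(node, prefix):
        label = type(node).__name__
        children = list(ast.iter_child_nodes(node))
        if not children:
            paths.append("->".join(prefix + [label]))
        for child in children:
            walk(child, prefix + [label])

    walk(tree, [])
    return paths

source = "def f(x):\n    return x + 1\n"
paths = leaf_paths(ast.parse(source))
```

Even these coarse paths capture structure (e.g. that an addition occurs inside a return statement) that a bag-of-tokens view would lose.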
Procedia PDF Downloads 107
6319 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work, we use the discrete Proper Orthogonal Decomposition (POD) transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical simulations obtained from finite element models. The outcomes of the analysis will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, both in large-scale structures (aeronautical structures) and in nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. When using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time-POD transform is a powerful tool for processing such databases and will be used to study the coupled dynamics of basic thin-walled structures. These structures are ideal as a basis for a systematic study of coupled dynamics in structures of complex geometry.
Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin walled beams
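The core of the POD transform, extracting dominant spatial modes and their energy fractions from a snapshot database via the singular value decomposition, can be sketched as follows. The rank-2 synthetic field below is illustrative, not a finite element result.

```python
import numpy as np

# Snapshot matrix: each column is the sampled field at one time step.
# Two spatial shapes with distinct time dynamics give an exactly rank-2 field.
x = np.linspace(0.0, 1.0, 40)
t = np.linspace(0.0, 1.0, 50)
X = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
     + 0.1 * np.outer(np.sin(2 * np.pi * x), np.sin(4 * np.pi * t)))

# POD modes are the left singular vectors; squared singular values
# rank each mode's share of the total energy.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)

# A rank-2 reconstruction recovers the field to numerical precision
X2 = U[:, :2] @ np.diag(s[:2]) @ Vt[:2]
```

In practice the snapshot columns would come from the finite element database, and the decay of `energy` indicates how many modes carry the coupled dynamics.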
Procedia PDF Downloads 418
6318 European Food Safety Authority (EFSA) Safety Assessment of Food Additives: Data and Methodology Used for the Assessment of Dietary Exposure for Different European Countries and Population Groups
Authors: Petra Gergelova, Sofia Ioannidou, Davide Arcella, Alexandra Tard, Polly E. Boon, Oliver Lindtner, Christina Tlustos, Jean-Charles Leblanc
Abstract:
Objectives: To assess chronic dietary exposure to food additives in different European countries and population groups. Method and Design: The European Food Safety Authority's (EFSA) Panel on Food Additives and Nutrient Sources added to Food (ANS) estimates chronic dietary exposure to food additives with the purpose of re-evaluating food additives that were previously authorized in Europe. For this, EFSA uses concentration values (usage and/or analytical occurrence data) reported through regular public calls for data by the food industry and European countries. These are combined, at the individual level, with national food consumption data from the EFSA Comprehensive European Food Consumption Database, which includes data from 33 dietary surveys in 19 European countries and considers six population groups (infants, toddlers, children, adolescents, adults and the elderly). The EFSA ANS Panel estimates dietary exposure for each individual in the EFSA Comprehensive Database by combining the occurrence levels per food group with the corresponding consumption amount per kg body weight. An individual average exposure per day is calculated, resulting in distributions of individual exposures per survey and population group. Based on these distributions, the average and 95th percentile of exposure are calculated per survey and per population group. Dietary exposure is assessed based on two different sets of data: (a) maximum permitted levels (MPLs) of use set down in EU legislation (defined as the regulatory maximum level exposure assessment scenario) and (b) usage levels and/or analytical occurrence data (defined as the refined exposure assessment scenario). The refined exposure assessment scenario is sub-divided into the brand-loyal consumer scenario and the non-brand-loyal consumer scenario.
For the brand-loyal consumer scenario, the consumer is considered to be exposed on a long-term basis to the highest reported usage/analytical level for one food group and at the mean level for the remaining food groups. For the non-brand-loyal consumer scenario, the consumer is considered to be exposed on a long-term basis to the mean reported usage/analytical level for all food groups. Additional exposure from sources other than the direct addition of food additives (i.e. natural presence, contaminants, and carriers of food additives) is also estimated, as appropriate. Results: Since 2014, this methodology has been applied in about 30 food additive exposure assessments conducted as part of scientific opinions of the EFSA ANS Panel. For example, under the non-brand-loyal scenario, the highest 95th percentile of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) was estimated in toddlers, at up to 5.9 and 8.7 mg/kg body weight/day, respectively. The same estimates under the brand-loyal scenario in toddlers resulted in exposures of 8.1 and 20.7 mg/kg body weight/day, respectively. For the regulatory maximum level exposure assessment scenario, the highest 95th percentile of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) was estimated in toddlers, at up to 11.9 and 30.3 mg/kg body weight/day, respectively. Conclusions: Detailed and up-to-date information on food additive concentration values (usage and/or analytical occurrence data) and food consumption data enables the assessment of chronic dietary exposure to food additives at more realistic levels.
Keywords: α-tocopherol, ammonium phosphatides, dietary exposure assessment, European Food Safety Authority, food additives, food consumption data
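The per-individual exposure calculation described in the methodology, combining occurrence levels per food group with consumption per kg body weight, can be sketched as follows. The food groups, occurrence levels, and consumption amounts are invented for illustration, not EFSA data.

```python
def daily_exposure(consumption_g_per_kg_bw, occurrence_mg_per_kg_food):
    # Chronic exposure in mg additive per kg body weight per day.
    # Consumption is in g food per kg body weight per day and occurrence
    # in mg additive per kg food, so divide by 1000 to convert g to kg.
    return sum(consumption_g_per_kg_bw[g] * occurrence_mg_per_kg_food[g] / 1000.0
               for g in occurrence_mg_per_kg_food)

# Mean occurrence levels (non-brand-loyal scenario), mg additive / kg food
mean_levels = {"flavoured drinks": 50.0, "fine bakery wares": 200.0}

# One individual's consumption, g food / kg body weight / day
consumption = {"flavoured drinks": 20.0, "fine bakery wares": 5.0}

exposure = daily_exposure(consumption, mean_levels)  # mg / kg bw / day
```

Repeating this per individual in a survey yields the distribution from which the mean and 95th percentile are then taken per population group.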
Procedia PDF Downloads 325
6317 Modeling Average Paths Traveled by Ferry Vessels Using AIS Data
Authors: Devin Simmons
Abstract:
At the USDOT's Bureau of Transportation Statistics, a biannual census of ferry operators in the U.S. is conducted, with results such as route mileage used to determine federal funding levels for operators. AIS data makes it possible to use GIS software and geographical methods to confirm operator-reported mileage for individual ferry routes. As part of the USDOT's work on the ferry census, an algorithm was developed that uses AIS data for ferry vessels in conjunction with known ferry terminal locations to model the average route travelled, for use both as a cartographic product and as confirmation of operator-reported mileage. AIS data from each vessel is first analyzed to determine individual journeys based on the vessel's velocity and changes in velocity over time. These trips are then converted to geographic linestring objects. Using the terminal locations, the algorithm then determines whether each trip represents a known ferry route. Given a large enough dataset, routes will be represented by multiple trip linestrings, which are then filtered by DBSCAN spatial clustering to remove outliers. Finally, the remaining trips are averaged into one route. The algorithm interpolates the point on each trip linestring that represents the start of the journey. From these start points, a centroid is calculated, and the first point of the average route is determined. Each trip is interpolated again to find the point that represents one percent of the journey's completion, and the centroid of those points is used as the next point in the average route, and so on until 100 points have been calculated. Routes created using this algorithm have shown demonstrable improvement over previous methods, which included the implementation of a LOESS model. Additionally, the algorithm greatly reduces the amount of manual digitizing needed to visualize ferry activity.
Keywords: ferry vessels, transportation, modeling, AIS data
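The averaging step of the algorithm, interpolating each trip at the same fraction of journey completion and taking the centroid of those points, can be sketched as follows. The two straight planar trips are illustrative stand-ins for real AIS linestrings.

```python
import math

def interpolate_at(line, frac):
    # Point at a given fraction (0..1) of a linestring's total length
    seg_lengths = [math.hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(line, line[1:])]
    target = frac * sum(seg_lengths)
    for (x1, y1), (x2, y2), seg in zip(line, line[1:], seg_lengths):
        if target <= seg:
            f = target / seg if seg else 0.0
            return (x1 + f * (x2 - x1), y1 + f * (y2 - y1))
        target -= seg
    return line[-1]

def average_route(trips, steps=100):
    # Centroid of same-fraction points across trips, at 0%, 1%, ..., 100%
    route = []
    for k in range(steps + 1):
        pts = [interpolate_at(trip, k / steps) for trip in trips]
        route.append((sum(p[0] for p in pts) / len(pts),
                      sum(p[1] for p in pts) / len(pts)))
    return route

# Two illustrative trips between the same pair of terminals
trips = [[(0.0, 0.0), (10.0, 0.0)], [(0.0, 2.0), (10.0, 2.0)]]
route = average_route(trips)
```

With real AIS data the interpolation would operate on projected coordinates (or great-circle distances) after the DBSCAN outlier filtering described above.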
Procedia PDF Downloads 176
6316 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a lung disease that creates congestion in the chest. In severe cases, such congestion can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or Covid-19-induced pneumonia. Early prediction and classification of such lung diseases help to reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract features automatically from each pre-processed CT image, yielding an effective 1D feature vector per input image. The output of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model covers training and classification using different classifiers. Simulation outcomes on a publicly available dataset demonstrate the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification
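The Min-Max step is the one piece of the pipeline simple enough to state exactly; below is a minimal pure-Python sketch (the surrounding CNN and classifiers are not reproduced).

```python
# Min-Max normalization: rescale the 1D feature vector produced for each
# CT image to [0, 1] before passing it to the downstream classifiers.
def min_max_normalize(features):
    """Rescale a 1D feature vector to the [0, 1] range."""
    lo, hi = min(features), max(features)
    if hi == lo:                       # constant vector: map everything to 0
        return [0.0 for _ in features]
    return [(x - lo) / (hi - lo) for x in features]

feats = [12.0, 30.0, 21.0, 12.0]
print(min_max_normalize(feats))  # [0.0, 1.0, 0.5, 0.0]
```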
Procedia PDF Downloads 155
6315 MXene-Based Self-Sensing of Damage in Fiber Composites
Authors: Latha Nataraj, Todd Henry, Micheal Wallock, Asha Hall, Christine Hatter, Babak Anasori, Yury Gogotsi
Abstract:
Multifunctional composites with enhanced strength and toughness for superior damage tolerance are essential for advanced aerospace and military applications. Detection of structural changes prior to visible damage may be achieved by incorporating fillers with tunable properties, such as two-dimensional (2D) nanomaterials with high aspect ratios and more surface-active sites. While 2D graphene, with its large surface area, good mechanical properties, and high electrical conductivity, seems ideal as a filler, its single-atom thickness can lead to bending and rolling during processing, requiring post-processing to bond it to polymer matrices. An emerging family of 2D transition metal carbides and nitrides, MXenes, has attracted much attention since their discovery in 2011. Metallic electronic conductivity and good mechanical properties, even with increased polymer content, coupled with hydrophilicity, make MXenes a good candidate as a filler material in polymer composites and exceptional as multifunctional damage indicators in composites. Here, we systematically study MXene (Ti₃C₂) coatings on glass fibers in fiber-reinforced polymer composites for self-sensing, using microscopy and micromechanical testing. Further testing is in progress through the investigation of local variations in optical, acoustic, and thermal properties within the damage sites in response to strain caused by mechanical loading.Keywords: damage sensing, fiber composites, MXene, self-sensing
Procedia PDF Downloads 121
6314 Challenges and Professional Perspectives for Pedagogy Undergraduates with Specific Learning Disability: A Greek Case Study
Authors: Tatiani D. Mousoura
Abstract:
Specific learning disability (SLD) in higher education has been only partially explored in Greece so far. Moreover, opinions on professional perspectives for university students with SLD are scarcely encountered in Greek research. The present research examines perceptions of the hidden character of SLD, the university policy towards it, and the professional perspectives that result from this policy. This study uses the case of a Greek Tertiary Pedagogical Education Department (Early Childhood Education). Via mixed methods, data have been collected from different groups of people in the Pedagogical Department: students with and without SLD, academic staff, and administration staff, which offers the opportunity for triangulation of the findings. Qualitative methods include ten interviews with students with SLD, 15 interviews with academic staff, and 60 hours of observation of the students with SLD. Quantitative methods include 165 questionnaires completed by third- and fourth-year students and five questionnaires completed by the administration staff. Thematic analysis of the interview data and descriptive statistics on the questionnaire data were applied for processing the results. The use of medical terms to define and understand SLD was common in the student cohort, regardless of whether they had an SLD diagnosis. However, this medical-model approach is far more dominant in the group of students without SLD who, in the majority, hold misconceptions on a definitional level. The academic staff group seems to lean towards a social approach concerning SLD. According to them, diagnoses may lead to social exclusion. The Pedagogical Department generally endorses the principles of inclusion and complies with the provision of oral exams for students with SLD. Nevertheless, in practice, there seems to be a lack of regular academic support for these students.
When such support does exist, it is only through individual initiatives. With regard to their prospective profession, students with SLD can utilize their personal experience as well as their empathy; in comparison with other educators, these appear to be unique assets when it comes to teaching students in the future. In the Department of Pedagogy, provision for SLD remains sporadic; however, the vision of an inclusive department does exist. Based on their studies and their experience, pedagogy students with SLD claim that they have an experiential, internalized advantage for their future career as educators.Keywords: specific learning disability, SLD, dyslexia, pedagogy department, inclusion, professional role of SLDed educators, higher education, university policy
Procedia PDF Downloads 113
6313 Assessing the Double Burden of Malnutrition in Moroccan Women: A Focus on Iron Deficiency and Weight Disorders
Authors: Fall Abdourahmane, Lazrak Meryem, El Hsaini Houda, El Ammari Laila, Gamih Hasnae, Yahyane Abdelhakim, Benjouad Abdelaziz, Aguenaou Hassan, El Kari Khalid
Abstract:
Introduction: The double burden of malnutrition (DBM), defined by the concurrent occurrence of undernutrition and overnutrition, represents a critical public health issue, particularly in low- and middle-income countries. In Morocco, 61.3% of women of reproductive age (WRA) are classified as overweight or obese, with 30.4% meeting the criteria for obesity. Furthermore, 34.4% of WRA are affected by anaemia, and 49.7% present with iron deficiency anaemia. Objective: The objective of this study is to assess the individual-level prevalence of the DBM among Moroccan WRA, focusing on the simultaneous presence of iron deficiency anaemia and overweight/obesity. Methods: A national cross-sectional survey was carried out on a representative sample of 2090 Moroccan WRA. The data collected encompassed blood samples, anthropometric measurements and socio-economic factors. Haemoglobin levels were assessed using a Hemocue device, while ferritin and CRP levels were determined through immunoturbidimetric analysis. Results: The prevalence of overweight/obesity among WRA in Morocco was 60.2%, iron deficiency affected 30.6%, anaemia was found in 34.4%, and 50.0% had iron deficiency anaemia. The coexistence of overweight/obesity with anaemia was observed in 19.2% and with iron deficiency in 16.3%. Among overweight/obese women, 32.5% were anaemic, 28.4% had iron deficiency, and 47.6% had iron deficiency anaemia. The prevalence of DBM was higher in urban areas than in rural settings. Conclusion: The DBM among WRA reveals the interconnection and coexistence, at the individual level, of undernutrition and overnutrition. Effective, dual-purpose actions that simultaneously address both dimensions of the DBM must therefore be implemented for policy solutions to be successful.Keywords: the double burden of malnutrition, iron deficiency anemia, overweight, obesity
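A hedged sketch of what an individual-level DBM tally looks like: each woman counts toward the double burden only if she herself is both overweight/obese and anaemic. The record fields and cutoffs (BMI >= 25, haemoglobin < 12 g/dL) are standard WHO-style assumptions for illustration, not the study's exact case definitions.

```python
# Individual-level double-burden prevalence: a person contributes only if
# overnutrition (BMI >= 25) and undernutrition (Hb < 12 g/dL) coexist in
# the SAME individual, rather than comparing two population prevalences.
def dbm_prevalence(records):
    """Percent of individuals with BMI >= 25 and haemoglobin < 12 g/dL."""
    burdened = sum(1 for r in records if r["bmi"] >= 25 and r["hb"] < 12.0)
    return 100.0 * burdened / len(records)

sample = [
    {"bmi": 31.0, "hb": 10.9},  # obese and anaemic -> double burden
    {"bmi": 27.5, "hb": 13.1},  # overweight only
    {"bmi": 22.0, "hb": 11.2},  # anaemic only
    {"bmi": 24.0, "hb": 13.5},  # neither
]
print(dbm_prevalence(sample))  # 25.0
```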
Procedia PDF Downloads 20
6312 Exploring the Stressors Faced by Sportspersons: A Qualitative Study on Young Indian Sportspersons and Their Coping Strategies to Stress
Authors: Moyera Sanganeria
Abstract:
In the highly competitive landscape of contemporary sports, sportspersons worldwide encounter formidable challenges, often practicing for extensive hours while contending with limited social and physical resources. A growing number of sportspersons globally are sharing their struggles with depression, anxiety, and stress arising from the complex journey and identity associated with being a sportsperson. This qualitative study aims to investigate the challenges faced by sportspersons in individual versus team sports and explore potential gender-based variations in coping strategies. It attempts to do so by recognizing the imperative to comprehend the root causes and coping mechanisms for these stressors. By employing purposive sampling, MMA and Kabaddi players from training academies across Mumbai were selected for the study. Twelve participants were interviewed through semi-structured interviews guided by an interview guide. Reflective thematic analysis was employed to discern diverse stressors and coping strategies. Key stressors encountered by young Indian sportspersons encompass injuries, socio-economic challenges, financial constraints, escalating competition, and performance anxiety. Notably, individuals engaged in team sports tended to adopt emotion-focused coping mechanisms, while those in individual sports leaned more towards problem-focused coping strategies in response to these stressors. There were no prominent gender differences found in coping strategies employed by sportspersons. This study underscores the critical issue of declining mental health among sportspersons in India, emphasizing the necessity for a structured and customized mental health intervention strategy tailored to the unique needs of this population.Keywords: stressors, coping strategies, sports psychology, sportspersons, mental health
Procedia PDF Downloads 80
6311 People Who Live in Poverty Usually Do So Due to Circumstances Far Beyond Their Control: A Multiple Case Study on Poverty Simulation Events
Authors: Tracy Smith-Carrier
Abstract:
Burgeoning research extols the benefits of innovative experiential learning activities to increase participants’ engagement, enhance their individual learning, and bridge the gap between theory and practice. This presentation discusses findings from a multiple case study on poverty simulation events conducted with two samples: undergraduate students and community participants. After exploring the nascent research on the benefits and limitations of poverty simulation activities, the study explores whether participating in a poverty simulation resulted in changes to participants’ beliefs about the causes and effects of poverty, as well as shifts in their attitudes and actions toward people experiencing poverty. For the purposes of triangulation, quantitative and qualitative data from a variety of sources were analyzed: participant feedback surveys, qualitative responses, and pre-, post-, and follow-up questionnaires. Findings show statistically significant results (p<.05) from both samples on cumulative scores of the modified Attitudes Toward Poverty Scale, indicating an improvement in participants’ attitudes toward poverty. Although participants were generally positive about their experiences, the simulation did not appear to have prompted them to take specific actions to reduce poverty. Conclusions drawn from the research study suggest that poverty simulation planners should be wary of adopting scenarios that emphasize, or fail to adequately contextualize, behaviours or responses that might perpetuate individual explanations of poverty. Moreover, organizers must carefully consider how to ensure that participants currently experiencing low income do not become emotionally distressed, triggered or further marginalized in the process.
Moving beyond the goal of increasing participants’ understandings of poverty, interventions that foster greater engagement in poverty issues over the long-term are necessary.Keywords: empathy, experiential learning, poverty awareness, poverty simulation
Procedia PDF Downloads 267
6310 Mobile Augmented Reality for Collaboration in Operation
Authors: Chong-Yang Qiao
Abstract:
Mobile augmented reality (MAR) tracks targets in the surroundings and aids operators with interactive visualization of data and procedures, making equipment and systems more understandable. Operators communicate and coordinate with each other remotely for continuous tasks, exchanging information and data between the control room and the work site. In routine work, distributed control system (DCS) monitoring and work-site manipulation require operators to interact in real time. The critical question is how to improve the user experience of cooperative work by applying augmented reality in a traditional industrial field. The purpose of this exploratory study is to find a cognitive model of multiple-task performance with MAR. In particular, the focus is on the comparison between different tasks and the environmental factors which influence information processing. Three experiments used interface and interaction design, with start-up, maintenance, and stop procedures embedded in the mobile application. With time demands and human errors as evaluation criteria, and through analysis of the mental processes and behavioral actions during the multiple tasks, heuristic evaluation was used to assess operator performance under different situational factors and to record information processing in recognition, interpretation, judgment, and reasoning. The research identifies the functional properties of MAR and constrains the development of the cognitive model. It can be concluded that MAR is easy to use and useful for operators in remote collaborative work.Keywords: mobile augmented reality, remote collaboration, user experience, cognition model
Procedia PDF Downloads 197
6309 Unlocking the Future of Grocery Shopping: Graph Neural Network-Based Cold Start Item Recommendations with Reverse Next Item Period Recommendation (RNPR)
Authors: Tesfaye Fenta Boka, Niu Zhendong
Abstract:
Recommender systems play a crucial role in connecting individuals with the items they require, as is particularly evident in the rapid growth of online grocery shopping platforms. These systems predominantly rely on user-centered recommendations, where items are suggested based on individual preferences, garnering considerable attention and adoption. However, our focus lies on the item-centered recommendation task within the grocery shopping context. In the reverse next item period recommendation (RNPR) task, we are presented with a specific item and challenged to identify potential users who are likely to consume it in the upcoming period. Despite the ever-expanding inventory of products on online grocery platforms, the cold start item problem persists, posing a substantial hurdle in delivering personalized and accurate recommendations for new or niche grocery items. To address this challenge, we propose a Graph Neural Network (GNN)-based approach. By capitalizing on the inherent relationships among grocery items and leveraging users' historical interactions, our model aims to provide reliable and context-aware recommendations for cold-start items. This integration of GNN technology holds the promise of enhancing recommendation accuracy and catering to users' individual preferences. This research contributes to the advancement of personalized recommendations in the online grocery shopping domain. By harnessing the potential of GNNs and exploring item-centered recommendation strategies, we aim to improve the overall shopping experience and satisfaction of users on these platforms.Keywords: recommender systems, cold start item recommendations, online grocery shopping platforms, graph neural networks
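The item-centered RNPR task (given an item, rank the users likely to consume it in the coming period) can be illustrated with a deliberately simplified co-occurrence scorer. The paper's actual model is a GNN over the item graph; this stand-in only shows the shape of the task, and all names and data here are hypothetical.

```python
# Simplified RNPR-style ranking: score each user who has not yet bought the
# target item by how strongly their past items co-occur with it in baskets
# that do contain it. A GNN would propagate such signals over an item graph.
from collections import Counter

def rank_users_for_item(baskets, target):
    """baskets: {user: [items bought last period]}. Returns users by score."""
    # items that co-occur with the target in any basket containing it
    co = Counter()
    for items in baskets.values():
        if target in items:
            co.update(i for i in items if i != target)
    # score each user who has NOT bought the target by their overlap
    scores = {
        user: sum(co[i] for i in items)
        for user, items in baskets.items() if target not in items
    }
    return sorted(scores, key=scores.get, reverse=True)

baskets = {
    "u1": ["milk", "bread", "oat_drink"],
    "u2": ["bread", "eggs"],
    "u3": ["rice", "beans"],
}
print(rank_users_for_item(baskets, "oat_drink"))  # ['u2', 'u3']
```

u2 ranks first because bread co-occurred with the target item in u1's basket; for a cold-start item with no history of its own, the graph-based model would fall back on item attributes and neighborhood structure instead.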
Procedia PDF Downloads 90
6308 Estimation of Greenhouse Gas (GHG) Reductions from Solar Cell Technology Using Bottom-up Approach and Scenario Analysis in South Korea
Authors: Jaehyung Jung, Kiman Kim, Heesang Eum
Abstract:
Solar cells are one of the main technologies for reducing greenhouse gas (GHG) emissions. Accurate estimation of the GHG reduction achieved by solar cell technology is therefore crucial when considering its strategic applications. A bottom-up approach using operating data, such as operation time and efficiency, is one methodology for improving the accuracy of the estimation. In this study, alternative GHG reductions from solar cell technology were estimated by a bottom-up approach for indirect emission sources (scope 2) in Korea in 2015. In addition, a scenario-based analysis was conducted to assess the effect of technological change with respect to efficiency improvement and rate of operation. To estimate GHG reductions from solar cell activities at the level of operating conditions, methodologies were derived from the 2006 IPCC Guidelines for National Greenhouse Gas Inventories and the guidelines for local government greenhouse inventories published in Korea in 2016. Indirect emission factors for electricity were obtained from the Korea Power Exchange (KPX) in 2011. As a result, the annual alternative GHG reduction was estimated at 21,504 tonCO2eq, with an annual average of 1,536 tonCO2eq per solar cell installation. These estimates correspond to 91% of the design capacity. Estimation of individual greenhouse gases (GHGs) showed that the largest contributor was carbon dioxide (CO2), which accounted for up to 99% of the total. The annual average GHG reduction from solar cells per year and unit installed capacity (MW) was estimated at 556 tonCO2eq/yr•MW. In the scenario analysis, efficiency improvements of 5%, 10%, and 15% increased the annual GHG reductions by approximately 30%, 61%, and 91%, respectively, and a 100% rate of operation increased them by 4%.Keywords: bottom-up approach, greenhouse gas (GHG), reduction, scenario, solar cell
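The bottom-up arithmetic reduces to: avoided emissions = capacity x hours x capacity factor x grid emission factor, with scenarios scaling the operating terms. A sketch under assumed, illustrative figures (not the study's KPX emission factor or operating data):

```python
# Bottom-up estimate of avoided grid emissions for a solar installation.
# All numeric inputs below are illustrative assumptions, not study values.
def ghg_reduction_tco2eq(capacity_mw, hours, capacity_factor, ef_tco2_per_mwh):
    """tCO2eq avoided = electricity generated (MWh) x grid emission factor."""
    generated_mwh = capacity_mw * hours * capacity_factor
    return generated_mwh * ef_tco2_per_mwh

base = ghg_reduction_tco2eq(1.0, 8760, 0.15, 0.45)        # one MW-year
improved = ghg_reduction_tco2eq(1.0, 8760, 0.15 * 1.10, 0.45)  # +10% efficiency
print(round(base, 1), round(improved / base, 2))
```

With these illustrative inputs, the base case lands in the same order of magnitude as the study's reported 556 tonCO2eq/yr•MW average, and a 10% efficiency scenario scales the result by exactly 1.1x.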
Procedia PDF Downloads 220
6307 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution
Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques
Abstract:
The creation of vector contour slices (ROIs) on body silhouettes of oncologic patients is an important step during radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. Radiotherapy planning is performed by complex software packages focused on the analysis of tumor regions, protection of organs at risk (OARs), and calculation of radiation doses for anomalies (tumors). These packages are supplied by a few manufacturers and run on sophisticated workstations with vector processing, costing approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally do not have the monetary conditions to acquire radiotherapy planning workstations, resulting in waiting queues for the treatment of new patients. The SIPRAD project is composed of a set of integrated and interoperable software components able to execute all stages of radiotherapy planning on ordinary personal computers (PCs) in place of the workstations. The goal of this work is to present a computationally feasible image processing technique that performs automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique operates on grayscale tomography slices and is extended to three dimensions with a greedy algorithm. SIPRAD-Body creates an irregular polyhedron using an adapted Canny edge algorithm without preprocessing filters such as contrast and brightness adjustment. In addition, comparing SIPRAD-Body with existing solutions, a contour similarity of at least 78% is reached. Four criteria are used for this comparison: contour area, contour length, difference between the mass centers, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP).
The exams came from patients with different ethnicities, ages, tumor severities, and body regions. Even for services that already have workstations, SIPRAD can work alongside them on PCs thanks to interoperable communication between both systems through the DICOM protocol, which increases workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible, given its degree of similarity, for both new radiotherapy planning services and existing ones.Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)
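Of the four comparison criteria, the Jaccard index is easily made concrete: treat each contour as a set of filled pixels and take intersection over union. A minimal sketch (not the SIPRAD evaluation code):

```python
# Jaccard index of two binary contour masks given as sets of (row, col)
# pixels: |A n B| / |A u B|, where 1.0 means the contours are identical.
def jaccard(mask_a, mask_b):
    """Intersection-over-union of two pixel sets."""
    if not mask_a and not mask_b:
        return 1.0                    # two empty contours are identical
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union

a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(jaccard(a, b))  # 3 shared of 5 total -> 0.6
```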
Procedia PDF Downloads 296
6306 Will My Home Remain My Castle? Tenants’ Interview Topics regarding an Eco-Friendly Refurbishment Strategy in a Neighborhood in Germany
Authors: Karin Schakib-Ekbatan, Annette Roser
Abstract:
According to the Federal Government’s plans, the German building stock should be virtually climate neutral by 2050. Thus, the “EnEff.Gebäude.2050” funding initiative was launched, complementing the projects of the Energy Transition Construction research initiative. Beyond the construction and renovation of individual buildings, solutions must be found at the neighborhood level. The subject of the presented pilot project is a building ensemble from the Wilhelminian period in Munich, which is to be refurbished based on a socially compatible, energy-saving, technically innovative modernization concept. The building ensemble, with about 200 apartments, is part of a building cooperative. To create an optimized network and possible synergies between researchers and projects of the funding initiative, a Scientific Accompanying Research programme was established for cross-project analyses of findings and results in order to identify further research needs and trends. The project is thus characterized by an interdisciplinary approach that combines constructional, technical, and socio-scientific expertise, based on a participatory understanding of research that involves the tenants at an early stage. The research focus is on gaining insights into the tenants’ comfort requirements, attitudes, and energy-related behaviour. Both qualitative and quantitative methods are applied, based on the Technology Acceptance Model (TAM). The core of the refurbishment strategy is a wall heating system intended to replace conventional radiators. Wall heating provides comfortable and consistent radiant heat instead of convection heat, which often causes drafts and dust turbulence. Besides comfort and health, an advantage of wall heating systems is energy-saving operation. All apartments would be supplied by a uniform basic temperature control system (a perceived room temperature of around 18 °C, i.e. 64.4 °F), which could be adapted to individual preferences via individual heating options (e.g. infrared heating).
The new heating system would affect the furnishing of the walls, since the wall surface could no longer be covered extensively with cupboards or pictures. Measurements and simulations of the energy consumption of an installed wall heating system are currently being carried out in a show apartment in this neighborhood to investigate energy-related and economic aspects as well as thermal comfort. In March, interviews were conducted with a total of 12 people in 10 households. The interviews were analyzed using MAXQDA. The main issue in the interviews was the fear of reduced self-efficacy within the tenants’ own walls (not having sufficient individual control over the room temperature, or being very limited in furnishing). Other issues concerned the impact that the construction works might have on daily life, such as noise or dirt. Despite their basically positive attitude towards a climate-friendly refurbishment concept, tenants were very concerned about the further development of the project, and they expressed a great need for information events. The results of the interviews will be used for project-internal discussions of the technical and psychological aspects of the refurbishment strategy, in order to design accompanying workshops with the tenants as well as to prepare a written survey involving all households in the neighbourhood.Keywords: energy efficiency, interviews, participation, refurbishment, residential buildings
Procedia PDF Downloads 126
6305 A 3D Bioprinting System for Engineering Cell-Embedded Hydrogels by Digital Light Processing
Authors: Jimmy Jiun-Ming Su, Yuan-Min Lin
Abstract:
Bioprinting has been applied to produce 3D cellular constructs for tissue engineering. Microextrusion printing is the most commonly used method; however, printing low-viscosity bioink is a challenge for it. Herein, we developed a new 3D printing system to fabricate cell-laden hydrogels via a DLP-based projector. The bioprinter is assembled from affordable equipment, including a stepper motor, a screw, an LED-based DLP projector, and open-source computer hardware and software. The system can use low-viscosity, photo-polymerized bioink to fabricate 3D tissue mimics in a layer-by-layer manner. In this study, we used gelatin methacrylate (GelMA) as the bioink for stem cell encapsulation. In order to reinforce the printed construct, surface-modified hydroxyapatite was added to the bioink. We demonstrated that silanization of hydroxyapatite improves the crosslinking at the interface between hydroxyapatite and GelMA. The results showed that incorporating silanized hydroxyapatite into the bioink enhanced the mechanical properties of the printed hydrogel; in addition, the hydrogel had low cytotoxicity and promoted the differentiation of embedded human bone marrow stem cells (hBMSCs) and retinal pigment epithelium (RPE) cells. Moreover, this bioprinting system can generate microchannels inside the engineered tissues to facilitate the diffusion of nutrients. We believe this 3D bioprinting system has the potential to fabricate various tissues for clinical applications and regenerative medicine in the future.Keywords: bioprinting, cell encapsulation, digital light processing, GelMA hydrogel
Procedia PDF Downloads 181
6304 Analysis of Labor Behavior Effect on Occupational Health and Safety Management by Multiple Linear Regression
Authors: Yulinda Rizky Pratiwi, Fuji Anugrah Emily
Abstract:
Occupational Safety and Health (OSH) management must be applied properly by all workers and pekarya in the company. The application of K3 (OSH) management has also become very important to prevent accidents. Violations of K3 rules have occurred from time to time; by 2015, the number of such violations, or so-called unsafe actions, tended to increase, until finally, in January 2016, the number of unsafe actions increased drastically. The trigger for this increase was a decline in the quality of K3 management practices, while the application of K3 management by each individual is thought to be influenced by that individual's attitudes and adherence to action observation guidelines. A decline in the quality of K3 management application may also increase the likelihood of accidents and losses for the company as well as for co-workers. The large and very significant jump in the number of unsafe actions in January 2016 means that Pertamina, as the national oil company, must make considerable effort to track how K3 management is implemented by every worker and pekarya, including at PT Pertamina EP Asset IV, Cepu Field. The effort to control the implementation of K3 management can be assessed from the attitudes and action-guideline adherence of the workers and pekarya. Using multiple linear regression, the influence of workers' and pekarya's attitudes and guideline adherence on the K3 management application can be quantified.
The results showed that a worker's or pekarya's K3 management application score increases by 0.764 for each one-unit increase in their attitude score, and by 0.754 for each one-unit increase in their action-guideline adherence score.Keywords: occupational safety and health, management of occupational safety and health, unsafe action, multiple linear regression
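Coefficients like the 0.764 and 0.754 reported above come from ordinary least squares with two predictors. Below is a self-contained sketch via the normal equations, run on synthetic data with a known exact relationship (not the Pertamina survey data).

```python
# Ordinary least squares by solving (X'X) b = X'y with Gaussian elimination.
# X includes a leading column of 1s for the intercept.
def ols(X, y):
    n = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)]
         for i in range(n)]                      # X'X
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]  # X'y
    for col in range(n):                         # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):                 # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, n))) / A[i][i]
    return coef

# columns: intercept, attitude score, guideline score; y = 1 + 2*att + 3*guide
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 2, 1]]
y = [1, 3, 4, 6, 8]
print([round(c, 6) for c in ols(X, y)])  # [1.0, 2.0, 3.0]
```

Reading the fitted coefficients the same way as the abstract: here the application score would rise by 2.0 per unit of attitude score and 3.0 per unit of guideline score, holding the other predictor fixed.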
Procedia PDF Downloads 230
6303 Dairy Products on the Algerian Market: Proportion of Imitation and Degree of Processing
Authors: Bentayeb-Ait Lounis Saïda, Cheref Zahia, Cherifi Thizi, Ri Kahina Bahmed, Kahina Hallali Yasmine Abdellaoui, Kenza Adli
Abstract:
Algeria is the leading consumer of dairy products in North Africa. However, the nutritional quality of these products remains largely unknown. The aim of this study is to characterise the dairy products available on the Algerian market in order to assess whether they constitute a healthy and safe choice. To do this, we collected data from the labelling of 390 dairy products, including cheese, yoghurt, UHT milk and milk drinks, infant formula, and dairy creams. We assessed their degree of processing according to the NOVA classification, as well as the proportion of imitation products. The study was carried out between March 2020 and August 2023. The results show that 88% of the products are ultra-processed: 84% for 'cheese', 92% for dairy creams, 92% for 'yoghurt', 100% for infant formula, 92% for margarines, and 36% for UHT milk/dairy drinks. As for imitation/analogue dairy products, the study revealed the following proportions: 100% for infant formula, 78% for butter/margarine, 18% for UHT milk/milk-based drinks, 54% for cheese, 2% for camembert, and 75% for dairy cream. The harmful effects of consuming ultra-processed products on long-term health are increasingly documented in dozens of publications. The findings of this study sound the alarm about the health risks to which Algerian consumers are exposed. Various scientific, economic, and industrial bodies need to be involved in order to safeguard consumer health in both the short and long term, and food awareness and education campaigns should be organised.Keywords: dairy, UPF, NOVA, yoghurt, cheese
Procedia PDF Downloads 35
6302 Proactive Change or Adaptive Response: A Study on the Impact of Digital Transformation Strategy Modes on Enterprise Profitability From a Configuration Perspective
Authors: Jing-Ma
Abstract:
Digital transformation (DT) is an important way for manufacturing enterprises to shape new competitive advantages, and choosing an effective DT strategy is crucial for enterprise growth and sustainable development. Rooted in strategic change theory, this paper incorporates the dimensions of managers' digital cognition, organizational conditions, and the external environment into the same strategic analysis framework, and integrates the dynamic QCA method and the PSM method to study the antecedent configurations of manufacturing enterprises' DT strategy modes and their impact on corporate profitability, based on data from listed manufacturing companies in China from 2015 to 2019. We find that the synergistic linkage of elements across these dimensions can form six equivalent paths to high-level DT, which can be summarized as resource-capability-dominated proactive change modes and adaptive response modes such as industry-guided resource replenishment, capability building under complex environments, market-industry synergy-driven transformation, and forced adaptation under peer pressure; managers' digital cognition plays a non-essential but crucial role in this process. Except for individual differences in the market-industry synergy-driven mode, the modes are stable across individuals and over time. However, it is worth noting that not all paths that result in high-level DT contribute to enterprise profitability: only high-level DT that results from matching the optimization of internal conditions with the external environment, such as industry technology and macro policies, has a significant positive impact on corporate profitability.Keywords: digital transformation, strategy mode, enterprise profitability, dynamic QCA, PSM approach
Procedia PDF Downloads 24
6301 Agile Smartphone Porting and App Integration of Signal Processing Algorithms Obtained through Rapid Development
Authors: Marvin Chibuzo Offiah, Susanne Rosenthal, Markus Borschbach
Abstract:
Certain research projects in computer science involve studying existing signal processing algorithms and developing improvements to them. Research budgets are usually limited, so there is little time for implementing the algorithms from scratch. It is therefore common practice to use implementations provided by other researchers as a template. These are most commonly provided in a rapid development, i.e. 4th generation, programming language, usually Matlab. Rapid development is a common method in computer science research for quickly implementing and testing newly developed algorithms, and is also a common task within agile project organization. The growing relevance of mobile devices in the computer market also gives rise to the need to demonstrate the successful execution and performance measurement of these algorithms on a mobile operating system and processor, particularly on a smartphone. Open mobile systems, such as Android, are most suitable for this task. Furthermore, in cases where the project's goal statement includes such a task, it is necessary to efficiently implement an interaction between the algorithm and a graphical user interface (GUI) that runs exclusively on the mobile device. This paper examines different proposed solutions for porting computer algorithms obtained through rapid development into a GUI-based Android smartphone app and evaluates their feasibility. Accordingly, the feasible methods are tested, and a short success report is given for each tested method.Keywords: SMARTNAVI, Smartphone, App, Programming languages, Rapid Development, MATLAB, Octave, C/C++, Java, Android, NDK, SDK, Linux, Ubuntu, Emulation, GUI
Procedia PDF Downloads 478