Search results for: automatic reporting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1619

179 Interconnections of Circular Economy, Circularity, and Sustainability: A Systematic Review and Conceptual Framework

Authors: Anteneh Dagnachew Sewenet, Paola Pisano

Abstract:

The concepts of circular economy, circularity, and sustainability are interconnected and together promote a more sustainable future. However, previous studies have mainly examined each concept individually, neglecting the relationships among them and the gaps in the existing literature. This study integrates and links these concepts to expand the theoretical and practical methods available to scholars and professionals in pursuit of sustainability. The aim of this systematic literature review is to comprehensively analyze and summarize the interconnections between circular economy, circularity, and sustainability, and to develop a conceptual framework that can guide practitioners and serve as a basis for future research. The review employed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. A total of 78 articles, retrieved from the Scopus and Web of Science databases, were analyzed; the selection of articles was based on predefined inclusion and exclusion criteria. The analysis summarized and systematized the conceptualizations of circularity and its relationship with the circular economy and long-term sustainability, addressing the question of how circularity is conceptualized and related to both. Key themes, theoretical frameworks, empirical findings, and conceptual gaps in the literature were identified. Through a rigorous analysis of scholarly articles, the study highlights the importance of integrating these concepts for a more sustainable future. It contributes to the existing literature by expanding the theoretical understanding of how these concepts relate to each other and by providing a conceptual framework that can guide practitioners in implementing circular economy strategies and serve as a basis for future research. By integrating these concepts, scholars and professionals can enhance the theoretical and practical methods in pursuit of a more sustainable future. The findings emphasize the need for a holistic approach to achieving sustainability goals and highlight conceptual gaps that can be addressed in future studies.

Keywords: circularity, circular economy, sustainability, innovation

178 A Strengths, Weaknesses, Opportunities, and Threats Analysis of Socialisation, Externalisation, Combination, and Internalisation Modes in Knowledge Management Practice: A Systematic Review of Literature

Authors: Aderonke Olaitan Adesina

Abstract:

Background: The paradigm shift to knowledge as the key to organizational innovation and competitive advantage has made the management of knowledge resources in organizations a mandate. A key component of the knowledge management (KM) cycle is knowledge creation, which research shows to be the result of the interaction between explicit and tacit knowledge. An effective knowledge creation process requires the use of the right model. The SECI (Socialisation, Externalisation, Combination, and Internalisation) model, proposed in 1995, is attested to be a preferred model for knowledge creation activities. The model has, however, been criticized by researchers, who raise concerns especially about its sequential nature. Therefore, this paper reviews extant literature on the practical application of each mode of the SECI model, from 1995 to date, with a view to ascertaining its relevance in modern-day KM practice. The study will establish trends of use with regard to the location and industry of use, and the interconnectedness of the modes. The main research question is: for organizational knowledge creation activities, is the SECI model indeed linear and sequential? In other words, does the model need to be revised for today’s KM practice? The review will generate a compendium of the usage of the SECI modes and propose a framework of use based on the strengths, weaknesses, opportunities, and threats (SWOT) findings of the study. Method: This study will employ the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate the usage and SWOT of the modes, in order to ascertain the success, or otherwise, of their sequential application in practice from 1995 to 2019. To achieve this purpose, four databases will be searched for open-access, peer-reviewed articles from 1995 to 2019. The year 1995 is chosen as the baseline because it was the year the first paper on the SECI model was published. The study will appraise relevant peer-reviewed articles under the search terms SECI (or its synonym, knowledge creation theory), socialization, externalization, combination, and internalization in the title, abstract, or keyword list. The review will include only empirical studies of knowledge management initiatives in which the SECI model and its modes were used. Findings: It is expected that the study will highlight the practical relevance of each mode of the SECI model, the linearity or otherwise of the model, and the SWOT of each mode. Concluding Statement: Organisations can, from the analysis, determine the modes of emphasis for their knowledge creation activities. It is expected that the study will support decision making in the choice of the SECI model as a strategy for the management of organizational knowledge resources, and in adopting the SECI model, or a remodeled version of it, as a theoretical framework in future KM research.

Keywords: combination, externalisation, internalisation, knowledge management, SECI model, socialisation

177 Process Safety Management Digitalization via SHEQTool Based on Occupational Safety and Health Administration (OSHA) and Center for Chemical Process Safety (CCPS) Guidelines: A Case Study in Petrochemical Companies

Authors: Saeed Nazari, Masoom Nazari, Ali Hejazi, Siamak Sanoobari Ghazi Jahani, Mohammad Dehghani, Javad Vakili

Abstract:

More than ever, digitalization is imperative for businesses to keep their competitive advantage, foster innovation, and reduce paperwork. To design and successfully implement digital transformation initiatives within a process safety management system, employees need to be equipped with the right tools, frameworks, and best practices. We developed a unique full-stack application, called SHEQTool, which is entirely dynamic and is based on our extensive expertise, experience, and client feedback, to support business processes, particularly operational safety management. We applied our best knowledge and the scientific methodologies published in CCPS and OSHA guidelines to streamline operations and integrated them into task management within petrochemical companies. We digitalized the main elements and sub-elements of their process safety management systems, such as hazard identification and risk management, training and communication, inspection and audit, management of critical changes, contractor management, permit to work, pre-start-up safety review, incident reporting and investigation, emergency response planning, personal protective equipment, occupational health, and action management, in a fully customizable manner with no programming needed by users. Feedback from the main actors within petrochemical plants highlights improved business performance and productivity, as well as easier tracking of their functions’ key performance indicators (KPIs), because the tool: (1) saves the time, resources, and costs of paperwork (digitalization); (2) reduces errors and improves performance within the management system by covering most of the organization’s daily software needs, reducing the complexity and associated costs of numerous tools and their required training (one-tool approach); (3) focuses on management systems, integrates functions, and puts them into traceable task management (RASCI and flowcharting); (4) helps the entire enterprise stay resilient to changes in processes, technologies, and assets with minimum cost (organizational resilience); (5) significantly reduces incidents and errors via world-class safety management programs and elements (simplification); (6) gives companies a systematic, traceable, risk-based, process-based, and science-based integrated management system (proper methodologies); and (7) helps business processes comply with ISO 9001, ISO 14001, ISO 45001, ISO 31000, best practices, and legal regulations through a PDCA approach (compliance).

Keywords: process, safety, digitalization, management, risk, incident, SHEQTool, OSHA, CCPS

176 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameter problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm to 1700 nm were used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value was determined with the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset poses a protein regression problem, while the second poses a variety classification problem. Deep convolutional neural networks (CNNs) have the potential to exploit spatio-spectral correlations within a hyperspectral image to estimate qualitative and quantitative parameters simultaneously. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolutions is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to form the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
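The SNV step mentioned above can be sketched in a few lines. This is a minimal illustration assuming each spectrum is a plain list of reflectance values; the study itself operates on full hyperspectral image crops:

```python
def snv(spectrum):
    """Standard normal variate: center each spectrum on its own mean
    and scale by its own standard deviation (removes per-sample
    multiplicative scatter effects)."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = (sum((x - mean) ** 2 for x in spectrum) / (n - 1)) ** 0.5
    return [(x - mean) / std for x in spectrum]

# hypothetical 4-band spectrum, corrected to zero mean / unit variance
corrected = snv([0.42, 0.45, 0.51, 0.48])
```

Because SNV normalizes each spectrum independently, it needs no reference spectrum, which is why it is a common baseline before chemometric modeling.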

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

175 Students’ Speech Anxiety in Blended Learning

Authors: Mary Jane B. Suarez

Abstract:

Public speaking anxiety (PSA), also known as speech anxiety, is persistent in traditional communication classes, especially for students who learn English as a second language. This anxiety intensified when communication skills assessments moved to online or remote modes of learning due to the dangers of the COVID-19 virus. Both teachers and students experienced great ambiguity about how to teach and learn speaking skills effectively amidst the pandemic. Communication skills assessments like public speaking, oral presentations, and student reporting took on new meanings through Google Meet, Zoom, and other online platforms. Though such technologies have paved the way for more creative ways for students to acquire and develop communication skills, the effectiveness of these assessment tools remains in question. This mixed-methods study aimed to determine the factors that affected the public speaking skills of students in a communication class; to probe the gaps in assessing the speaking skills of students attending online classes vis-à-vis the implementation of remote and blended modalities of learning; and to recommend ways to address the public speaking anxieties of students performing a speaking task online and to bridge the assessment gaps, in order to achieve a smooth segue from online to on-ground instruction toward a better post-pandemic academic milieu. Using a convergent parallel design, quantitative and qualitative data were reconciled by probing the public speaking anxiety of students and the potential assessment gaps encountered in an online English communication class under remote and blended learning. The convergent parallel design was applied in four phases. The first phase was data collection, where both quantitative and qualitative data were gathered using document reviews and focus group discussions. The second phase was data analysis: quantitative data were treated with statistical testing, particularly frequency, percentage, and mean, using Microsoft Excel and IBM Statistical Package for Social Sciences (SPSS) version 19, while qualitative data were examined using thematic analysis. The third phase merged the analysis results to compare the desired learning competencies against the actual learning competencies of students. The fourth phase was the interpretation of the merged data, which led to the finding that a significantly high percentage of students experienced public speaking anxiety whenever they delivered speaking tasks online. Assessment gaps were also identified by comparing the desired learning competencies of the formative and alternative assessments implemented with the actual speaking performances of students, showing that students' public speaking anxiety was not properly identified and addressed.
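The descriptive statistics named above (frequency, percentage, and mean) can be sketched directly; the scores below are made-up Likert-scale responses, not data from the study:

```python
from collections import Counter

def describe(scores):
    """Frequency, percentage, and mean for a list of Likert-scale scores."""
    freq = Counter(scores)                            # count per response value
    n = len(scores)
    pct = {k: 100 * v / n for k, v in freq.items()}   # share of each value, in %
    mean = sum(scores) / n
    return freq, pct, mean

# hypothetical anxiety ratings on a 1-5 scale
freq, pct, mean = describe([5, 4, 5, 3, 5, 4])
```

This mirrors what the Excel/SPSS treatment computes for each survey item before the quantitative results are merged with the thematic analysis.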

Keywords: blended learning, communication skills assessment, public speaking anxiety, speech anxiety

174 Social Media Data Analysis for Personality Modelling and Learning Styles Prediction Using Educational Data Mining

Authors: Srushti Patil, Preethi Baligar, Gopalkrishna Joshi, Gururaj N. Bhadri

Abstract:

In designing learning environments, instructional strategies can be tailored to the learning style of an individual to ensure effective learning. In this study, information shared on social media, specifically Facebook, is used to predict the learning style of a learner. Previous research has shown that Facebook data can be used to predict user personality: users with a particular personality exhibit an inherent pattern in their digital footprint on Facebook. The proposed work aims to correlate users' personalities, predicted from Facebook data, with their learning styles, predicted through questionnaires. For millennial learners, Facebook has become a primary means of information sharing and interaction with peers; thus, it can serve as a rich bed for research and can direct the design of learning environments. The authors conducted this study in an undergraduate freshman engineering course. Data from 320 freshman Facebook users were collected; the same users also participated in the learning style and personality prediction survey, which adopted Kolb's learning style questionnaire and the Big Five personality inventory. The users agreed to participate in this research and signed individual consent forms. A specific page was created on Facebook to collect user data such as personal details, status updates, comments, demographic characteristics, and egocentric network parameters; this data was captured by an application written in Python. The data captured from Facebook were subjected to text analysis using the Linguistic Inquiry and Word Count dictionary. Analysis of the data collected from the questionnaires reveals each student's personality and learning style. The results of the Facebook, learning style, and personality analyses were then fed into an automatic classifier trained using data mining techniques such as rule-based classifiers and decision trees, which predicts user personality and learning styles by analysing common patterns. Rule-based classifiers applied for text analysis help to categorize Facebook data as positive, negative, or neutral. Two models were trained in total: one to predict personality from Facebook data, and another to predict learning styles from the personalities. The results show that the classifier model has high accuracy, which makes the proposed method reliable for predicting user personality and learning styles.
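A toy sketch of the rule-based positive/negative/neutral categorization described above. The word lists are illustrative placeholders only; the study itself relied on the Linguistic Inquiry and Word Count dictionary rather than ad hoc sets:

```python
# placeholder polarity lexicons (NOT the LIWC categories used in the study)
POSITIVE = {"love", "great", "happy", "awesome"}
NEGATIVE = {"hate", "sad", "angry", "terrible"}

def classify_post(text):
    """Rule-based categorizer: count lexicon matches in the post and
    return the dominant polarity, or 'neutral' on a tie or no match."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

Each post's label can then serve as one input feature for the downstream personality model.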

Keywords: educational data mining, Facebook, learning styles, personality traits

173 Tourism Policy Challenges in Post-Soviet Georgia

Authors: Merab Khokhobaia

Abstract:

Research on the challenges of Georgian tourism policy is important, as tourism can play an increasing role in economic growth and in improving the country's standard of living, even with scarce resources, through improved creative approaches. It is also important to make correct decisions at the macroeconomic level, which will be reflected in the successful functioning of travel companies and, finally, in the improvement of the country's economic indicators. In order to orient sectoral policy correctly, it is important to determine precisely its role in the economy. Development of the travel industry has been considered one of the priorities in Georgia; the country has a unique cultural heritage and traditions, as well as plentiful natural resources, which are significant preconditions for the development of tourism. Despite these factors, the existing resources are not fully utilized. This work studies the subjective as well as objective reasons for the ineffective functioning of the sector. During the years of transformation experienced by Georgia, the role of the travel industry in the economic development of the country was the subject of continual discussion. Such assessments were often biased and did not rest on specific calculations. This topic became especially pertinent with the move to a market economy, because reliable statistical data have particular significance in the design of tourism policy. To study this issue in depth, this paper analyzes monetary as well as non-monetary indicators. The research draws widely on the system of tourism indicators; we analyzed the flaws in the reporting of tourism sector results in Georgia, identified existing defects, and offer recommendations for their improvement. For stable development, tourism, like other economic sectors, needs a well-designed policy from the perspective of national as well as local and regional development. Tourism policy must be drawn up to achieve efficiently the goals established in short-term and long-term dynamics at the national or regional scale of a specific country. The article focuses on the role and responsibility of state institutions in planning and implementing tourism policy. The government has various tools and levers which may positively influence these processes; these levers are especially important for the development of international as well as domestic tourism. Within the framework of this research, the regulatory documents in force in this industry were also analyzed, with attention to their modernization and the necessity of their compliance with European standards. It is a current priority to direct state policy toward supporting business by implementing infrastructural projects and by developing human resources, which may be achieved by supporting the relevant higher and vocational educational programs.

Keywords: regional development, tourism industry, tourism policy, transition

172 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface, or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries involving microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not possible for researchers and practitioners to manually extract and relate information from the different written resources. The current work proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e., free text. We therefore adopted an unsupervised approach, in which no annotated training data are necessary, and developed a system that classifies the text according to growth and development, drug effects, radiation effects, classification, and physiology of biofilms. A two-step procedure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools (RapidMiner v5.3), and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR v1.0.11. Unsupervised learning, the machine learning task of inferring a function to describe hidden structure in unlabeled data, was applied to the extracted datasets to develop classifiers, using WinPython 64-bit v3.5.4.0Qt5 and RStudio v0.99.467, which automatically classify the text into the mentioned categories. The developed classifiers were tested on a large dataset of biofilm literature, which showed that the proposed unsupervised approach is promising and well suited to semi-automatic labeling of the extracted relations. The entire body of information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers engaged in biofilm research, making their searches easy and efficient, as the keywords and genes can be mapped directly to the documents used for database development.
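The keyword-driven labeling step can be illustrated with a minimal sketch. The category keyword sets below are invented stand-ins for the metathesaurus terms, and documents are reduced to bags of words:

```python
# hypothetical keyword sets per category (stand-ins for metathesaurus terms)
CATEGORIES = {
    "drug effects": {"antibiotic", "drug", "resistance", "treatment"},
    "growth and development": {"growth", "adhesion", "maturation"},
    "physiology": {"metabolism", "quorum", "signaling"},
}

def label_document(text):
    """Assign the category whose keyword set overlaps the document's
    word set the most; fall back to 'unclassified' on no overlap."""
    words = set(text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"
```

Scaled to the full corpus, such overlap scores give the unsupervised system a way to group documents without any annotated training data.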

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

171 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data

Authors: M. Kharrat, G. Moreau, Z. Aboura

Abstract:

The latest advances in the weaving industry, combined with increasingly sophisticated materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with a 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the applied loading. The acoustic emission (AE) technique is a well-established tool for discriminating between damage mechanisms: suitable sensors are used during the mechanical test to monitor the structural health of the material, relevant AE features are extracted from the recorded signals, and the data are analyzed using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation setup was deployed in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, digital image correlation, acoustic emission, and micro-tomography. In this study, a multi-variable AE data analysis approach was developed to discriminate between the signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to cluster the AE data without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database, which was then used to construct a supervised classifier for automatic recognition of AE signals. Several materials with different constituents were tested under various loadings in order to feed and enrich the learning database. The methodology presented in this work was useful for refining the damage threshold for new-generation materials, and the damage mechanisms around this threshold were highlighted. The obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate the signals emitted by damage without resorting to spatial filtering or raising the AE detection threshold. The approach was validated on different material configurations: for the same material and the same type of loading, the identified classes are reproducible and stable. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
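The unsupervised clustering step can be illustrated with a minimal k-means. Real AE analyses cluster multi-dimensional feature vectors (amplitude, energy, counts, frequency content), so this one-dimensional version on a single hypothetical feature is only a sketch:

```python
def kmeans_1d(values, k, iters=20):
    """Minimal k-means on one AE feature (e.g. peak amplitude):
    assign each value to the nearest centroid, then recompute each
    centroid as the mean of its cluster, and repeat."""
    # crude initialization: evenly spaced picks from the sorted values
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[nearest].append(v)
        # keep the old centroid if a cluster ends up empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters
```

Once the clusters are labeled against the other instrumentation (microscopy, tomography), they become the training classes for the supervised stage.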

Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition

170 Affects Associations Analysis in Emergency Situations

Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko

Abstract:

Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations invisible at first glance is a source of new knowledge that can subsequently be used for prediction. We used this data mining technique, an automatic and objective method, to learn about interesting associations of affects in a corpus of emergency phone calls, and we attempted to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3,306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy); (2) valence (negative, neutral, or positive); (3) intensity (low, typical, alternating, high). Additional information that provides clues to the speaker's emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words), and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item being a single previously annotated piece of information). To generate rules of the form X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used, as it avoids performing needless computations. Two basic measures (support and confidence) and several additional symmetric and asymmetric objective measures (e.g., Laplace, conviction, interest factor, cosine, correlation coefficient) were then calculated for each rule. Each applied interestingness measure revealed different rules, and we selected the top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions, though there are strong rules including neutral or even positive emotions. Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone call corpus. The acquired knowledge can be used for prediction, to complete the emotional profile of a new caller. Furthermore, analysis of a rule's possible context may give a clue to the situation the caller is in.
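The two basic measures named above follow directly from their definitions. In this sketch, each 'transaction' is the set of affects annotated on one call; the example calls are fabricated, not rules from the corpus:

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of the itemset."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, lhs, rhs):
    """Confidence of the rule lhs -> rhs: support(lhs | rhs) / support(lhs)."""
    return support(transactions, set(lhs) | set(rhs)) / support(transactions, lhs)

# fabricated annotation sets, one per emergency call
calls = [
    {"sadness", "anxiety"},
    {"sadness", "anxiety", "stress"},
    {"anger", "frustration"},
    {"sadness"},
]
```

For these toy transactions, support({sadness}) = 0.75 and confidence({sadness} → {anxiety}) = 2/3; the Apriori algorithm prunes the exponential search by keeping only itemsets whose support exceeds a chosen threshold.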

Keywords: data mining, emergency phone calls, emotional profiles, rules

169 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merge of various research methods, which sometimes causes complications related to the verification of the research results. This methodological confusion can be resolved by means of creating new techniques of linguistic analysis combining several scientific paradigms. Modern linguistics has developed really productive and efficient methods for the investigation of cognitive and communicative phenomena of which language is the central issue. In the field of discourse studies, one of the best examples of research methods is the method of Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation, as well as a critical multidisciplinary perspective. In CDA the position of the scholar is crucial from the point of view exemplifying his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special well-defined scientific method for researching special types of language phenomena: cognitive methods applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of communicative nature of language. In the recent decades discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between cognitive and communicative planes of discourse communication, it doesn’t make much sense to apply cognitive and communicative methods of research taken in isolation. It is possible to modify the classical CDA procedure by means of mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. 
The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated that cognitive and communicative levels can be mapped in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new, complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 228
168 Just Child Protection Practice for Immigrant and Racialized Families in Multicultural Western Settings: Considerations for Context and Culture

Authors: Sarah Maiter

Abstract:

Heightened globalization, migration, displacement of citizens, and refugee needs are placing increasing demands on social services for diverse populations to respond to families in ways that ensure the safety and protection of vulnerable members while providing supports and services. Alongside this, social work's refocus on socially just approaches to practice increasingly asks social workers to consider the challenging circumstances of families when providing services, rather than focusing on individual shortcomings alone. Child protection workers then struggle to ensure the safety of children while assessing the needs of families. This assessment can prove difficult when providing services to immigrant, refugee, and racially diverse families, as understanding of and familiarity with these families is often limited. Furthermore, child protection intervention in Western countries is state mandated, carrying legal authority when intervening in the lives of families where child protection concerns have been identified. Within this context, racialized immigrant and refugee families are at risk of misunderstandings that can result in interventions that are overly intrusive, unhelpful, and harsh. Research shows disproportionality and overrepresentation of racial and ethnic minorities and immigrant families in the child protection system. Reasons noted include: a) possibilities of racial bias in reporting and substantiating abuse; b) struggles on the part of workers when working with families who are immigrants from diverse ethno-racial backgrounds and may have limited proficiency in the national language of the country; c) interventions during crisis and differential ongoing services for these families; d) diverse contexts of these families that pose additional challenges for families and children; and e) possible differential definitions of child maltreatment.
While cultural and ethnic diversity in child-rearing approaches has been cited as a contributor to child protection concerns, this approach should be viewed cautiously, as it can result in stereotyping and generalizing that then lead to inappropriate assessment and intervention. However, poverty and the lack of social supports, both well-known contributors to child protection concerns, also impact these families disproportionately. Child protection systems, therefore, need to continue to examine policy and practice approaches with these families that ensure the safety of children while balancing the needs of families. This presentation provides data from several research studies that examined definitions of child maltreatment among a sample of racialized immigrant families, experiences of a sample of immigrant families with the child protection system, concerns of a sample of child protection workers in the provision of services to these families, and struggles of families in the transition to their new country. These studies, along with others, provide insights into areas of consideration for practice that can contribute to safety for children while ensuring just and equitable responses that have greater potential for keeping families together rather than premature apprehension and removal of children to state care.

Keywords: child protection, child welfare services, immigrant families, racial and ethnic diversity

Procedia PDF Downloads 267
167 Domestic Violence Against Women (With Special Reference to India): A Human Rights Issue

Authors: N. B. Chandrakala

Abstract:

Domestic violence is one of the most under-reported crimes. A central problem with domestic violence is that it is not even considered abuse in many parts of the world, especially certain parts of Asia, Africa, and the Middle East, where it is viewed as "doing the needful". Domestic violence can take the form of emotional harassment, physical injury, or psychological abuse perpetrated by one family member against another. It is a worldwide phenomenon mainly targeting women, and its acts of violence have a terrible negative impact on them. It is also an infringement of women's rights and can safely be termed a human rights abuse. In cases of domestic violence, male adults often misuse their authority and power to control another person by physical or psychological means. Violence and other forms of abuse are common in these situations: sexual assault, molestation, and battering occur frequently. Domestic violence is a human rights issue and a serious deterrent to development. It can also take subtle forms, such as making a person feel worthless or denying the victim any personal space or freedom. The problematic aspect is that cases of domestic violence are very rarely reported. The majority of victims are women, but children are also made to suffer silently: they are abused and neglected, and their innocent minds are adversely affected by incidents of domestic violence. According to a report by the World Health Organization (WHO), sexual trafficking, female feticide, dowry death, public humiliation, and physical torture are some of the most common forms of domestic violence against Indian women. Such acts belie our growth and our claim to be an economic superpower. It is ironic that we claim to be one of the most rapidly advancing countries in the world, and yet we have done hardly anything of note against social hazards like domestic violence. Laws are not stringent when it comes to reporting acts of domestic violence.
Even if a report is filed, it turns out to be a long-drawn process, and not every victim has the resources to fight till the end. It is also a social taboo to make family matters public. The big challenge now is to enforce the law in the true sense. The steps actually needed are tough laws against domestic violence, speedy execution, and a change in the mindset of society; only then can we expect some improvement in such inhuman cases. An effective response to violence must be multi-sectoral: addressing the immediate practical needs of women experiencing abuse; providing long-term follow-up and assistance; and focusing on changing those cultural norms, attitudes, and legal provisions that promote the acceptance of, and even encourage, violence against women and undermine women's enjoyment of their full human rights and freedoms. Hence, responses to the problem must be based on an integrated approach, and the effectiveness of measures and initiatives will depend on the coherence and coordination associated with their design and implementation.

Keywords: domestic violence, human rights, sexual assaults, World Health Organization

Procedia PDF Downloads 518
166 Tax Administration Constraints: The Case of Small and Medium Size Enterprises in Addis Ababa, Ethiopia

Authors: Zeleke Ayalew Alemu

Abstract:

This study investigates tax administration constraints in Addis Ababa, with a focus on small and medium-sized enterprises, by identifying issues and constraints in tax administration and assessment. It identifies problems associated with taxpayers and tax-collecting authorities in the city. The research used qualitative and quantitative research designs, employing questionnaires, focus group discussions (FGDs), and key informant interviews (KIIs) for primary data collection, as well as secondary data from different sources. The study identified many constraints that taxpayers face. Among others, the inefficiency of tax administration offices, reluctance to respond to taxpayers' questions, limited tax assessment and administration knowledge and skills, and corruption and unethical practices are the major ones. Besides, the tax laws and regulations are complex and not enforced equally and fully on all taxpayers, leading to a prevalence of business entities that do not pay taxes. This apparently results in an uneven playing field. Consequently, the tax system at present is neither fair nor transparent and increases compliance costs. In the case of a dispute, the appeal process is excessively long, and the tax authority's decision is irreversible. The Value Added Tax (VAT) administration and compliance system is not well designed, and VAT has created economic distortion between VAT-registered and non-registered taxpayers. Cash register machine administration and the reporting system are major burdens for taxpayers. With regard to taxpayers, there is a lack of awareness of tax laws and documentation. Based on these and other findings, the study puts forward recommendations such as ensuring fairness and transparency in tax collection and administration, enhancing the efficiency of tax authorities through modern technologies and upgraded human resources, conducting extensive awareness-creation programs, and enforcing tax laws in a fair and equitable manner.
The objective of this study is to assess the problems, weaknesses, and limitations of small and medium-sized enterprise taxpayers, tax authority administrations, and laws as sources of inefficiency and dissatisfaction, in order to forward recommendations that bring about efficient, fair, and transparent tax administration. The entire study was conducted in a participatory and process-oriented manner by involving all partners and stakeholders at all levels. Accordingly, the researcher used participatory assessment methods to generate both secondary and primary data, as well as both qualitative and quantitative data, in the field. The research team held focus group discussions with 21 people from Addis Ababa City Administration tax offices and selected medium and small taxpayers. The study team also interviewed 10 key informants selected from various segments of stakeholders. The lead researcher, along with research assistants, conducted the key informant interviews using a predesigned semi-structured questionnaire.

Keywords: taxation, tax system, tax administration, small and medium enterprises

Procedia PDF Downloads 46
165 [Keynote Talk]: Monitoring of Ultrafine Particle Number and Size Distribution at One Urban Background Site in Leicester

Authors: Sarkawt M. Hama, Paul S. Monks, Rebecca L. Cordell

Abstract:

Within the Joaquin project, ultrafine particles (UFP) are continuously measured at one urban background site in Leicester. The main aims are to examine the temporal and seasonal variations in UFP number concentration and size distribution in an urban environment, and to assess the added value of continuous UFP measurements. In addition, relations of UFP with more commonly monitored pollutants, such as black carbon (BC), nitrogen oxides (NOX), particulate matter (PM2.5), and the lung-deposited surface area (LDSA), were evaluated. The effects of meteorological conditions, particularly wind speed and direction, as well as temperature, on the observed distribution of ultrafine particles will be detailed. The study presents the results from an experimental investigation into the particle number concentration and size distribution of UFP, BC, and NOX, with measurements taken at the Automatic Urban and Rural Network (AURN) monitoring site in Leicester. The monitoring was performed as part of the EU project JOAQUIN (Joint Air Quality Initiative), supported by the INTERREG IVB NWE program. Between November 2013 and November 2015, the total number concentrations (TNC) were measured by a water-based condensation particle counter (W-CPC, TSI model 3783); the particle number concentrations (PNC) and size distributions were measured by an ultrafine particle monitor (UFP TSI model 3031); the BC by a MAAP (Thermo 5012); the NOX by a NO-NO2-NOx monitor (Thermo Scientific 42i); and a Nanoparticle Surface Area Monitor (NSAM, TSI 3550) was used to measure the LDSA (reported as μm² cm⁻³) corresponding to the alveolar region of the lung. Lower average particle number concentrations were observed in summer than in winter, which might be related mainly to particles directly emitted by traffic and to the more favorable atmospheric dispersion conditions in summer.
Results showed a traffic-related diurnal variation of UFP, BC, NOX, and LDSA, with clear morning and evening rush-hour peaks on weekdays and only an evening peak at weekends. Correlation coefficients were calculated between UFP and the other pollutants (BC and NOX); the highest correlations were found in the winter months. Overall, the results support the notion that local traffic emissions were a major contributor to atmospheric particle pollution, and a clear seasonal pattern was found, with higher values during the cold season.
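The correlation analysis described above can be sketched with a standard Pearson coefficient. A minimal sketch follows; the function name and the pollutant values are hypothetical illustrations, not data from the study.

```python
import numpy as np

def pollutant_correlation(ufp, other):
    """Pearson correlation between two pollutant time series, ignoring missing readings."""
    ufp = np.asarray(ufp, dtype=float)
    other = np.asarray(other, dtype=float)
    mask = ~(np.isnan(ufp) | np.isnan(other))  # drop hours where either reading is missing
    return np.corrcoef(ufp[mask], other[mask])[0, 1]

# Hypothetical hourly winter values: UFP number concentration (# cm^-3) vs. BC (ug m^-3)
ufp_winter = [12000, 15500, 9800, 20100, 17400]
bc_winter = [1.1, 1.6, 0.9, 2.2, 1.8]
r = pollutant_correlation(ufp_winter, bc_winter)
```

Repeating such a computation per season, as the study does, would show in which months the two pollutants track each other most closely.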

Keywords: size distribution, traffic emissions, UFP, urban area

Procedia PDF Downloads 310
164 Women’s Experience of Managing Pre-Existing Lymphoedema during Pregnancy and the Early Postnatal Period

Authors: Kim Toyer, Belinda Thompson, Louise Koelmeyer

Abstract:

Lymphoedema is a chronic condition caused by dysfunction of the lymphatic system, which limits the drainage of fluid and tissue waste from the interstitial space of the affected body part. The normal physiological changes of pregnancy place an increased load on a normal lymphatic system, which can result in transient lymphatic overload (oedema). The interaction between lymphoedema and pregnancy oedema is unclear. Women with pre-existing lymphoedema require accurate information and additional strategies to manage their lymphoedema during pregnancy. Currently, no resources are available to guide women or their healthcare providers with accurate advice and additional management strategies for coping with lymphoedema during pregnancy until they have recovered postnatally. This study explored the experiences of Australian women with pre-existing lymphoedema during recent pregnancy and the early postnatal period to determine how their usual lymphoedema management strategies were adapted and what their additional or unmet needs were. Interactions with their obstetric care providers, hospital maternity services, and usual lymphoedema therapy services were detailed. Participants were sourced from several Australian lymphoedema community groups, including therapist networks. Opportunistic sampling is appropriate for exploring this topic in a small target population, as lymphoedema in women of childbearing age is uncommon and prevalence data are unavailable. Inclusion criteria were: aged over 18 years, diagnosed with primary or secondary lymphoedema of the arm or leg, pregnant within the preceding ten years (since 2012), and having had their pregnancy and postnatal care in Australia. Exclusion criteria were a diagnosis of lipedema and inability to read or understand English at a reasonable level. A mixed-methods qualitative design was used in two phases.
This involved an online survey (REDCap platform) of the participants, followed by online semi-structured interviews or focus groups that provided the transcript data for inductive thematic analysis, to gain an in-depth understanding of the issues raised. Women with well-managed pre-existing lymphoedema coped well with the additional oedema load of pregnancy; however, those with limited access to quality conservative care prior to pregnancy were found to be significantly impacted by pregnancy, with many reporting deterioration of their chronic lymphoedema. Misinformation and a lack of support increased fear and apprehension in planning and enjoying the pregnancy experience. Collaboration between maternity and lymphoedema therapy services did not happen, despite study participants suggesting it. Helpful resources and unmet needs were identified in the recent Australian context to inform further research and the development of resources to assist women with lymphoedema who are pregnant or considering pregnancy, and their supporters, including healthcare providers.

Keywords: lymphoedema, management strategies, pregnancy, qualitative

Procedia PDF Downloads 55
163 Online Course of Study and Job Crafting for University Students: Development Work and Feedback

Authors: Hannele Kuusisto, Paivi Makila, Ursula Hyrkkanen

Abstract:

Introduction: There have been arguments about the skills university students should have upon graduation. Current thinking holds that, in addition to specific job-related skills, graduates need problem-solving, interaction, and networking skills, as well as self-management skills. Skills required in working life are also considered in the Finnish national project VALTE (short for 'prepared for working life'), which involves 11 Finnish school organizations. As one result of this project, a five-credit independent online course on study and job engagement, as well as on study and job crafting, was developed at Turku University of Applied Sciences. The aim of this presentation is to describe the online course developed in the project, the development work behind it, and the feedback received from the pilots. Method: As the University of Turku is the leading partner of the VALTE project, the collaborative education platform ViLLE (https://ville.utu.fi, developed by the University of Turku) was chosen as the online platform for the course. Various exercise types with automatic assessment were used: for example, quizzes, multiple-choice questions, classification exercises, gap-filling exercises, model-answer questions, self-assessment tasks, case tasks, and collaboration in Padlet. In addition, free material and free platforms on the Internet were used (YouTube, Padlet, TodaysMeet, and Prezi), as well as net-based questionnaires about study engagement and study crafting (made with Webropol). Three teachers with long teaching experience (including experience with job crafting and online pedagogy) and three students working as trainees in the project developed the content of the course. The online course was piloted twice in 2017 as an elective course for students at Turku University of Applied Sciences, a higher education institution of about 10,000 students.
After both pilots, feedback from the students was gathered and the online course was further developed. Results: As a result, a functional five-credit independent online course suitable for students of different educational institutions was developed. The student feedback shows that students themselves felt the course genuinely enhanced their job and study crafting skills. After the course, 91% of the students considered their knowledge of job and study engagement, as well as of job and study crafting, to be at a good or excellent level. About two-thirds of the students planned to make significant use of this knowledge in the future. Students appreciated the variability and game-like feel of the exercises, as well as the opportunity to study online at a time and place of their own choosing. On a five-point scale (1 being poor and 5 being excellent), the students rated the clarity of the ViLLE platform at 4.2, the functionality of the platform at 4.0, and the ease of use at 3.9.
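The simplest of the automatically assessed exercise types above, a multiple-choice quiz, can be illustrated with a minimal grader. The answer-key format and question names here are hypothetical, since the abstract does not describe ViLLE's internal grading logic.

```python
# Hypothetical answer key: question id -> correct option
ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}

def grade(submission):
    """Score a submission as the percentage of correctly answered questions."""
    correct = sum(submission.get(q) == ans for q, ans in ANSWER_KEY.items())
    return 100.0 * correct / len(ANSWER_KEY)

# Two correct answers out of three
print(round(grade({"q1": "b", "q2": "c", "q3": "a"}), 1))  # → 66.7
```

The same pattern generalizes to classification and gap-filling exercises by swapping the per-question comparison.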

Keywords: job crafting, job engagement, online course, study crafting, study engagement

Procedia PDF Downloads 133
162 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review

Authors: Sara A. Ben Lashihar

Abstract:

Heritage building information modelling (HBIM) of historical masonry buildings has expanded lately to meet urgent needs for conservation and structural analysis. Masonry structures are unique features of ancient building architecture worldwide, with special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modeling process for these structures. The HBIM modeling of masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most of these processes are based on tracing point clouds and rarely draw on documents, archival records, or direct observation. The results of these techniques are highly abstracted models whose accuracy does not exceed LOD 200. Masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modeled with standard BIM components or in-place models, and the brick textures are input graphically. Hence, future investigation is necessary to establish a methodology for automatically generating parametric masonry components, developed algorithmically according to mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art in existing research on HBIM modeling of masonry structural elements and the latest approaches to achieving parametric models with both visual fidelity and high geometric accuracy. The paper reviewed more than 800 articles, proceedings papers, and book chapters matching the keywords "HBIM" and "masonry" from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases such as Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using VOSviewer software.
This software extracts the main keywords in these studies to retrieve the relevant works and calculates the strength of the relationships between these keywords. Subsequently, an in-depth qualitative review followed the studies with the highest frequency of occurrence and the strongest links to the topic, according to the VOSviewer results. The qualitative review focused on the latest approaches and the future directions proposed in these studies. The findings of this paper can serve as a valuable reference for researchers and BIM specialists seeking to produce more accurate and reliable HBIM models of historic masonry buildings.
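The link-strength computation that such scientometric software performs can be illustrated in miniature: the strength of a keyword pair is the number of documents in which both keywords co-occur. A minimal sketch follows; the keyword sets are hypothetical examples, not the review's actual corpus.

```python
from itertools import combinations
from collections import Counter

def link_strengths(docs):
    """Count keyword co-occurrences across documents.

    The link strength of a keyword pair is the number of documents
    in which both keywords appear together.
    """
    pairs = Counter()
    for keywords in docs:
        # sort so each unordered pair has one canonical key
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

docs = [
    {"HBIM", "masonry", "point cloud"},
    {"HBIM", "masonry", "parametric"},
    {"HBIM", "parametric"},
]
strengths = link_strengths(docs)
print(strengths[("HBIM", "masonry")])  # → 2 (appear together in two documents)
```

Ranking pairs by this count is what surfaces the "strongest links with the topic" mentioned above.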

Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric

Procedia PDF Downloads 143
161 Stress, Anxiety and Its Associated Factors Within the Transgender Population of Delhi: A Cross-Sectional Study

Authors: Annie Singh, Ishaan Singh

Abstract:

Background: Transgender people are those whose gender identity differs from the sex assigned to them at birth; their gender behaviour does not match their body anatomy. The community faces discrimination due to gender identity all across the world. The term transgender is an umbrella term for many people who do not conform to their biological identity; note that it is distinct from gender dysphoria, a DSM-5 disorder defined as the distress experienced by an individual due to a non-conforming gender identity. Transgender people have been a part of Indian culture for ages, yet have continued to face exclusion and discrimination in society, which has led to the community's low socio-economic status. Various studies done across the world have established the role of discrimination, harassment, and exclusion in the development of psychological disorders. This study aimed to assess the frequency of stress and anxiety in the transgender population and to understand the various factors affecting them. Methodology: A cross-sectional survey of self-consenting transgender individuals above the age of 18 residing in Delhi was conducted to assess their socioeconomic status and experiential ecology. Participants were recruited with the help of NGOs. The survey incorporated GAD-7 and PSS-10, two well-known scales, to assess anxiety and stress levels. Medians, means, and ranges are used for reporting continuous data wherever required, while frequencies and percentages are used for categorical data. For associations and comparisons between groups in categorical data, the chi-square test was used, while the Kruskal-Wallis H test was employed for associations involving multiple ordinal groups. SPSS v28.0 was used to perform the statistical analysis. Results: The survey showed that the frequency of stress and anxiety is high in the transgender population, and the demographic data indicate a low socio-economic background.
44% of participants reported facing discrimination on a daily basis; the frequency of discrimination was higher in transwomen than in transmen, while stress and anxiety levels were similar in both groups. Only 34.5% of participants said they had receptive family or friends. The majority of participants (72.7%) reported a positive or neutral experience with healthcare workers. The prevalence of discrimination was significantly lower in the more highly educated groups. Analysis of the data shows a positive impact of acceptance and reception on mental health, while discrimination correlates with higher levels of stress and anxiety. Conclusion: The widespread transphobia and discrimination faced by the transgender community have culminated in high levels of stress and anxiety in the transgender population, varying with multiple socio-demographic factors. Educating people about the LGBT community, together with the formation of support groups, policies, and laws, is required to establish trust and promote integration.
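The Kruskal-Wallis H test named in the methodology can be written out from first principles. The study ran its analysis in SPSS; the sketch below is a from-scratch illustration of the statistic, and the PSS-10 scores for three education groups are hypothetical, not the study's data.

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent groups.

    Ties receive average ranks; no further tie correction is applied.
    """
    pooled = sorted((x, g) for g, group in enumerate(groups) for x in group)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1  # extend over a run of tied values
        avg_rank = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3.0 * (n + 1)

# Hypothetical PSS-10 stress scores for three education groups
low_edu = [28, 31, 25, 30, 27]
mid_edu = [24, 26, 22, 25, 28]
high_edu = [18, 21, 17, 20, 19]
h = kruskal_wallis_h(low_edu, mid_edu, high_edu)
print(round(h, 2))  # → 10.82, above the chi-square critical value 5.99 (df=2, p=0.05)
```

An H statistic above the chi-square critical value would indicate a significant difference in stress across the ordinal groups, matching the kind of association the study reports.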

Keywords: transgender, gender, stress, anxiety, mental health, discrimination, exclusion

Procedia PDF Downloads 92
160 An Informative Marketing Platform: Methodology and Architecture

Authors: Martina Marinelli, Samanta Vellante, Francesco Pilotti, Daniele Di Valerio, Gaetanino Paolone

Abstract:

Any development in web marketing technology requires changes in information engineering to identify instruments and techniques suitable for producing software applications for informative marketing. Moreover, for large web solutions, designing an interface that enables human interaction is a complex process that must bridge informative marketing requirements and the developed solution. A user-friendly interface in web marketing applications is crucial for a successful business. This paper introduces mkInfo, a software platform that implements informative marketing: a new interpretation of marketing which places information at the center of every marketing action. The creative team includes software engineering researchers who have recently authored an article on automatic code generation. The authors have created the mkInfo software platform to generate informative marketing web applications. For each web application, it is possible to automatically implement an opt-in page, a landing page, a sales page, and a thank-you page: one only needs to insert the content. mkInfo implements an autoresponder to send mail according to a predetermined schedule, and the platform also includes e-commerce for a product or service. A stakeholder can access any opt-in page and get basic information about a product or service. To learn more, they provide an e-mail address to access a landing page that triggers an e-mail sequence providing complete information about the product or the service. From this point on, the stakeholder becomes a user and is able to purchase the product or related services through the mkInfo platform. This paper suggests a possible definition of informative marketing, illustrates its basic principles, and finally details the mkInfo platform that implements it. It also offers some informative marketing models, which are implemented in the mkInfo platform.
Informative marketing can be applied to products or services. It is necessary to realize a web application for each product or service. The mkInfo platform enables the product or the service producer to send information concerning a specific product or service to all stakeholders. In conclusion, the technical contributions of this paper are: a different interpretation of marketing based on information; a modular architecture for web applications, particularly for one with standard features such as information storage, exchange, and delivery; multiple models to implement informative marketing; a software platform enabling the implementation of such models in a web application. Future research aims to enable stakeholders to provide information about a product or a service so that the information gathered about a product or a service includes both the producer’s and the stakeholders' point of view. The purpose is to create an all-inclusive management system of the knowledge regarding a specific product or service: a system that includes everything about the product or service and is able to address even unexpected questions.
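The autoresponder's "predetermined schedule" can be sketched as a list of offsets from the opt-in time. The step intervals and template names below are hypothetical, as the paper does not specify mkInfo's actual schedule.

```python
from datetime import datetime, timedelta

# Hypothetical e-mail sequence: (days after opt-in, mail template)
SEQUENCE = [(0, "welcome"), (2, "product_details"), (5, "purchase_offer")]

def schedule_sequence(opt_in_time):
    """Expand one stakeholder's opt-in into concrete (send_time, template) pairs."""
    return [(opt_in_time + timedelta(days=d), tpl) for d, tpl in SEQUENCE]

sends = schedule_sequence(datetime(2024, 1, 1, 9, 0))
print(sends[2][1])  # → purchase_offer
```

A real platform would persist these pairs and dispatch each mail when its send time arrives, but the schedule expansion itself is this simple.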

Keywords: informative marketing, opt in page, software platform, web application

Procedia PDF Downloads 107
159 Dose Saving and Image Quality Evaluation for Computed Tomography Head Scanning with Eye Protection

Authors: Yuan-Hao Lee, Chia-Wei Lee, Ming-Fang Lin, Tzu-Huei Wu, Chih-Hsiang Ko, Wing P. Chan

Abstract:

Computed tomography (CT) scanning of the head is a good method for investigating cranial lesions. However, radiation-induced oxidative stress can accumulate in the eyes and promote carcinogenesis and cataract formation. We therefore aimed to protect the eyes with barium sulfate shields during CT scans and investigate the resultant image quality and radiation dose to the eye. Patients who underwent health examinations were selectively enrolled in this study in compliance with the protocol approved by the Ethics Committee of the Joint Institutional Review Board at Taipei Medical University. Participants' brains were scanned, with a water-based marker alongside, by a multislice CT scanner (SOMATOM Definition Flash) under either a fixed tube current-time setting or automatic tube current modulation (TCM). The lens dose was measured by Gafchromic films, whose dose-response curve had previously been fitted using thermoluminescent dosimeters, with or without a barium sulfate or bismuth-antimony shield laid above. For the assessment of image quality, CT images at slice planes covering the regions of interest on the zygomatic, orbital, and nasal bones of the head phantom, as well as the water-based marker, were used to calculate signal-to-noise and contrast-to-noise ratios. The application of barium sulfate and bismuth-antimony shields decreased the lens dose by 24% and 47% on average, respectively. Under topogram-based TCM, the dose-saving power of the bismuth-antimony shield was mitigated, whereas that of the barium sulfate shield was enhanced. On the other hand, the signal-to-noise and contrast-to-noise ratios of the dual-source CT images were decreased separately by the barium sulfate and bismuth-antimony shields, resulting in an overall reduction of the contrast-to-noise ratio (CNR). In contrast, the integration of topogram-based TCM elevated the signal difference between the ROIs on the zygomatic bones and the eyeballs, while preferentially decreasing the signal-to-noise ratios upon the use of the barium sulfate shield.
The results of this study indicate that the balance between eye exposure and image quality can be optimized by combining eye shields with topogram-based TCM on the multislice scanner. Eye shielding can change the photon attenuation characteristics of tissues close to the shield; hence, the application of either shield for eye protection is not recommended when intraorbital lesions are sought.
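The signal-to-noise and contrast-to-noise ratios discussed above can be computed from pixel samples in regions of interest. The following is a minimal illustrative sketch using commonly assumed textbook definitions (the abstract does not give the study's exact formulas, so the definitions and sample values here are assumptions):

```python
import numpy as np

def snr_cnr(roi, background):
    """Signal-to-noise and contrast-to-noise ratios for a CT region of
    interest. Assumed definitions: SNR is the ROI mean over its own
    standard deviation; CNR is the absolute mean difference between ROI
    and background over the background standard deviation."""
    snr = roi.mean() / roi.std(ddof=1)
    cnr = abs(roi.mean() - background.mean()) / background.std(ddof=1)
    return snr, cnr

# Toy pixel samples from two regions of one slice (illustrative values).
roi = np.array([10.0, 12.0, 11.0, 9.0])
bg = np.array([2.0, 3.0, 2.0, 3.0])
print(snr_cnr(roi, bg))
```

A shield that raises the noise in either region lowers these ratios, which is the trade-off the study balances against dose savings.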

Keywords: computed tomography, barium sulfate shield, dose saving, image quality

Procedia PDF Downloads 246
158 The Lonely Entrepreneur: Antecedents and Effects of Social Isolation on Entrepreneurial Intention and Output

Authors: Susie Pryor, Palak Sadhwani

Abstract:

The purpose of this research is to provide the foundations for a broad research agenda examining the role loneliness plays in entrepreneurship. While qualitative research in entrepreneurship incidentally captures the existence of loneliness as a part of the lived reality of entrepreneurs, to the authors’ knowledge, no academic work has to date explored this construct in this context. Moreover, many individuals reporting high levels of loneliness (women, ethnic minorities, immigrants, low income, low education) reflect those who are currently driving small business growth in the United States. Loneliness is a persistent state of emotional distress which results from feelings of estrangement and rejection or develops in the absence of social relationships and interactions. Empirical work finds links between loneliness and depression, suicide and suicide ideation, anxiety, hostility and passiveness, lack of communication and adaptability, shyness, poor social skills and unrealistic social perceptions, self-doubts, fear of rejection, and negative self-evaluation. Lonely individuals have been found to exhibit lower levels of self-esteem, higher levels of introversion, lower affiliative tendencies, less assertiveness, higher sensitivity to rejection, a heightened external locus of control, intensified feelings of regret and guilt over past events and rigid and overly idealistic goals concerning the future. These characteristics are likely to impact entrepreneurs and their work. Research identifies some key dangers of loneliness. Loneliness damages human love and intimacy, can disturb and distract individuals from channeling creative and effective energies in a meaningful way, may result in the formation of premature, poorly thought out and at times even irresponsible decisions, and produce hard and desensitized individuals, with compromised health and quality of life concerns. 
The current study utilizes meta-analysis and text analytics to distinguish loneliness from related constructs (e.g., social isolation) and to categorize the antecedents and effects of loneliness across subpopulations. This work has the potential to materially contribute to the field of entrepreneurship by cleanly defining constructs and providing foundational background for future research. It offers a richer understanding of the evolution of loneliness and related constructs over the life cycle of entrepreneurial start-up and development. Further, it suggests preliminary avenues for exploration and methods of discovery that will result in knowledge useful to the field of entrepreneurship. It is useful both to entrepreneurs and to those who work with them, as well as to academics interested in the topics of loneliness and entrepreneurship. It adopts a grounded theory approach.

Keywords: entrepreneurship, grounded theory, loneliness, meta-analysis

Procedia PDF Downloads 96
157 Pyramid of Deradicalization: Causes and Possible Solutions

Authors: Ashir Ahmed

Abstract:

Generally, radicalization happens when a person's thinking and behaviour become significantly different from how most members of their society and community view social issues and participate politically. Radicalization often leads to violent extremism, which refers to the beliefs and actions of people who support or use violence to achieve ideological, religious or political goals. Studies on radicalization negate the common myths that someone must be in a group to be radicalized or that anyone who experiences radical thoughts is a violent extremist. Moreover, it is erroneous to suggest that radicalization is always linked to religion. The common motives of radicalization include ideological, issue-based, ethno-nationalist or separatist underpinnings. In addition, a number of factors further augment the chances of someone being radicalized and choosing the path of violent extremism and possibly terrorism. Since numerous (and sometimes quite different) factors contribute to radicalization and violent extremism, it is highly unlikely that a single solution could produce effective outcomes against radicalization, violent extremism and terrorism. The pathway to deradicalization, like the pathway to radicalization, is different for everyone. Considering the need for customized deradicalization measures, this study proposes a multi-tier framework, called the ‘pyramid of deradicalization’, that first helps identify the stage at which an individual may be on the radicalization pathway and then proposes a customized strategy for that stage. The first tier (tier 1) addresses the broader community and proposes a ‘universal approach’ offering community-based design and delivery of educational programs to raise awareness and provide general information on the factors leading to radicalization and their remedies.
The second tier focuses on members of the community who are more vulnerable and are disengaged from the rest of the community. This tier proposes a ‘targeted approach’, reaching vulnerable members of the community through early intervention, such as anonymous help lines where people feel confident and comfortable seeking help without fearing the disclosure of their identity. The third tier focuses on people showing clear evidence of moving toward extremism or becoming radicalized. People falling in this tier are best supported through an ‘interventionist approach’, which advocates community engagement and community policing, introduces deradicalization programmes to the targeted individuals, and looks after their physical and mental health. The fourth and last tier suggests strategies to deal with people who are actively breaking the law. The ‘enforcement approach’ covers strong law enforcement, fairness and accuracy in reporting radicalization events, unbiased treatment under the law regardless of gender, race, nationality or religion, and strengthened family connections. It is anticipated that operationalizing the proposed framework (the ‘pyramid of deradicalization’) would help in categorizing people according to their tendency to become radicalized and then offer an appropriate strategy to make them valuable and peaceful members of the community.

Keywords: deradicalization, framework, terrorism, violent extremism

Procedia PDF Downloads 241
156 Industrial Wastewater from Paper Mills Used for Biofuel Production and Soil Improvement

Authors: Karin M. Granstrom

Abstract:

Paper mills produce wastewater with a high content of organic substances. Treatment usually consists of sedimentation, biological treatment in activated sludge basins, and chemical precipitation. The resulting sludges are currently a waste problem, deposited in landfills or used as low-grade fuels for incineration. There is a growing awareness of the need for energy efficiency and environmentally sound management of sludge. A resource-efficient method would be to digest the wastewater sludges anaerobically to produce biogas, refine the biogas to biomethane for use in the transportation sector, and utilize the resulting digestate for soil improvement. The biomethane yield of pulp and paper wastewater sludge is comparable to that of straw or manure. As a bonus, the digestate has improved dewaterability compared to the feedstock biosludge. The main limitations of this process are its weak economic viability, which requires paper production at a sufficiently large scale to supply the necessary amounts of wastewater sludge, and unresolved questions on the certifiability of the digestate and thus its sales price. A way to improve the practical and economic feasibility of using paper mill wastewater for biomethane production and soil improvement is to co-digest it with other feedstocks. In this study, pulp and paper sludge was co-digested with (1) silage and manure, (2) municipal sewage sludge, (3) food waste, or (4) microalgae. Biomethane yield analysis was performed in 500 ml batch reactors, using an Automatic Methane Potential Test System at thermophilic temperature, with a 20-day test duration.
The results show that (1) the harvesting season of grass silage and manure collection was an important factor for methane production, with spring feedstocks producing much more than autumn feedstocks, and pulp mill sludge benefitting the most from co-digestion; (2) pulp and paper mill sludge is a suitable co-substrate to add when a high nitrogen content causes impaired biogas production due to ammonia inhibition; (3) the combination of food waste and paper sludge gave a higher methane yield than either substrate digested separately; (4) pure microalgae gave the highest methane yield. In conclusion, although pulp and paper mills are an almost untapped resource for biomethane production, their wastewater is a suitable feedstock for such a process. Furthermore, through co-digestion, pulp and paper mill wastewater and mill sludges can aid biogas production from more nutrient-rich waste streams from other industries. Such co-digestion also enhances the soil improvement properties of the residual digestate.

Keywords: anaerobic, biogas, biomethane, paper, sludge, soil

Procedia PDF Downloads 237
155 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a separate process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendors' local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate over the localhost interface without a load balancer between microservices A and B.
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Fortunately, container technology allows several processes to run on the same machine in an isolated manner, solving both the incompatibility of runtime dependencies and the security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
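The embedding example in the abstract (a1 with b1 on m1, a2 with b2 on m2) can be sketched as a simple replica-pairing rule. This is an illustrative toy, not the paper's formal method; the function name and data layout are assumptions for this sketch:

```python
def embed(replicas, channels):
    """Co-locate replica i of each communicating service pair on one
    machine, so that traffic on the channel uses the localhost interface
    instead of a load balancer over the cloud network.

    replicas maps service name -> replica count; channels is a list of
    (service_a, service_b) communication pairs. Returns one list of
    instance names per machine."""
    machines = []
    for a, b in channels:
        # Only as many pairs as the smaller replica count allows.
        for i in range(1, min(replicas[a], replicas[b]) + 1):
            machines.append([f"{a.lower()}{i}", f"{b.lower()}{i}"])
    return machines

# The abstract's example: services A and B with two instances each.
print(embed({"A": 2, "B": 2}, [("A", "B")]))
```

A real deployment tool would additionally check scalability behavior before aggregating, which is why the paper treats this as an optimization problem rather than a fixed rule.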

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 179
154 Changes of Chemical Composition and Physicochemical Properties of Banana during Ethylene-Induced Ripening

Authors: Chiun-C.R. Wang, Po-Wen Yen, Chien-Chun Huang

Abstract:

Bananas are produced in large quantities in tropical and subtropical areas. They are an important fruit and constitute a valuable source of energy, vitamins and minerals. The ripening and maturity standards of banana vary from country to country depending on the expected shelf life in the market. The composition of bananas changes dramatically during ethylene-induced ripening, with consequences for both nutritive value and commercial utilization. Nevertheless, few studies report the changes in the physicochemical properties of banana starch during ethylene-induced ripening of green banana. The objectives of this study were to investigate the changes in chemical composition and enzyme activity of banana, and in the physicochemical properties of banana starch, during ethylene-induced ripening. Green bananas were harvested and ripened with ethylene gas at low temperature (15℃) over seven stages. At each stage, banana was sliced and freeze-dried for banana flour preparation. The changes in total starch, resistant starch, chemical composition, physicochemical properties, and the activities of amylase, polyphenol oxidase (PPO) and phenylalanine ammonia lyase (PAL) were analyzed at each stage of ripening. The banana starch was isolated and analyzed for gelatinization properties, pasting properties and microscopic appearance at each stage of ripening. The results indicated that the total starch and resistant starch contents of green banana were highest at the harvest stage, at 76.2% and 34.6%, respectively. Both declined significantly, to 25.3% and 8.8%, respectively, by the seventh stage. The soluble sugar content of banana increased from 1.21% at the harvest stage to 37.72% at the seventh stage of ethylene-induced ripening. The swelling power of banana flour decreased as ripening progressed, but solubility increased.
These results correlate strongly with the decrease in the starch content of banana flour during ethylene-induced ripening. Both water-insoluble and alcohol-insoluble solids of banana flour decreased as ripening progressed. The activities of both PPO and PAL increased, while the total free phenolics content decreased, with advancing ripening stage. As ripening proceeded, the gelatinization enthalpy of banana starch decreased significantly, from 15.31 J/g at the harvest stage to 10.55 J/g at the seventh stage. In the pasting properties of banana starch, the peak viscosity and setback increased with the progress of ripening. The highest final viscosity of the banana starch slurry, 5701 RVU, was found at the seventh stage. Scanning electron micrographs showed that the banana starch granules appeared in round and elongated forms, ranging from 10 to 50 μm, at the harvest stage. As the banana approached ripeness, parallel striations were observed on the surface of the starch granules, which could be caused by enzyme action during ripening. These results suggest that the high resistant starch content of green banana at the harvest stage could be considered for potential application in healthy foods. The changes in chemical composition and physicochemical properties of banana could be caused by enzymatic hydrolysis during the ethylene-induced ripening treatment.

Keywords: ethylene-induced ripening, banana starch, resistant starch, soluble sugars, physicochemical properties, gelatinization enthalpy, pasting characteristics, microscopic appearance

Procedia PDF Downloads 450
153 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of an original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection focuses on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. (i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that precedes feature ranking: after dividing the feature set into groups, these approaches perform feature ranking in order to select the most discriminant feature of each group. (ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step before the search, bound to and combined with the search, or as a replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set by means of a class-correlation measure. Then, using a new automatic feature clustering algorithm, the feature set is divided into several feature clusters.
Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses the clustering algorithm bound to and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
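The three-step procedure can be illustrated with a much-simplified sketch in which the class-correlation measure, the clustering criterion and the separability measure are all collapsed into plain Pearson correlation. This is an assumption for illustration only; the paper's actual measures and clustering algorithm differ:

```python
import numpy as np

def cluster_select(X, y, n_select, redundancy_thresh=0.9):
    """Greedy filter-style selection in the spirit described above: rank
    features by |correlation| with the class label, then, each time a
    feature is picked, discard all features strongly correlated with it
    (treating them as its 'cluster') so they are never considered again."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected, discarded = [], set()
    for j in np.argsort(-relevance):
        if j in discarded or len(selected) == n_select:
            continue
        selected.append(int(j))
        # Eliminate the selected feature's redundant cluster-mates.
        for k in range(n_feat):
            if k != j and abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > redundancy_thresh:
                discarded.add(k)
    return selected

# Toy data: feature 1 is a near-copy of feature 0, feature 2 is noise.
rng = np.random.default_rng(0)
f0 = rng.normal(size=200)
X = np.column_stack([f0, f0 + 0.01 * rng.normal(size=200), rng.normal(size=200)])
y = (f0 > 0).astype(float)
print(cluster_select(X, y, n_select=2))
```

On this toy data the redundant near-copy is eliminated as soon as its cluster-mate is selected, which is the source of the speed-up claimed above.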

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic cooccurrence matrix

Procedia PDF Downloads 106
152 Attitude in Academic Writing (CAAW): Corpus Compilation and Annotation

Authors: Hortènsia Curell, Ana Fernández-Montraveta

Abstract:

This paper presents the creation, development, and analysis of a corpus designed to study the presence of attitude markers and the author's stance in research articles in two different areas of linguistics (theoretical linguistics and sociolinguistics). These two disciplines are expected to behave differently in this respect, given the disparity in their discursive conventions. Attitude markers in this work are understood as the linguistic elements (adjectives, nouns and verbs) used to convey the writer's stance towards the content presented in the article, and they are crucial in understanding writer-reader interaction and the writer's position. These attitude markers are divided into three broad classes: assessment, significance, and emotion. In addition to them, we also consider first-person singular and plural pronouns and possessives, modal verbs, and passive constructions, which are other linguistic elements expressing the author's stance. The corpus, the Corpus of Attitude in Academic Writing (CAAW), comprises a collection of 21 articles collected from six journals indexed in the JCR. These articles were originally written in English by a single native-speaker author from the UK or USA and were published between 2022 and 2023. The total number of words in the corpus is approximately 222,400, with 106,422 from theoretical linguistics journals (Lingua, Linguistic Inquiry and Journal of Linguistics) and 116,022 from sociolinguistics journals (International Journal of the Sociology of Language, Language in Society and Journal of Sociolinguistics). Together with the corpus, we present the tool developed for its creation and storage, along with a tool for automatic annotation. The steps followed in the compilation of the corpus are as follows. First, the articles were selected according to the parameters explained above. Second, they were downloaded and converted to txt format.
Finally, examples, direct quotes, section titles and references were eliminated, since they do not involve the author's stance. The resulting texts were the input for the annotation of the linguistic features related to stance. As for the annotation, two articles (one from each subdiscipline) were annotated manually by the two researchers. An existing list was used as a baseline, and other attitude markers were identified, together with the other elements mentioned above. Once a consensus was reached, the rest of the articles were annotated automatically using the tool created for this purpose. The annotated corpus will serve as a resource for scholars working in discourse analysis (both in linguistics and communication) and related fields, since it offers new insights into the expression of attitude. The tools created for the compilation and annotation of the corpus will be useful for studying the author's attitude and stance in articles from any academic discipline: new data can be uploaded, and the list of markers can be enlarged. Finally, the tool can be expanded to other languages, which will allow cross-linguistic studies of the author's stance.
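The core of such list-based automatic annotation can be sketched in a few lines. The miniature marker lexicon below is invented for illustration (the CAAW lists are the authors' own and are not reproduced here); only the three-class split into assessment, significance and emotion mirrors the description above:

```python
import re

# Hypothetical miniature marker lexicon; the real agreed-upon lists
# are larger and were refined by the two annotators.
MARKERS = {
    "assessment": ["important", "crucial", "valid"],
    "significance": ["notably", "remarkably", "central"],
    "emotion": ["surprising", "striking", "interesting"],
}

def annotate(text):
    """Return (offset, token, class) triples for every whole-word marker
    occurrence in the text, sorted by position."""
    hits = []
    for cls, words in MARKERS.items():
        for w in words:
            for m in re.finditer(rf"\b{re.escape(w)}\b", text, re.IGNORECASE):
                hits.append((m.start(), m.group(0), cls))
    return sorted(hits)

print(annotate("This is an important and, notably, surprising finding."))
```

Extending the tool to new markers or another language then amounts to enlarging or swapping the lexicon, which is exactly the kind of expansion the authors envisage.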

Keywords: academic writing, attitude, corpus, english

Procedia PDF Downloads 41
151 The Role of People in Continuing Airworthiness: A Case Study Based on the Royal Thai Air Force

Authors: B. Ratchaneepun, N.S. Bardell

Abstract:

It is recognized that people are the main drivers in almost all the processes that affect airworthiness assurance. This is especially true in the area of aircraft maintenance, which is an essential part of continuing airworthiness. This work investigates what impact English language proficiency, the intersection of the military and Thai cultures, and the lack of initial and continuing human factors training have on the work performance of maintenance personnel in the Royal Thai Air Force (RTAF). A quantitative research method based on a cross-sectional survey was used to gather data about these three key aspects of “people” in a military airworthiness environment. Thirty questions were developed addressing the crucial topics of English language proficiency, the impact of culture, and human factors training. The officers and non-commissioned officers (NCOs) who work for the Aeronautical Engineering Divisions of the RTAF comprised the survey participants. The survey data were analysed to test various hypotheses using the t-test method. English competency in the RTAF is very important, since all of the service manuals for Thai military aircraft are written in English. Without such competency, it is difficult for maintenance staff to perform tasks and correctly interpret the relevant maintenance manual instructions; any misunderstandings could lead to accidents. The survey results showed that the officers appreciated the importance of this more than the NCOs, who are the people actually doing the hands-on maintenance work. Military culture focuses on the success of a given mission and leverages the power distance between the lower and higher ranks. In Thai society, a power distance also exists between younger and older citizens. In the RTAF, such a combination tends to inhibit a just reporting culture and hence hinders safety.
The survey results confirmed this, showing that the older people and higher ranks involved with RTAF aircraft maintenance believe that the workplace has a positive safety culture and climate, whereas the younger people and lower ranks think the opposite. The final area of consideration concerned human factors training and non-technical skills training. The survey revealed that those participants who had previously attended such courses appreciated its value and were aware of its benefits in daily life. However, currently there is no regulation in the RTAF to mandate recurrent training to maintain such knowledge and skills. The findings from this work suggest that the people involved in assuring the continuing airworthiness of the RTAF would benefit from: (i) more rigorous requirements and standards in the recruitment, initial training and continuation training regarding English competence; (ii) the development of a strong safety culture that exploits the uniqueness of both the military culture and the Thai culture; and (iii) providing more initial and recurrent training in human factors and non-technical skills.

Keywords: aircraft maintenance, continuing airworthiness, military culture, people, Royal Thai Air Force

Procedia PDF Downloads 112
150 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system, and it also helps improve performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular approaches in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested within this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known baseline approaches, such as Random Forest (RF) and Support Vector Machine (SVM).
Better accuracy results are obtained when LSTM layers are used in our schema. In terms of balance between the classes, better results are obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results for Italian than for Greek, taking into account that the linguistic features employed are language-independent.
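The pairwise input representation described above, i.e. the similarity of each MT output to the reference and to the other output, can be sketched with cosine similarities over sentence vectors. The embedding extraction and the multi-layer network itself are omitted; the vectors, function names and the trivial decision rule below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def pair_features(ref, smt, nmt):
    """Feature vector for one (reference, SMT output, NMT output) triple:
    cosine similarity of each MT output to the reference, plus the
    similarity between the two outputs, standing in for the word-embedding
    and string features fed to the neural classifier."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return np.array([cos(smt, ref), cos(nmt, ref), cos(smt, nmt)])

def prefer_nmt(feats):
    # Trivial baseline decision: pick the output closer to the reference.
    # The paper replaces this with a trained dense/CNN/LSTM network.
    return bool(feats[1] > feats[0])

# Toy sentence vectors (invented for illustration).
ref = np.array([1.0, 0.0, 0.5])
nmt = np.array([0.9, 0.1, 0.5])   # close to the reference
smt = np.array([0.1, 1.0, 0.0])   # far from the reference
print(prefer_nmt(pair_features(ref, smt, nmt)))
```

A learned classifier can outperform this threshold rule precisely because it also exploits the third feature, the similarity between the two outputs, and the richer string-level features.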

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

Procedia PDF Downloads 107