Search results for: André Python
67 Social Media Data Analysis for Personality Modelling and Learning Styles Prediction Using Educational Data Mining
Authors: Srushti Patil, Preethi Baligar, Gopalkrishna Joshi, Gururaj N. Bhadri
Abstract:
In designing learning environments, instructional strategies can be tailored to suit the learning style of an individual to ensure effective learning. In this study, the information shared on social media such as Facebook is used to predict the learning style of a learner. Previous research studies have shown that Facebook data can be used to predict user personality. Users with a particular personality exhibit an inherent pattern in their digital footprint on Facebook. The proposed work aims to correlate the users' personality, predicted from Facebook data, with their learning styles, predicted through questionnaires. For Millennial learners, Facebook has become a primary means of information sharing and interaction with peers. Thus, it can serve as a rich bed for research and direct the design of learning environments. The authors conducted this study in an undergraduate freshman engineering course. Data from 320 freshman Facebook users were collected. The same users also participated in the learning style and personality prediction survey. The Kolb Learning Style questionnaire and the Big 5 Personality Inventory were adopted for the survey. The users agreed to participate in this research and signed individual consent forms. A specific page was created on Facebook to collect user data such as personal details, status updates, comments, demographic characteristics, and egocentric network parameters. These data were captured by an application written in Python. The data captured from Facebook were subjected to a text analysis process using the Linguistic Inquiry and Word Count dictionary. Analysis of the data collected from the questionnaires reveals each student's personality and learning style. The results obtained from the analysis of the Facebook, learning style, and personality data were then fed into an automatic classifier trained using data mining techniques such as rule-based classifiers and decision trees. This helps to predict user personality and learning styles by analysing common patterns. The rule-based classifiers applied for text analysis help to categorize Facebook data as positive, negative, or neutral. Two models were trained in total: one to predict personality from Facebook data and another to predict learning styles from the personalities. The results show that the classifier model has high accuracy, which makes the proposed method reliable for predicting user personality and learning styles.
Keywords: educational data mining, Facebook, learning styles, personality traits
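As an illustration of the two-stage classification described in the abstract, the sketch below trains one decision tree to map LIWC-style text features to a personality label and a second tree to map personality to a Kolb learning style. The feature names, labels, and synthetic data are placeholders, not the study's actual dataset or code.

```python
# Illustrative sketch (synthetic data): decision trees for the two models described,
# Facebook text features -> personality, and personality -> learning style.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 320  # number of Facebook users in the study

# Hypothetical LIWC-style features per user (word-category proportions).
X_text = pd.DataFrame({
    "positive_emotion": rng.uniform(0, 0.1, n),
    "negative_emotion": rng.uniform(0, 0.1, n),
    "social_words": rng.uniform(0, 0.2, n),
    "cognitive_words": rng.uniform(0, 0.2, n),
})
personality = rng.choice(["openness", "conscientiousness", "extraversion",
                          "agreeableness", "neuroticism"], n)    # Big Five label
learning_style = rng.choice(["diverging", "assimilating",
                             "converging", "accommodating"], n)  # Kolb style

# Model 1: Facebook text features -> personality.
X_tr, X_te, y_tr, y_te = train_test_split(X_text, personality, random_state=0)
m1 = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("personality accuracy:", accuracy_score(y_te, m1.predict(X_te)))

# Model 2: personality -> learning style (personality one-hot encoded as input).
X_pers = pd.get_dummies(pd.Series(personality))
m2 = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_pers, learning_style)
print("learning-style accuracy:", accuracy_score(learning_style, m2.predict(X_pers)))
```

With random synthetic labels the accuracies are near chance; the point is only to show how the two classifiers chain together.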
Procedia PDF Downloads 232
66 Curriculum Transformation: Multidisciplinary Perspectives on ‘Decolonisation’ and ‘Africanisation’ of the Curriculum in South Africa’s Higher Education
Authors: Andre Bechuke
Abstract:
The years 2015-2017 witnessed a huge campaign, and in some instances violent protests, in South Africa by students and some groups of academics advocating the decolonisation of the curriculum of universities. These protests have created high expectations for universities to teach a curriculum relevant to the country and the continent, while also enabling South Africa to participate in the globalised world. To realise this purpose, most universities are currently taking steps to transform and decolonise their curriculum. However, the transformation process is challenged and delayed by the lack of a collective understanding of the concepts ‘decolonisation’ and ‘Africanisation’ that should guide its application. Even more challenging is the lack of a contextual understanding of these concepts across different university disciplines. Against this background, and underpinned by a qualitative research paradigm, the perspectives on these concepts as applied by different university disciplines were examined in order to understand and establish their implementation in the curriculum transformation agenda. Data were collected by reviewing the teaching and learning plans of 8 faculties of an institution of higher learning in South Africa and analysed through content and textual analysis. The findings revealed varied understanding and use of these concepts in the transformation of the curriculum across faculties. Decolonisation, according to the faculties of Law and Humanities, is perceived as the eradication of the Eurocentric positioning in curriculum content and the constitutive rules and norms that control thinking. This is not done by ignoring other knowledge traditions, but it does call for an affirmation and validation of African views of the world and systems of thought, mixing them with current knowledge. For the Faculty of Natural and Agricultural Sciences, decolonisation is seen as making the content of the curriculum relevant to students, fulfilling the needs of industry and equipping students for job opportunities. This means using teaching strategies and methods that are inclusive of students from diverse cultures and structuring the learning experience in ways that are not alien to the cultures of the students. For the Health Sciences, decolonisation of the curriculum refers to the need for a shift in Western thinking towards being more sensitive to all cultural beliefs and thoughts. Collectively, decolonisation of education thus entails that a nation must become independent with regard to the acquisition of knowledge, skills, values, beliefs, and habits. Based on the findings, for universities to successfully transform their curriculum and integrate the concepts of decolonisation and Africanisation, there is a need to contextually determine the meaning of these concepts generally and narrow them down to what they should mean to specific disciplines. Universities should refrain from taking an umbrella approach to these concepts. Decolonisation should be seen as a means and not an end. A decolonised curriculum should equally be developed based on the finest knowledge, skills, values, beliefs, and habits around the world and not be limited to one country or continent.
Keywords: Africanisation, curriculum, transformation, decolonisation, multidisciplinary perspectives, South Africa’s higher education
Procedia PDF Downloads 165
65 The Relationship Between Military Expenditure and International Trade: A Selection of African Countries
Authors: Andre C Jordaan
Abstract:
The end of the Cold War and of the rivalry between superpowers has changed the nature of military build-up in many countries. A call from international institutions like the United Nations, the International Monetary Fund, and the World Bank to reduce the levels of military expenditure was the order of the day. However, this bid to cut military expenditure has not been forthcoming. Active armed conflicts occurred in at least 46 states in 2021, with 8 in the Americas, 9 in Asia and Oceania, 3 in Europe, 8 in the Middle East and North Africa, and 18 in sub-Saharan Africa. Global military expenditure in 2022 was estimated at US$2.2 trillion, representing 2.2 per cent of global gross domestic product. Particularly sharp rises in military spending have occurred in African countries and the Middle East. Global military expenditure currently follows two divergent trends: a declining trend in the West, caused mainly by austerity, efforts to control budget deficits, and the wrapping up of prolonged wars, and an increasing trend in other parts of the world on the back of security concerns, geopolitical ambitions, and some internal political factors. Conflict-related fatalities in sub-Saharan Africa alone increased by 19 per cent between 2020 and 2021. The interaction between military expenditure (read conflict) and international trade is the cause of much debate. Some argue that countries' fear of losing trade opportunities causes political decision makers to refrain from engaging in conflict when important trading partners are involved. However, three main arguments are always present when discussing the relationship between military expenditure or conflict and international trade: free trade could promote peaceful cooperation, it could trigger tension between trading blocs and partners, or trade could have no effect because conflict is based on issues that are more important. Military expenditure remains an important element of overall government expenditure in many African countries. On the other hand, numerous researchers perceive increased international trade to be one of the main factors promoting economic growth in these countries. The purpose of this paper is therefore to determine what effect, if any, exists between the level of military expenditure and international trade within a selection of 19 African countries. Applying an augmented gravity model to explore the relationship between military expenditure and international trade, evidence is found to confirm the existence of an inverse relationship between these two variables. The results seem to be in line with the Liberal school of thought, where trade is seen as an instrument of conflict prevention. Trade is therefore perceived as a symptom of peace and not a cause thereof. In general, conflict or rumours of conflict tend to reduce trade. If conflict did not impede trade, economic agents would be indifferent to risk. Many claim that trade brings peace; however, it seems that it is rather peace that brings trade. From the results, it appears that trade reduces the risk of conflict and that conflict reduces trade.
Keywords: African countries, conflict, international trade, military expenditure
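To make the gravity-model idea concrete, the sketch below estimates a log-linearised augmented gravity equation by OLS, with military expenditure added as an extra regressor. The specification, variable names, and synthetic data are illustrative assumptions, not the paper's actual model or dataset.

```python
# Illustrative sketch of an augmented gravity model (synthetic data): log bilateral
# trade regressed on log GDPs, log distance, and log military expenditure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500  # synthetic country-pair observations
df = pd.DataFrame({
    "gdp_i": rng.lognormal(10, 1, n),        # exporter GDP
    "gdp_j": rng.lognormal(10, 1, n),        # importer GDP
    "distance": rng.uniform(200, 8000, n),   # km between capitals
    "milex_i": rng.lognormal(6, 1, n),       # exporter military expenditure
})
# Synthetic trade flow with a negative military-expenditure elasticity built in.
df["trade"] = ((df.gdp_i * df.gdp_j) ** 0.8 / df.distance
               * df.milex_i ** -0.3 * rng.lognormal(0, 0.5, n))

model = smf.ols(
    "np.log(trade) ~ np.log(gdp_i) + np.log(gdp_j) + np.log(distance) + np.log(milex_i)",
    data=df,
).fit()
print(model.params)  # a negative coefficient on np.log(milex_i) would indicate an
                     # inverse relationship between military spending and trade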
Procedia PDF Downloads 67
64 Spatial Pattern and Predictors of Malaria in Ethiopia: Application of Auto Logistics Spatial Regression
Authors: Melkamu A. Zeru, Yamral M. Warkaw, Aweke A. Mitku, Muluwerk Ayele
Abstract:
Introduction: Malaria is a severe health threat in the world, mainly in Africa. It is a major cause of health problems, and the risk of morbidity and mortality associated with malaria cases is characterized by spatial variation across the country. This study aimed to investigate the spatial patterns and predictors of malaria distribution in Ethiopia. Methods: A weighted sample of 15,239 individuals with rapid diagnostic tests was obtained from the Central Statistical Agency and the Ethiopia Malaria Indicator Survey of 2015. Global Moran's I and Moran scatter plots were used to determine the distribution of malaria cases, whereas the local Moran's I statistic was used to identify exposed areas. In data manipulation, machine learning was used for variable reduction, and the statistical software R, Stata, and Python were used for data management and analysis. An auto logistic spatial binary regression model was used to investigate the predictors of malaria. Results: The final auto logistic regression model reported that male clients had a positive significant effect on malaria cases as compared to female clients [AOR=2.401, 95% CI: (2.125-2.713)]. The distribution of malaria across the regions was different. The highest incidence of malaria was found in Gambela [AOR=52.55, 95% CI: (40.54-68.12)], followed by Beneshangul [AOR=34.95, 95% CI: (27.159-44.963)]. Similarly, individuals in Amhara [AOR=0.243, 95% CI: (0.195-0.303)], Oromiya [AOR=0.197, 95% CI: (0.158-0.244)], Dire Dawa [AOR=0.064, 95% CI: (0.049-0.082)], Addis Ababa [AOR=0.057, 95% CI: (0.044-0.075)], Somali [AOR=0.077, 95% CI: (0.059-0.097)], SNNPR [AOR=0.329, 95% CI: (0.261-0.413)] and Harari [AOR=0.256, 95% CI: (0.201-0.325)] were less likely to have malaria as compared with Tigray. Furthermore, for a one-meter increase in altitude, the odds of a positive rapid diagnostic test (RDT) decrease by 1.6% [AOR=0.984, 95% CI: (0.984-0.984)]. The use of a shared toilet facility was found as a protective factor for malaria in Ethiopia [AOR=1.671, 95% CI: (1.504-1.854)]. The spatial autocorrelation variable changes the constant from AOR=0.471 for logistic regression to AOR=0.164 for auto logistic regression. Conclusions: This study found that the incidence of malaria in Ethiopia had a spatial pattern associated with socio-economic, demographic, and geographic risk factors. Spatial clustering of malaria cases occurred in all regions, and the risk of clustering was different across the regions. The risk of malaria was found to be higher for those who live in soil-floor houses as compared to those who live in cement- or ceramic-floor houses. Similarly, households with thatched, metal and thin, and other roof types have a higher risk of malaria than houses with ceramic-tile roofs. Moreover, using a protected anti-mosquito net reduced the risk of malaria incidence.
Keywords: malaria, Ethiopia, auto logistic, spatial model, spatial clustering
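The core of an auto logistic model is a standard logistic regression augmented with an autocovariate built from neighbouring outcomes. The sketch below shows that idea on synthetic data with statsmodels; the cluster coordinates, covariates, and neighbourhood rule are assumptions for illustration, not the survey data or the authors' exact specification.

```python
# Illustrative sketch of an auto logistic spatial regression: logistic model plus
# an autocovariate equal to the mean outcome among spatial neighbours (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
coords = rng.uniform(0, 10, (n, 2))             # cluster locations
altitude = rng.uniform(500, 3000, n)            # metres
male = rng.integers(0, 2, n)

# Binary spatial weights: neighbours within 1.5 distance units.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = ((d > 0) & (d < 1.5)).astype(float)

# Synthetic malaria status (positive RDT), then the autocovariate derived from it.
y = (rng.random(n) < 1 / (1 + np.exp(0.002 * (altitude - 1500)))).astype(float)
row_sums = np.clip(W.sum(axis=1), 1, None)
autocov = (W @ y) / row_sums                    # mean outcome among neighbours

altitude_km = altitude / 1000.0                 # rescale to help convergence
X = sm.add_constant(np.column_stack([male, altitude_km, autocov]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)  # the last coefficient captures residual spatial autocorrelation
```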
Procedia PDF Downloads 40
63 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses
Authors: André Jesus, Yanjie Zhu, Irwanda Laory
Abstract:
Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. Namely, the widespread application of numerical models (model-based approaches) is accompanied by a widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (modular Bayesian approach). The proposed methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, this formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. A comparison of its performance with responses measured at different points of the structure and the associated degrees of identifiability is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example, the algorithm performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process
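The sketch below illustrates only the surrogate/discrepancy idea behind the approach: a Gaussian process emulating a numerical model, plus a second Gaussian process fitted to the model/experiment residuals. It uses scikit-learn and synthetic one-dimensional data; it is not the four-stage modular Bayesian calibration used in the paper.

```python
# Illustrative sketch (synthetic data): GP surrogate of a numerical model plus a
# GP discrepancy function for the model/experiment mismatch.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

def numerical_model(x):
    """Stand-in for an expensive FEM run (response vs. a calibration input)."""
    return np.sin(x)

# Surrogate of the numerical model, trained on a handful of model runs.
X_runs = np.linspace(0, 5, 12).reshape(-1, 1)
gp_model = GaussianProcessRegressor(RBF(length_scale=1.0)).fit(
    X_runs, numerical_model(X_runs.ravel()))

# "Experimental" data = true process (model + systematic bias) + observation error.
X_obs = rng.uniform(0, 5, 20).reshape(-1, 1)
y_obs = numerical_model(X_obs.ravel()) + 0.3 * X_obs.ravel() + rng.normal(0, 0.05, 20)

# Discrepancy GP fitted to the residuals between experiment and surrogate.
residuals = y_obs - gp_model.predict(X_obs)
gp_disc = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(X_obs, residuals)

x_new = np.array([[2.5]])
prediction = gp_model.predict(x_new) + gp_disc.predict(x_new)
print("bias-corrected prediction:", prediction)
```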
Procedia PDF Downloads 330
62 Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research mainly concerns optimizing UI test automation for large-scale web applications. The test target application is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionalities. This study focuses on user interface automation testing for the web application. The quality assurance team must execute many manual user interface test cases in the development process to confirm that there are no regression bugs. The team automated 346 test cases; the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, and the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language to write test code leads to an inefficient flow when introducing scalability into a traditional test automation environment; in order to introduce scalability into test automation efficiently, a scripting language was adopted. The scalability is mainly implemented with AWS serverless technology and its elastic container service. The definition of scalability here is the ability to automatically set up computers for test automation and increase or decrease the number of computers running those tests. This means the scalable mechanism can help test cases run in parallel, so the test execution time is dramatically decreased. Introducing scalable test automation is also about more than just reducing test execution time: there is a possibility that some challenging bugs, such as race conditions, are detected, since test cases can be executed at the same time. If API and unit tests are implemented, test strategies can be adopted more efficiently for this scalability testing. However, in web applications, as a practical matter, API and unit testing cannot cover 100% of functional testing since they do not reach front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system. It confirmed the optimization of the test case execution time and the detection of a challenging bug. This study first describes the detailed architecture of the scalable test automation environment and then describes the actual reduction in execution time and an example of a challenging issue that was detected.
Keywords: AWS, elastic container service, scalability, serverless, UI automation test
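As a hedged illustration of the setup described (Selenium tests in Python, fanned out across containers), the sketch below shows a minimal headless UI test and a simple sharding scheme where each container runs a subset of the test cases. The URL, element locators, and environment variables are hypothetical placeholders, not HHAexchange's actual application or infrastructure.

```python
# Illustrative sketch: a headless Selenium UI test plus environment-variable sharding
# so that many containers (e.g., one ECS task per shard) split the test suite.
import os
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

def run_login_test(base_url: str) -> bool:
    options = Options()
    options.add_argument("--headless=new")   # containers have no display
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(base_url + "/login")      # hypothetical page and locators
        driver.find_element(By.ID, "username").send_keys("qa_user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        return "Dashboard" in driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    # Each container picks its shard of test cases from environment variables,
    # so adding containers shortens wall-clock execution time.
    shard = int(os.environ.get("TEST_SHARD", "0"))
    total = int(os.environ.get("TEST_SHARDS", "1"))
    tests = [f"test_case_{i}" for i in range(346)]  # 346 automated cases
    my_tests = tests[shard::total]
    print(f"shard {shard}/{total} would run {len(my_tests)} test cases")
```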
Procedia PDF Downloads 110
61 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs
Authors: André Augusto Ceballos Melo
Abstract:
This work concerns design to facilitate the recognition of congruent prosthetic movements: context-to-motion translations guided by images, verbal prompts, the user's nonverbal communication (such as facial expressions, gestures, and paralinguistics), scene context, and object recognition. These cues contribute to this process, though the approach can also be applied to other tasks, such as walking, treating prosthetic limbs as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach that is designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can help to improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can help to improve the naturalness and smoothness of prosthetic limb movements, which can make them more comfortable and easier to use. Second, it can help to improve the accuracy and precision of prosthetic limb movements, which can be particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback. This means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary to first collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time. The model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Hand gestures and body language influence communication and social interaction, offering a possibility for users to maximize their quality of life, social interaction, and gesture communication.
Keywords: stable diffusion, neural interface, smart prosthetic, augmenting
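As a minimal sketch of the training pipeline the abstract describes (collect paired sensory observations and control inputs, fit a model, then map new readings to actuator commands online), the code below uses synthetic data and a simple multilayer-perceptron regressor as a stand-in for the actual learning method; the feature and actuator counts are arbitrary assumptions.

```python
# Illustrative sketch (synthetic data): learn a sensory-observation -> control-signal
# mapping from recorded examples, then use it online for a prosthetic limb.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Recorded demonstrations: sensory features (e.g., joint angles, pressure,
# gesture/scene cues) paired with the control inputs that produced smooth movement.
n, n_features, n_actuators = 2000, 16, 5
observations = rng.normal(size=(n, n_features))
true_policy = rng.normal(size=(n_features, n_actuators))
controls = observations @ true_policy + rng.normal(0, 0.05, (n, n_actuators))

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
policy.fit(observations, controls)

# Online use: each new sensory reading is mapped to actuator commands.
new_reading = rng.normal(size=(1, n_features))
command = policy.predict(new_reading)
print("actuator commands:", np.round(command, 3))
```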
Procedia PDF Downloads 106
60 Wildlife Habitat Corridor Mapping in Urban Environments: A GIS-Based Approach Using Preliminary Category Weightings
Authors: Stefan Peters, Phillip Roetman
Abstract:
The global loss of biodiversity is threatening the benefits nature provides to human populations, has become a more pressing issue than climate change, and requires immediate attention. While there have been successful global agreements for environmental protection, such as the Montreal Protocol, these are rare, and we cannot rely on them solely. Thus, it is crucial to take national and local actions to support biodiversity. Australia is one of the 17 countries in the world with a high level of biodiversity, and its cities are vital habitats for endangered species, with more of them found in urban areas than in non-urban ones. However, the protection of biodiversity in metropolitan Adelaide has been inadequate, with over 130 species disappearing since European colonization in 1836. In this research project, we conceptualized, developed, and implemented a framework for wildlife Habitat Hotspot and Habitat Corridor modelling in an urban context using geographic data and GIS modelling and analysis. We used detailed topographic and other geographic data provided by a local council, including spatial and attributive properties of trees, parcels, water features, vegetated areas, roads, verges, traffic, and census data. Weighted factors considered in our raster-based Habitat Hotspot model include parcel size, parcel shape, population density, canopy cover, habitat quality, and proximity to habitats and water features. Weighted factors considered in our raster-based Habitat Corridor model include habitat potential (resulting from the Habitat Hotspot model), verge size, road hierarchy, road widths, human density, and the presence of remnant indigenous vegetation species. We developed a GIS model, using Python scripting and ArcGIS Pro ModelBuilder, to establish an automated, reproducible, and adjustable geoprocessing workflow adaptable to any study area of interest. Our habitat hotspot and corridor modelling framework allows existing habitat hotspots and wildlife habitat corridors to be determined and mapped. Our research has been applied to the case study of Burnside, a local council in Adelaide, Australia, which encompasses an area of 30 km². We applied end-user expertise-based category weightings to refine our models and optimize the use of our habitat map outputs towards informing local strategic decision-making.
Keywords: biodiversity, GIS modeling, habitat hotspot, wildlife corridor
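The weighted-overlay step behind a raster Habitat Hotspot surface can be sketched as follows: normalise each factor raster to 0-1 and combine them with category weights. The arrays and weight values below are placeholders, not the study's data or its calibrated weightings.

```python
# Illustrative sketch of a weighted raster overlay for a habitat hotspot surface
# (synthetic rasters, hypothetical preliminary category weightings).
import numpy as np

rng = np.random.default_rng(5)
shape = (200, 200)                        # raster grid for a study area

factors = {
    "parcel_size":        rng.random(shape),
    "canopy_cover":       rng.random(shape),
    "habitat_quality":    rng.random(shape),
    "water_proximity":    rng.random(shape),
    "population_density": rng.random(shape),
}
weights = {
    "parcel_size": 0.15, "canopy_cover": 0.30, "habitat_quality": 0.30,
    "water_proximity": 0.15, "population_density": -0.10,  # dense areas score lower
}

def normalise(raster):
    return (raster - raster.min()) / (raster.max() - raster.min())

hotspot = sum(w * normalise(factors[name]) for name, w in weights.items())
hotspot = normalise(hotspot)              # final 0-1 habitat-potential surface
print("top 5% hotspot threshold:", np.quantile(hotspot, 0.95))
```

In practice the same combination would be done on georeferenced rasters inside the ArcGIS Pro workflow rather than on plain NumPy arrays.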
Procedia PDF Downloads 120
59 Good Functional Outcome after Late Surgical Treatment for Traumatic Rotator Cuff Tear, a Retrospective Cohort Study
Authors: Soheila Zhaeentan, Anders Von Heijne, Elisabet Hagert, André Stark, Björn Salomonsson
Abstract:
Recommended treatment for traumatic rotator cuff tear (TRCT) is surgery within a few weeks after injury if the diagnosis is made early, especially if a functional impairment of the shoulder exists. This may lead to the assumption that a poor outcome can be expected from delayed surgical treatment, when the patient is diagnosed at a later stage. The aim of this study was to investigate whether surgical repair later than three months after injury may result in successful outcomes and patient satisfaction. There is evidence in the literature that good results of treatment can be expected up to three months after the injury, but little is known about later treatment with cuff repair. 73 patients (75 shoulders), 58 males/17 females, mean age 59 (range 34-72), who had undergone surgical intervention for TRCT between January 1999 and December 2011 at our clinic, were included in this study. Patients were assessed by MRI investigation, clinical examination, the Western Ontario Rotator Cuff index (WORC), the Oxford Shoulder Score, the Constant-Murley Score, EQ-5D, and patient subjective satisfaction at follow-up. The patients treated surgically within three months (<12 weeks) after injury (39 cases) were compared with patients treated more than three months (≥12 weeks) after injury (36 cases). WORC was used as the primary outcome measure and the other variables as secondary. A senior consultant radiologist, blinded to patient category and clinical outcome, evaluated all MRI images. Rotator cuff integrity, presence of arthritis, fatty degeneration, and muscle atrophy were evaluated in all cases. The average follow-up time was 56 months (range 14-149), and the average time from injury to repair was 16 weeks (range 3-104). No statistically significant differences were found for any of the assessed parameters or scores between the two groups. The mean WORC score was 77 for both groups (early group, range 25-100, and late group, range 27-100) (p=0.86); Constant-Murley Score (p=0.91), Oxford Shoulder Score (p=0.79), EQ-5D index (p=0.86). Re-tear frequency was 24% for both groups, and the patients with re-tear reported less satisfaction with outcome. Discussion and conclusion: This study shows that surgical repair of TRCT performed later than three months after injury may result in good functional outcomes and patient satisfaction. However, this does not motivate an intentional delay in surgery when there is an indication for surgical repair, as that delay may adversely affect the possibility of performing a repair. Our results show that surgeons may safely consider surgical repair even if a delay in diagnosis has occurred. A retrospective cohort study of 75 shoulders shows good functional results after traumatic rotator cuff tear (TRCT) treated surgically up to one year after the injury.
Keywords: traumatic rotator cuff injury, time to surgery, surgical outcome, retrospective cohort study
Procedia PDF Downloads 227
58 Investigating the Aerosol Load of Eastern Mediterranean Basin with Sentinel-5p Satellite
Authors: Deniz Yurtoğlu
Abstract:
Aerosols directly affect the radiative balance of the Earth by absorbing and/or scattering the sun rays reaching the atmosphere and indirectly affect the balance by acting as nuclei in cloud formation. The composition and the physical and chemical properties of aerosols vary depending on their sources and the time spent in the atmosphere. The Eastern Mediterranean Basin has a high aerosol load that is formed from different sources, such as anthropogenic activities, desert dust outbreaks, and the spray of sea salt, and the area is subject to atmospheric transport from other locations on the Earth. This region, which includes the deserts of Africa, the Middle East, and the Mediterranean Sea, is one of the areas most affected by climate change due to its location and the chemistry of its atmosphere. This study aims to investigate the spatiotemporal variation of the aerosol load in the Eastern Mediterranean Basin between the years 2018-2022 with the help of a new pioneering satellite of ESA (European Space Agency), Sentinel-5P. TROPOMI (the TROPOspheric Monitoring Instrument), travelling on this low-Earth-orbiting satellite, is a UV (ultraviolet)-sensing spectrometer with a resolution of 5.5 km x 3.5 km, which can make measurements even in a cloud-covered atmosphere. Using Absorbing Aerosol Index data produced by this spectrometer and scripts written in Python that transform these data into images, it was seen that the majority of the aerosol load in the Eastern Mediterranean Basin is sourced from desert dust and anthropogenic activities. After retrieving the daily data, which were separated from the NaN values, seasonal analyses matched the expected aerosol variations, which are high in warm seasons and lower in cold seasons. Monthly analyses showed that over four years, there was an increase in the Absorbing Aerosol Index in spring and winter by 92.27% (2019-2021) and 39.81% (2019-2022), respectively. On the other hand, in the summer and autumn seasons, decreases of 20.99% (2018-2021) and 0.94% (2018-2021), respectively, were observed. The overall variation of the mean absorbing aerosol index from TROPOMI between April 2018 and April 2022 reflects a decrease of 115.87% in the annual mean, from 0.228 to -0.036. However, when the data are analyzed using the annual mean values of the years that have data from January to December, meaning from 2019 to 2021, there was an increase of 57.82% (0.108-0.171). This result can be interpreted as the effect of climate change on the aerosol load and also, more specifically, the effect of the forest fires that happened in the summer months of 2021.
Keywords: aerosols, Eastern Mediterranean Basin, Sentinel-5P, TROPOMI, aerosol index, remote sensing
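The post-processing described (dropping NaN retrievals, then computing seasonal and annual means and percentage changes) can be sketched in pandas as below. The daily series is synthetic, not actual TROPOMI data, and the column handling is an assumption about how such a pipeline might be organised.

```python
# Illustrative sketch: clean a daily absorbing aerosol index (AAI) series of NaNs,
# then compute seasonal and annual means and a percentage change (synthetic data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
dates = pd.date_range("2018-04-01", "2022-04-30", freq="D")
aai = pd.Series(rng.normal(0.1, 0.3, len(dates)), index=dates)
aai[rng.random(len(dates)) < 0.1] = np.nan      # cloud/quality gaps

daily = aai.dropna()                            # separate out NaN values

season = daily.index.month.map(
    {12: "winter", 1: "winter", 2: "winter", 3: "spring", 4: "spring", 5: "spring",
     6: "summer", 7: "summer", 8: "summer", 9: "autumn", 10: "autumn", 11: "autumn"})
seasonal_means = daily.groupby([daily.index.year, season]).mean()

annual_means = daily.groupby(daily.index.year).mean()
change_2019_2021 = 100 * (annual_means[2021] - annual_means[2019]) / abs(annual_means[2019])
print(seasonal_means.head())
print(f"2019-2021 change in annual mean AAI: {change_2019_2021:.2f}%")
```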
Procedia PDF Downloads 71
57 Examining the Drivers of Engagement in Social Media Brand Communities
Authors: Rania S. Hussein
Abstract:
This research mainly focuses on examining engagement in social media brand communities. Engagement in social media has become a main focus in the literature, affirming that the role of social media in our daily lives is growing (Akman and Mishra, 2017; Prado-Gascó et al., 2017). Social media has also become a key medium for brand communication and brand relationship building (Frimpong and McLean, 2018; Dimitriu and Guesalaga, 2017). Engagement on social media has become a main focus of many researchers who have tried to understand this concept further and draw a link between engagement and various social media activities (Cvijikj and Michahelles, 2013; Andre, 2015; Wang et al., 2015). According to Felix et al. (2017), the internet and social media have provided better digital resources to improve brand loyalty and customer interactions, thus leading to social media engagement within brand communities. The aim of this research is to highlight the importance of social media and why it is important to maintain engagement within social media. While the term ‘engagement’ is widely used in scholarly literature, there is no common consensus about what the term exactly entails, according to Kidd (2011). On one hand, it is seen as something that includes factors such as participation, activation, empowerment, devotion, trust, and productivity (Zhang and Benyoucef, 2016). Other scholars hold different viewpoints. For example, Lim et al. (2015) chose to break down engagement into three types: operational engagement, emotional engagement, and relational engagement. Chandler and Lusch (2015) further studied engagement as a means to measure commitment to a brand. Fernandes and Remelhe (2016) had a more technical view, measuring engagement through comments, following, subscribing, sharing, enjoying, writing, etc., in the social media context. Customer engagement has become a research focus for understanding how consumer relationships are developed, retained, and improved within a digital context. Based on previous literature, it is evident that many customer engagement related studies are limited to the interaction between firms and consumers on social media. There is a clear gap in the literature regarding consumer-to-consumer interaction and user-generated content and its significance. While some researchers, such as Alversia et al. (2016), touched upon the importance of customer-based engagement, a gap still remains: there is no consistent and well-tested method for defining the factors that affect consumer interaction. Moreover, few scholarly research papers (e.g., Case, 2019; Riley, 2020; Habibi, 2014) have been provided to help businesses understand their customers' interaction habits and the best ways to develop customer loyalty. Additionally, the majority of research on brand pages has concentrated on the drivers of consumer engagement, with just a few studies (e.g., Lamberton, 2016; Poorrezaei, 2016; Jayasingh, 2019) looking into the implications. This study focuses on understanding the concept of engagement and its importance, specifically engagement within social media brand communities. It examines drivers as well as consequences of engagement, including brand knowledge, brand trust, entertainment, and brand page interactivity. Brand engagement is also expected to affect brand loyalty and word of mouth.
Keywords: engagement, social media, brand communities, drivers
Procedia PDF Downloads 168
56 Inhibition of Influenza Replication through the Restrictive Factors Modulation by CCR5 and CXCR4 Receptor Ligands
Authors: Thauane Silva, Gabrielle do Vale, Andre Ferreira, Marilda Siqueira, Thiago Moreno L. Souza, Milene D. Miranda
Abstract:
The exposure of A(H1N1)pdm09-infected epithelial cells (HeLa) to HIV-1 viral particles, or to its gp120, enhanced interferon-induced transmembrane protein 3 (IFITM3) content, a viral restriction factor (RF), resulting in a decrease in influenza replication. gp120 binds to CCR5 (R5) or CXCR4 (X4) cell receptors during HIV-1 infection. It is therefore possible that the endogenous ligands of these receptors also modulate the expression of IFITM3 and other cellular factors that restrict influenza virus replication. Thus, the aim of this study is to analyze the role of the cellular receptors R5 and X4 in modulating RFs in order to inhibit the replication of the influenza virus. A549 cells were treated with 2x the effective dose (ED50) of the endogenous R5 or X4 receptor agonists CCL3 (20 ng/mL), CCL4 (10 ng/mL), CCL5 (10 ng/mL) and CXCL12 (100 ng/mL), or the exogenous agonists gp120 Bal-R5, gp120 IIIB-X4 and their mutants (5 µg/mL). Interferon α (10 ng/mL) and oseltamivir (60 nM) were used as controls. At 24 h post agonist exposure, the cells were infected with influenza A(H3N2) virus at an MOI (multiplicity of infection) of 2 for 1 h. Then, 24 h post infection, the supernatant was harvested, and the viral titre was evaluated by qRT-PCR. To evaluate IFITM3 and SAM and HD domain containing deoxynucleoside triphosphate triphosphohydrolase 1 (SAMHD1) protein levels, A549 cells were exposed to agonists for 24 h, and the monolayer was lysed with Laemmli buffer for the western blot (WB) assay or fixed for the indirect immunofluorescence (IFI) assay. In addition, we analyzed the modulation of other RFs in A549 cells 24 h after agonist exposure using a customized RT² Profiler Polymerase Chain Reaction Array. We also performed a functional assay in which A549 cells with SAMHD1 knocked down by small interfering RNA (siRNA) were infected with A(H3N2). In addition, the cells were treated with guanosine to assess the regulatory role of dNTPs by SAMHD1. We found that R5 and X4 agonists inhibited influenza replication by 54 ± 9%. We observed a four-fold increase in SAMHD1 transcripts in the RF mRNA quantification panel. At 24 h post agonist exposure, we did not observe an increase in IFITM3 protein levels through the WB or IFI assays, but we observed an up to three-fold upregulation of the protein content of SAMHD1 in A549 cells exposed to agonists. Besides this, influenza replication was enhanced by 20% in cell cultures in which SAMHD1 was knocked down. Guanosine treatment of cells exposed to R5 ligands further inhibited influenza virus replication, suggesting that the inhibitory mechanism may involve the activation of the SAMHD1 deoxynucleotide triphosphohydrolase activity. Thus, our data show for the first time a direct relationship between SAMHD1 and the inhibition of influenza replication and provide perspectives for new studies on signaling modulation through cellular receptors to induce proteins of great importance in the control of infections relevant to public health.
Keywords: chemokine receptors, gp120, influenza, virus restriction factors
Procedia PDF Downloads 145
55 On-Ice Force-Velocity Modeling Technical Considerations
Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming -Chang Tsai, Marc Klimstra
Abstract:
Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward skating performance assessment. While preliminary data have been collected on ice, distance constraints of the on-ice test restrict the ability of the athletes to reach their maximal velocity, which limits the ability of the model to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice-hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials. Following the warm-up, participants completed three 40 m overground sprint trials. For each trial (on-ice and overground), a radar device (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modelled using a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine whether on-ice trials were achieving a maximum velocity (plateau), minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (P<0.001) between overground and on-ice minimum accelerations were observed. It was found that on-ice trials consistently reported higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. Therefore, these findings suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques, where maximal velocity is not required for a complete profile, are needed.
Keywords: ice-hockey, sprint, skating, power
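The modeling step described can be illustrated by fitting a mono-exponential velocity-time curve to the radar trace and reading off the modeled acceleration at the end of the sprint. The sketch below uses SciPy and synthetic data; the parameter values are placeholders, not the study's measurements.

```python
# Illustrative sketch: fit v(t) = v_max * (1 - exp(-t / tau)) to a radar velocity
# trace, then evaluate the modeled acceleration at the end of the sprint (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, v_max, tau):
    return v_max * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(7)
t = np.linspace(0, 6, 300)                           # ~6 s sprint
v_true = mono_exp(t, 8.5, 1.3)                       # m/s, synthetic "radar" trace
v_meas = v_true + rng.normal(0, 0.15, t.size)

(v_max, tau), _ = curve_fit(mono_exp, t, v_meas, p0=(8.0, 1.0))

# Modeled acceleration a(t) = (v_max / tau) * exp(-t / tau); its value at the final
# time step indicates whether a velocity plateau was reached within the sprint distance.
a_end = (v_max / tau) * np.exp(-t[-1] / tau)
print(f"v_max = {v_max:.2f} m/s, tau = {tau:.2f} s, final acceleration = {a_end:.3f} m/s^2")
```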
Procedia PDF Downloads 105
54 The Influence of the State on the Internal Governance of Universities: A Comparative Study of Quebec (Canada) and Western Systems
Authors: Alexandre Beaupré-Lavallée, Pier-André Bouchard St-Amant, Nathalie Beaulac
Abstract:
The question of the internal governance of universities is a political and scientific debate in the province of Quebec (Canada). Governments have called or set up inquiries on the subject on three separate occasions since the complete overhaul of the educational system in the 1960s: the Parent Commission (1967), the Angers Commission (1979), and the Summit on Higher Education (2013). All three produced reports that highlight the constant tug-of-war for authority and legitimacy within universities. Past and current research covering Quebec universities has studied several aspects of internal governance: the structure as a whole or only some parts of it, the importance of certain key aspects such as collegiality or strategic planning, or of stakeholders such as students or administrators. External governance has also been studied, though, as with internal governance, research so far has only covered well-delineated topics like financing policies or overall impacts from wider societal changes such as New Public Management. The latter, NPM, is often brought up as a factor that has influenced overall State policies, like "steering-at-a-distance", or internal shifts towards "managerialism". Yet, to the authors' knowledge, there is no study that specifically maps how the Quebec State formally influences internal governance. In addition, most studies about the Quebec university system are not comparative in nature. This paper presents a portion of the results produced by a 2022-2023 study that aims at filling these last two gaps in knowledge. Building on existing governmental, institutional, and scientific papers, we documented the legal and regulatory framework of the Quebec university system and of twenty-one other university systems in North America and Europe (2 in Canada, 2 in the USA, 16 in Europe, with the addition of the European Union as a distinct case). This allowed us to map the presence (or absence) of mandatory structures of governance enforced by States, as well as their composition. Then, using Clark's "triangle of coordination", we analyzed each system to assess the relative influences of the market, the State, and the collegium upon the governance model put in place. Finally, we compared all 21 non-Quebec systems to characterize the province's policies from an internal perspective. Preliminary findings are twofold. First, when all systems are placed on a continuum ranging from "no State interference in internal governance" to "State-run universities", Quebec comes in the middle of the pack, albeit with a slight lean towards institutional freedom. When it comes to overall governance (like Boards and Senates), the dual nature of the Quebec system, with its public university and its co-opted yet historically private (or ecclesiastical) institutions, in fact mimics the duality of all university systems. Second, however, is the sheer abundance of legal and regulatory mandates from the State that, while not expressly addressing internal governance, seem to require de facto modifications of internal governance structures and dynamics to ensure institutional conformity with said mandates. This study is only a fraction of the research that is needed to better understand State-university interactions regarding governance. We hope it will set the stage for future studies.
Keywords: internal governance, legislation, Quebec, universities
Procedia PDF Downloads 86
53 Destructive and Nondestructive Characterization of Advanced High Strength Steels DP1000/1200
Authors: Carla M. Machado, André A. Silva, Armando Bastos, Telmo G. Santos, J. Pamies Teixeira
Abstract:
Advanced high-strength steels (AHSS) are increasingly being used in automotive components. The use of AHSS sheets plays an important role in reducing weight as well as increasing the impact resistance of vehicle components. However, the large-scale use of these sheets is made more difficult by limitations during the forming process. Such limitations are due to the elastically driven change of shape of a metal sheet during unloading after forming, known as the springback effect. As the magnitude of the springback tends to increase with the strength of the material, it is among the most worrisome problems in the use of AHSS. The prediction of strain hardening, especially under non-proportional loading conditions, is very limited due to the lack of constitutive models and mainly due to very limited experimental tests. It is very clear from the literature that, in experimental terms, there is not much work evaluating deformation behavior under real conditions, which implies a very limited and scarce development of mathematical models for these conditions. The Bauschinger effect is also fundamental to the difference between the kinematic and isotropic hardening models used to predict springback in sheet metal forming. It is of major importance to deepen the phenomenological knowledge of the mechanical and microstructural behavior of the materials in order to be able to reproduce, with high fidelity, the deformation behavior of the materials by means of computational simulation. For this, a multi-phenomenological analysis and characterization are necessary to understand the various aspects involved in plastic deformation, namely the stress-strain relations and also the variations of electrical conductivity and magnetic permeability associated with the metallurgical changes due to plastic deformation. Aiming at a complete mechanical-microstructural characterization, uniaxial tensile tests involving successive cycles of loading and unloading were performed, as well as biaxial tests such as the Erichsen test. In addition, nondestructive evaluation comprising eddy current tests to verify microstructural changes due to plastic deformation and ultrasonic tests to evaluate local variations in thickness was carried out. The material parameters for the stable yield function and the monotonic strain hardening were obtained using uniaxial tension tests in different material directions and balanced biaxial tests. Both the decrease of the modulus of elasticity and the Bauschinger effect were determined through the load-unload tensile tests. By means of the eddy current tests, it was possible to verify changes in the magnetic permeability of the material according to the different plastically deformed areas. The ultrasonic tests were an important aid in quantifying the local plastic extension. With these data, it is possible to parameterize the different kinematic hardening models to better approximate the simulation results to the experimental results, which is fundamental for the springback prediction of the stamped parts.
Keywords: advanced high strength steel, Bauschinger effect, sheet metal forming, springback
Procedia PDF Downloads 229
52 Comparing Remote Sensing and in Situ Analyses of Test Wheat Plants as Means for Optimizing Data Collection in Precision Agriculture
Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz
Abstract:
Remote sensing has potential applications in assessing and monitoring plants' biophysical properties using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurements. The current study assessed the potential applications of open-data-source satellite images (Sentinel-2 and Landsat 9) in estimating the biophysical properties of the wheat crop on a study farm located in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period December 2021 to April 2022. An Unmanned Aerial Vehicle (UAV) was used to capture the spectral response of plant leaves. In addition, a SpectraVue 710s Leaf Spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength ranges of the remote sensing tools used. Soil samples were collected at eight different locations within the farm plot. The different physicochemical properties of the soil (pH, texture, N, P₂O₅, and K₂O) were analyzed in the laboratory. The finer-resolution images from the UAV and the leaf spectrometer were used to validate the satellite images. The performance of the different sensors was compared based on the measured leaf spectral response and the vegetation indices extracted at the five sampling points. A scatter plot with the coefficient of determination (R²) and Root Mean Square Error (RMSE), and a correlation (r) matrix prepared using the corr and heatmap Python functions, were used to compare the performance of the Sentinel-2 and Landsat 9 VIs against the drone and the SpectraVue 710s spectrophotometer. The soil analysis revealed that the study farm plot is slightly alkaline (pH 8.4 to 8.52). The soil texture of the study farm is dominantly clay and clay loam. The vegetation indices (VIs) increased linearly with the growth of the plant. Both the scatter plot and the correlation matrix showed that the Sentinel-2 vegetation indices have a relatively better correlation with the vegetation indices of the Buteo drone than those of Landsat 9. The Landsat 9 vegetation indices align somewhat better with the leaf spectrometer. Generally, Sentinel-2 showed a better performance than Landsat 9. Further study with sufficient field spectral sampling and repeated UAV imaging is required to improve the quality of the current study.
Keywords: Landsat 9, leaf spectrometer, Sentinel-2, UAV
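The comparison step can be sketched as follows: compute a vegetation index (here NDVI) for each sensor at the sampling points, then build a correlation matrix and heatmap with pandas and seaborn. The reflectance values below are synthetic placeholders, and the sensor noise levels are assumptions for illustration only.

```python
# Illustrative sketch: per-sensor NDVI at the sampling points and a correlation
# heatmap between sensors (synthetic reflectance values).
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

def ndvi(nir, red):
    return (nir - red) / (nir + red)

rng = np.random.default_rng(8)
n_points = 5  # field sampling locations

base_nir = rng.uniform(0.4, 0.6, n_points)
base_red = rng.uniform(0.05, 0.15, n_points)
sensors = {}
for name, noise in [("Sentinel-2", 0.01), ("Landsat 9", 0.03),
                    ("UAV", 0.005), ("Leaf spectrometer", 0.005)]:
    sensors[name] = ndvi(base_nir + rng.normal(0, noise, n_points),
                         base_red + rng.normal(0, noise, n_points))

vi = pd.DataFrame(sensors)
corr = vi.corr()                       # pairwise r between sensors
sns.heatmap(corr, annot=True, cmap="viridis", vmin=-1, vmax=1)
plt.title("NDVI correlation between sensors")
plt.tight_layout()
plt.show()
```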
Procedia PDF Downloads 110
51 Boussinesq Model for Dam-Break Flow Analysis
Authors: Najibullah M, Soumendra Nath Kuiry
Abstract:
Dams and reservoirs are valued for their estimable contributions to irrigation, water supply, flood control, electricity generation, etc., which enhance the prosperity and wealth of society across the world. At the same time, a dam breach can cause a devastating flood that threatens human lives and property. Failures of large dams remain, fortunately, very seldom events. Nevertheless, a number of occurrences have been recorded around the world, corresponding on average to one to two failures worldwide every year. Some of these accidents have had catastrophic consequences. It is therefore crucial to predict dam-break flow for emergency planning and preparedness, as it poses a high risk to life and property. To mitigate the adverse impact of a dam break, modeling is necessary to gain a good understanding of the temporal and spatial evolution of dam-break floods. This study mainly deals with one-dimensional (1D) dam-break modeling. Less commonly used in the hydraulic research community, another possible option for modeling rapidly varied dam-break flows is the extended Boussinesq equations (BEs), which can describe the dynamics of short waves with reasonable accuracy. Unlike the Shallow Water Equations (SWEs), the BEs take into account wave dispersion and a non-hydrostatic pressure distribution. To capture the dam-break oscillations accurately, a numerical scheme of at least fourth-order accuracy is needed to discretize the third-order dispersion terms present in the extended BEs. The scope of this work is therefore to develop a 1D Boussinesq model, fourth-order accurate in both space and time, for dam-break flow analysis using a finite-volume / finite-difference scheme. The spatial discretization of the flux and dispersion terms is achieved through a combination of finite-volume and finite-difference approximations. The flux term was solved using a finite-volume discretization, whereas the bed source and dispersion terms were discretized using a centered finite-difference scheme. Time integration is achieved in two stages, namely a third-order Adams-Bashforth predictor stage and a fourth-order Adams-Moulton corrector stage. The 1D Boussinesq model was implemented in Python 2.7.5. The performance of the developed model was evaluated by comparison with the volume of fluid (VOF) based commercial model ANSYS-CFX. The developed model is used to analyze the risk of cascading dam failures similar to the Panshet dam failure of 1961 that took place in Pune, India. Nevertheless, this model can be used to predict wave overtopping more accurately than shallow water models for designing coastal protection structures.
Keywords: Boussinesq equation, coastal protection, dam-break flow, one-dimensional model
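The predictor-corrector time integration described (third-order Adams-Bashforth predictor, fourth-order Adams-Moulton corrector) is sketched below on a generic semi-discrete system du/dt = f(u), using a linear advection toy problem in place of the full Boussinesq spatial discretization; grid sizes and the bootstrap by forward-Euler steps are assumptions for illustration.

```python
# Illustrative sketch: Adams-Bashforth (3rd order) predictor + Adams-Moulton
# (4th order) corrector time stepping on a toy semi-discrete system du/dt = f(u).
import numpy as np

def rhs(u, dx, c=1.0):
    """Toy right-hand side: periodic linear advection with a centered difference."""
    return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

nx, dt, nsteps = 200, 0.002, 500
dx = 1.0 / nx
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)          # initial hump (stand-in for a free surface)

# Bootstrap the multistep history with simple forward-Euler steps.
f_hist = [rhs(u, dx)]
for _ in range(2):
    u = u + dt * f_hist[-1]
    f_hist.append(rhs(u, dx))

for _ in range(nsteps):
    f_n, f_nm1, f_nm2 = f_hist[-1], f_hist[-2], f_hist[-3]
    # Third-order Adams-Bashforth predictor.
    u_pred = u + dt / 12.0 * (23.0 * f_n - 16.0 * f_nm1 + 5.0 * f_nm2)
    # Fourth-order Adams-Moulton corrector, with f evaluated at the predictor.
    u = u + dt / 24.0 * (9.0 * rhs(u_pred, dx) + 19.0 * f_n - 5.0 * f_nm1 + f_nm2)
    f_hist = [f_nm1, f_n, rhs(u, dx)]

print("mass conservation check:", u.sum() * dx)
```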
Procedia PDF Downloads 235
50 Construction and Analysis of Tamazight (Berber) Text Corpus
Authors: Zayd Khayi
Abstract:
This paper deals with the construction and analysis of a Tamazight text corpus. The grammatical structure of Tamazight remains poorly understood, and the lack of a comparative grammar leads to linguistic issues. In order to fill this gap, even if only in a small way, we constructed a diachronic corpus of the Tamazight language and elaborated a program tool. In addition, this work is devoted to constructing that tool to analyze the different aspects of Tamazight, with its different dialects used in North Africa, specifically in Morocco. It focuses on three Moroccan dialects: Tamazight, Tarifiyt, and Tachlhit. The Latin version was a good choice because of the many sources it has. The corpus is based on the grammatical parameters and features of the language. The text collection contains more than 500 texts that cover a long historical period. It is free, and it will be useful for further investigations. The texts were transformed into an XML format with standardization as the goal. The corpus counts more than 200,000 words. Based on linguistic rules and statistical methods, the original user interface and software prototype were developed by combining web design technologies and Python. The corpus provides users with the ability to easily distinguish between feminine/masculine nouns and verbs. The interface is available in three languages: TMZ, FR, and EN. The selected texts were not initially categorized; this work was done manually. Within corpus linguistics, there is currently no commonly accepted approach to the classification of texts. Texts are distinguished into ten categories. To describe and represent the texts in the corpus, we elaborated the XML structure according to the TEI recommendations. Using the search function may provide users with the types of words they would search for, such as feminine/masculine nouns and verbs. Nouns are divided into two parts, and gender in the corpus has two forms. The neutral form of the word corresponds to masculine, while feminine is indicated by a double t-t affix (the prefix t- and the suffix -t), e.g., Tarbat (girl), Tamtut (woman), Taxamt (tent), and Tislit (bride). However, there are some words whose feminine form contains only the prefix t- and the suffix -a, e.g., Tasa (liver), tawja (family), and tarwa (progenitors). Generally, Tamazight masculine words have prefixes that distinguish them from other words, for instance 'a', 'u', 'i', e.g., Asklu (tree), udi (cheese), ighef (head). Verbs in the corpus for the first person singular and plural have the suffixes 'agh', 'ex', 'egh', e.g., 'ghrex' (I study), 'fegh' (I go out), 'nadagh' (I call). The program tool supports the following operations on the corpus: listing all tokens; listing unique words; computing lexical diversity; and executing different grammatical requests. To conclude, this corpus has only focused on a small group of parts of speech in the Tamazight language, namely verbs and nouns. Work is still ongoing on adjectives, pronouns, adverbs, and others.
Keywords: Tamazight (Berber) language, corpus linguistics, grammar rules, statistical methods
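A minimal sketch of the corpus-tool operations listed above is shown below: tokenizing Latin-script text, flagging candidate feminine nouns using a simplification of the affix rules given in the abstract, and computing lexical diversity. The rule is deliberately crude and the sample is just the example words quoted above; the actual tool would need far richer morphological rules.

```python
# Illustrative sketch of the corpus-tool operations: tokenize, flag candidate
# feminine nouns by affix (simplified rule), and compute lexical diversity.
import re
from collections import Counter

def tokenize(text: str):
    return re.findall(r"[a-z]+", text.lower())

def looks_feminine(token: str) -> bool:
    # Simplified rule from the abstract: prefix t- with suffix -t (tarbat, taxamt)
    # or prefix t- with suffix -a (tasa, tawja). Real morphology needs more rules.
    return token.startswith("t") and (token.endswith("t") or token.endswith("a")) and len(token) > 3

# Sample limited to the example words quoted in the abstract.
sample = "Tarbat Tamtut Taxamt Tislit Tasa tawja tarwa Asklu udi ighef"
tokens = tokenize(sample)

counts = Counter(tokens)                  # all tokens with frequencies
unique_words = sorted(counts)             # list of unique words
lexical_diversity = len(unique_words) / len(tokens)
feminine_candidates = [t for t in unique_words if looks_feminine(t)]

print("tokens:", len(tokens), "unique:", len(unique_words))
print("lexical diversity:", round(lexical_diversity, 3))
print("candidate feminine nouns:", feminine_candidates)
```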
Procedia PDF Downloads 71
49 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago
Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu
Abstract:
Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the 2015 Paris Agreement, all countries must adopt a different and more sustainable transportation system. From bikes to Maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable network of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of its impacts. In this report, the economic and financial implications have been taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA for the comparative NPB analysis since they are all large metropolitan cities with complex transportation systems. All three cities have started action plans to achieve a full fleet of e-buses in the coming decades. In addition, their energy carbon footprints and energy prices are very different, and these are key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate the NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We have considered all essential aspects in our model: initial investment, including the cost of the bus, charger, and installation; government funding (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We see about $1,400,000 in benefits over the 12-year lifetime of an EBS compared to a DBS, provided government funds offset 50% of the EBS purchase cost. With the government subsidy, an EBS starts to generate positive cash flow in the fifth year and can pay back its investment in 5 years. Note that our model counts environmental and health benefits, with $50,000 per bus per year counted as health benefits. Besides the health benefits, the most significant benefits come from energy cost savings and maintenance savings, which are about $600,000 and $200,000 over the 12-year life cycle. Using linear regression, given certain budget limitations, we then designed an optimal three-phase process to replace the entire NYC bus fleet with electric buses in 10 years, i.e., by 2033. The linear regression process minimizes the total cost over the years while keeping the environmental cost lowest. The overall benefit of replacing all DBS with EBS for NYC is over $2.1 billion by 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033, respectively. All NPB analyses and the algorithm to optimize the electrification phase process are implemented in Python code and can be shared. Keywords: financial modeling, total cost ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago
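The following minimal Python sketch shows the shape of such an NPB calculation under a 50% purchase subsidy, as assumed above; the discount rate, extra purchase cost, and annual benefit figures are placeholders rather than the study's actual inputs.

```python
def net_present_benefit(annual_benefits, upfront_extra_cost, subsidy_share=0.5,
                        discount_rate=0.03):
    """Discount a stream of annual benefits (EBS minus DBS cash flows) and
    subtract the subsidised extra upfront cost of the electric bus.
    The 50% purchase subsidy follows the abstract; the discount rate and
    cost figures below are illustrative assumptions, not the study's inputs."""
    npv = sum(b / (1.0 + discount_rate) ** (t + 1)
              for t, b in enumerate(annual_benefits))
    return npv - upfront_extra_cost * (1.0 - subsidy_share)

# Hypothetical per-bus example: 12 years of health, energy and maintenance
# benefits versus an assumed extra purchase cost for the e-bus.
annual = [50_000 + 600_000 / 12 + 200_000 / 12] * 12
print(round(net_present_benefit(annual, upfront_extra_cost=400_000)))
```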
Procedia PDF Downloads 54
48 Analysis of the Evolution of Techniques and Review in Cleft Surgery
Authors: Tomaz Oliveira, Rui Medeiros, André Lacerda
Abstract:
Introduction: Cleft lip and/or palate are the most frequent forms of congenital craniofacial anomaly, affecting mainly the middle third of the face and manifesting as functional and aesthetic changes. Bilateral cleft lip represents a reconstructive surgical challenge, not only for the labial component but also for the associated nasal deformation. Recently, the paradigm of the approach to this pathology has changed, placing the focus on muscle reconstruction and anatomical repositioning of the nasal cartilages in order to obtain the best aesthetic and functional results. The aim of this study is to carry out a systematic review of the surgical approach to bilateral cleft lip, retrospectively analyzing the case series of the Plastic Surgery Service at Hospital Santa Maria (Lisbon, Portugal) for this pathology, globally assessing the characteristics of the operated patients, and studying the different surgical approaches and their complications over the last 20 years. Methods: The present work is a retrospective, descriptive study of patients who underwent at least one reconstructive surgery for cleft lip and/or palate in the CPRE service of the HSM between January 1, 1997 and December 31, 2017. Data relating to 361 individuals were analyzed; after applying the exclusion criteria, these constituted a sample of 212 participants. The variables analyzed were the year of the first surgery, gender, age, type of orofacial cleft, surgical approach, and its complications. Results: There was a higher overall prevalence in males, with cleft lip and cleft lip and palate occurring in greater proportion in males, while cleft palate was more common in females. The most frequently recorded malformation was cleft lip and palate, which was complete in most cases. Regarding laterality, alterations with a unilateral labial component were the most commonly observed, with the left lip described as the most affected. The vast majority of patients underwent primary intervention by 12 months of age. The surgical techniques used in the approach to this pathology showed marked chronological variation over the years. Discussion: Cleft lip and/or palate is a medical condition associated with high aesthetic and functional morbidity, which requires early treatment in order to optimize the long-term outcome. The existence of a nasolabial component and its surgical correction plays a central role in the treatment of this pathology. High rates of post-surgical complications and unconvincing aesthetic results have motivated an evolution of surgical technique, increasingly evident in recent years, which today allows satisfactory aesthetic results to be achieved even in bilateral cleft lip with highly complex deformation. The introduction of techniques that favor nasolabial reconstruction based on anatomical principles has been producing increasingly convincing results. Most of the results obtained in this study are, in general, compatible with the results published in the literature. Conclusion: This work showed that small variations in surgical technique can bring significant improvements in the functional and aesthetic results of the treatment of bilateral cleft lip. Keywords: cleft lip, cleft palate, congenital abnormalities, craniofacial malformations
Procedia PDF Downloads 114
47 Use of End-Of-Life Footwear Polymer EVA (Ethylene Vinyl Acetate) and PU (Polyurethane) for Bitumen Modification
Authors: Lucas Nascimento, Ana Rita, Margarida Soares, André Ribeiro, Zlatina Genisheva, Hugo Silva, Joana Carvalho
Abstract:
The footwear industry is an essential fashion industry, focused on producing various types of footwear, such as shoes, boots, sandals, sneakers, and slippers. Global footwear consumption has doubled every 20 years since the 1950s. It is estimated that in 1950, each person consumed one new pair of shoes per year; by 2005, over 20 billion pairs of shoes were being consumed. To meet global footwear demand, production reached $24.2 billion, equivalent to about $74 per person in the United States, or three new pairs of shoes per person worldwide. The issue of footwear waste stems from the fact that shoe production can generate a large amount of waste, much of which is difficult to recycle or reuse. This waste includes scraps of leather, fabric, rubber, plastics, toxic chemicals, and other materials. The search for alternative solutions for waste treatment and valorization is increasingly relevant in the current context, particularly when focused on utilizing waste as a source of substitute materials. From the perspective of the new circular economy paradigm, this approach is of utmost importance, as it aims to preserve natural resources and minimize the environmental impact associated with sending waste to landfill. In this sense, the incorporation of waste into industrial sectors that allow for the recovery of large volumes, such as road construction, becomes an urgent and necessary solution from an environmental standpoint. This study explores the use of plastic waste from the footwear industry as a substitute for virgin polymers in bitumen modification, a solution that points toward a more sustainable future. Replacing conventional polymers with plastic waste in asphalt composition reduces the amount of waste sent to landfill and offers an opportunity to extend the lifespan of road infrastructure. By incorporating waste into construction materials, it is possible to reduce the consumption of natural resources and the emission of pollutants, promoting a more circular and efficient economy. In the initial phase of this study, waste materials from end-of-life footwear were selected, and the plastic waste with the highest potential for application was separated. Based on a literature review, EVA (ethylene vinyl acetate) and PU (polyurethane) were identified as suitable polymers for modifying 50/70 penetration grade bitumen. Each polymer was analysed at concentrations of 3% and 5%. The production process involved fragmenting the polymer to a size of 4 millimetres, heating the materials to 180 ºC, and mixing for 10 minutes at low speed, after which the blend was mixed for 30 minutes in a high-speed mixer. The tests included penetration, softening point, viscosity, and rheological assessments. The test results showed that the mixtures with EVA performed better than those with PU, as EVA exhibited greater resistance to temperature, a better viscosity curve, and greater elastic recovery in the rheological tests. Keywords: footwear waste, hot asphalt pavement, modified bitumen, polymers
Procedia PDF Downloads 21
46 Welfare and Sustainability in Beef Cattle Production on Tropical Pasture
Authors: Andre Pastori D'Aurea, Lauriston Bertelli Feranades, Luis Eduardo Ferreira, Leandro Dias Pinto, Fabiana Ayumi Shiozaki
Abstract:
The aim of this study was to improve the production of beef cattle on tropical pasture without harming the environment. On tropical pastures, cattle live-weight gain is lower than in feedlots, and forage production is seasonal, changing from season to season. Thus, concerned with sustainable livestock production, the Premix Company has developed strategies to improve the production of beef cattle on tropical pasture and to ensure the sustainability of welfare and production. There are two important principles in this production system: 1) increasing individual gains through better supplementation, and 2) increasing productivity per unit area through better forage quality, such as corn silage or other forms of forage conservation (currently used only in winter), and through natural additives in the diet. This production system was applied from June 2017 to May 2018 at the Research Center of the Premix Company, Patrocínio Paulista, São Paulo State, Brazil. The area used comprised 9 hectares of Brachiaria brizantha pasture. Thirty-six Nellore steers were evaluated for one year; their initial weight was 253 kg. The parameters used were average daily gain and gain per area. These indicated the corrections to be made and helped design future fertilization; in this case, the pasture was fertilized with 30 kg of nitrogen per animal, divided into two applications. The diet consisted of pasture and protein-energy supplements (0.4% of live weight). The supplement included the natural additive Fator P® (Premix Company). Fator P® is an additive composed of amino acids (lysine, methionine, and tyrosine at 16,400, 2,980, and 3,000 mg.kg-1, respectively), minerals, probiotics (Saccharomyces cerevisiae, 7 x 10E8 CFU.kg-1), and essential fatty acids (linoleic and oleic acids, 108.9 and 99 g.kg-1, respectively). Due to seasonal changes, in winter the diet was supplemented by increasing the forage offer with maize silage: 1% of live weight in corn silage and 0.4% of live weight in protein-energy supplements with the additive Fator P®. At the end of the period, productivity was calculated by summing the individual gains over the area used. The average daily gain of the animals was 693 grams per day, and 1,005 kg/hectare/year was produced. This production is about 8 times higher than the Brazilian national average for beef production. To succeed with this system, it is necessary to increase the gains per area, which in turn requires increasing the stocking capacity per area. Pasture management is very important to the system's success because dietary decisions were based on the quantity and quality of the forage. We therefore recommend the use of animals in the growth phase, because the response to supplementation is greater in that phase and more animals can be allocated per area. This system's carbon footprint reduces emissions by 61.2 percent compared to the Brazilian average. This beef cattle production system can therefore be efficient and environmentally friendly. Moreover, the cattle benefit from their natural environment without competing with or impacting human food production. Keywords: cattle production, environment, pasture, sustainability
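As a back-of-envelope check of the productivity figure reported above, the short Python sketch below reproduces the per-hectare arithmetic from the quoted herd size, area, and daily gain, assuming a 365-day year.

```python
# Back-of-envelope check of the productivity quoted in the abstract:
# 36 steers gaining 693 g/day on 9 hectares for one year (365 days assumed).
steers = 36
area_ha = 9
daily_gain_kg = 0.693
days = 365

gain_per_animal = daily_gain_kg * days      # ~253 kg per steer per year
total_gain = gain_per_animal * steers       # ~9,100 kg for the herd
productivity = total_gain / area_ha         # ~1,010 kg/hectare/year, close to the reported 1,005

print(round(gain_per_animal), round(total_gain), round(productivity))
```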
Procedia PDF Downloads 153
45 South African Multiple Deprivation-Concentration Index Quantiles Differentiated by Components of Success and Impediment to Tuberculosis Control Programme Using Mathematical Modelling in Rural O. R. Tambo District Health Facilities
Authors: Ntandazo Dlatu, Benjamin Longo-Mbenza, Andre Renzaho, Ruffin Appalata, Yolande Yvonne Valeria Matoumona Mavoungou, Mbenza Ben Longo, Kenneth Ekoru, Blaise Makoso, Gedeon Longo Longo
Abstract:
Background: The gap between the complexities of integrating tuberculosis/HIV control and the available evidence-based knowledge motivated this study. Therefore, the objective of this study was to explore correlations between national TB management guidelines, multiple deprivation indexes, quantiles, components, and levels of the tuberculosis control programme using mathematical modelling in rural O.R. Tambo District health facilities, South Africa. Methods: The study design used mixed secondary data analysis and cross-sectional analysis between 2009 and 2013 across O.R. Tambo District, Eastern Cape, South Africa, using univariate/bivariate analysis, linear multiple regression models, and multivariate discriminant analysis. Health inequality indicators and components of impediments to the tuberculosis control programme were evaluated. Results: In total, 62,400 TB notification records were analyzed for the period 2009-2013. There was a significant negative correlation between Financial Year Expenditure (r = -0.894; P = 0.041), seropositive HIV status (r = -0.979; P = 0.004), population density (r = -0.881; P = 0.048) and the number of TB defaulters among all TB cases. Unsuccessful control of the TB management programme was shown through correlations between the numbers of new smear-positive PTB cases, new smear-positive TB defaulters, TB failures among all TB cases, the pulmonary tuberculosis case finding index, and the deprivation-concentration-dispersion index. Successful TB programme control was shown through significant negative associations between declining numbers of deaths from HIV-TB co-infection, TB deaths among all TB cases, and the SMIAD gradient/deprivation-concentration-dispersion index. The multivariate linear model was summarized by an unadjusted R of 96%, an adjusted R² of 95%, a standard error of the estimate of 0.110, an R² change of 0.959, and a significant variance change (P = 0.004), predicting TB defaulters among all TB cases with the equation y = 8.558 - 0.979 × (number of HIV-seropositive cases). After adjusting for confounding factors (the PTB case finding index, new smear-positive TB defaulters, TB deaths among all TB cases, TB defaulters among all TB cases, and TB failures among all TB cases), HIV-TB deaths and new smear-positive PTB cases were identified as the most important, significant, and independent indicators discriminating the most deprived deprivation quintile from the other deprivation quintiles (2-5) using discriminant analysis. Conclusion: Eliminating poverty-related conditions such as overcrowding, lack of sanitation, and environments with the highest burden of HIV might end the TB threat in O.R. Tambo District, Eastern Cape, South Africa. Furthermore, an ongoing, adequately budgeted, comprehensive, holistic, and collaborative initiative towards the Sustainable Development Goals (SDGs) is necessary for the complete elimination of TB in the poor O.R. Tambo District. Keywords: tuberculosis, HIV/AIDS, success, failure, control program, health inequalities, South Africa
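A minimal Python sketch of the quoted regression equation follows; it only applies the reported coefficients, and since the scaling of the predictor is not restated here, the numbers produced are purely illustrative.

```python
def predicted_tb_defaulters(hiv_seropositive):
    """Apply the regression equation quoted in the abstract,
    y = 8.558 - 0.979 * x, where x is the number of HIV-seropositive cases.
    The scaling/units of x are those of the original model and are not
    reproduced here, so this is illustrative only."""
    return 8.558 - 0.979 * hiv_seropositive

for x in (0, 2, 5, 8):
    print(x, round(predicted_tb_defaulters(x), 3))
```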
Procedia PDF Downloads 175
44 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Incorporating these complex structures and material properties into numerical simulation would lead to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computed tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of the concrete cube consists of a series of two-dimensional slices. A total of 49 slices were obtained from a 150 mm cube, giving a slice interval of approximately 3 mm. Because CT scanning is non-destructive, the same cube can later be tested in compression in a universal testing machine (UTM) to find its strength. The image processing and the extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green, and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and the mesoscale constituents are identified by assigning values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale according to relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1 and 3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on the load-carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete. Keywords: concrete, image processing, plane strain, interfacial transition zone
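The pixel-labelling step described above can be sketched in Python as follows; the mapping from the 0-255 grayscale range to the 0-9 scale is an illustrative assumption, since the original thresholds were calibrated from the CT data.

```python
import numpy as np

def classify_pixels(gray):
    """Normalize an 8-bit grayscale CT slice to the 0-9 scale described in the
    abstract and map value ranges to mesoscale constituents. The grayscale-to-
    scale mapping here is an assumption for illustration only."""
    scale = np.floor(gray.astype(float) / 256.0 * 10).clip(0, 9).astype(int)
    labels = np.empty(scale.shape, dtype=object)
    labels[scale == 0] = "void"
    labels[(scale >= 1) & (scale <= 3)] = "ITZ boundary"
    labels[(scale >= 4) & (scale <= 6)] = "mortar"
    labels[scale >= 7] = "aggregate"
    return scale, labels

# Hypothetical 3x3 patch of an 8-bit slice
patch = np.array([[10, 120, 200], [90, 40, 230], [15, 160, 250]], dtype=np.uint8)
print(classify_pixels(patch)[1])
```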
Procedia PDF Downloads 243
43 Predicting Susceptibility to Coronary Artery Disease using Single Nucleotide Polymorphisms with a Large-Scale Data Extraction from PubMed and Validation in an Asian Population Subset
Authors: K. H. Reeta, Bhavana Prasher, Mitali Mukerji, Dhwani Dholakia, Sangeeta Khanna, Archana Vats, Shivam Pandey, Sandeep Seth, Subir Kumar Maulik
Abstract:
Introduction: Research has demonstrated a connection between coronary artery disease (CAD) and genetics. We performed deep literature mining, using both bioinformatic and manual efforts, to identify susceptibility polymorphisms in coronary artery disease. Further, the study sought to validate these findings in an Asian population. Methodology: In the first phase, we used an automated pipeline that organizes and presents structured information on SNPs, populations, and diseases. The information was obtained by applying Natural Language Processing (NLP) techniques to approximately 28 million PubMed abstracts. To accomplish this, we utilized Python scripts to extract and curate disease-related data, filter out false positives, and categorize the data into 24 hierarchical groups using Named Entity Recognition (NER) algorithms. From this extensive search, a total of 466 unique PubMed Identifiers (PMIDs) and 694 Single Nucleotide Polymorphisms (SNPs) related to coronary artery disease (CAD) were identified. To refine the selection, a thorough manual examination of all the studies was carried out. Specifically, SNPs that demonstrated susceptibility to CAD and exhibited a positive Odds Ratio (OR) were selected, and a final pool of 324 SNPs was compiled. The next phase involved validating the identified SNPs in DNA samples from 96 CAD patients and 37 healthy controls from an Indian population using the Global Screening Array. Results: Of the 324 SNPs, only 108 were detected in the array data, and of these, 4 SNPs showed a significant difference in minor allele frequency between cases and controls. These were rs187238 of the IL-18 gene, rs731236 of the VDR gene, rs11556218 of the IL16 gene, and rs5882 of the CETP gene. Prior research has reported associations of these SNPs with various pathways, such as endothelial damage, susceptibility conferred by vitamin D receptor (VDR) polymorphisms, and reduction of HDL-cholesterol levels, ultimately leading to the development of CAD. Among these, only rs731236 had previously been studied in an Indian population, and only in diabetes and vitamin D deficiency. For the first time, these SNPs are reported to be associated with CAD in an Indian population. Conclusion: This pool of 324 SNPs is a unique resource that can help uncover risk associations in CAD; here, it was validated in an Indian population. Further validation in different populations may offer valuable insights, contribute to the development of a screening tool, and help enable the implementation of primary prevention strategies targeted at vulnerable populations. Keywords: coronary artery disease, single nucleotide polymorphism, susceptible SNP, bioinformatics
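A toy Python sketch of the mining step is shown below; it stands in for the described pipeline by using a simple rsID regular expression and keyword co-occurrence instead of the full NER machinery, and the record schema ('pmid', 'text') is hypothetical.

```python
import re
from collections import defaultdict

RSID = re.compile(r"\brs\d{3,}\b")
CAD_TERMS = ("coronary artery disease", "cad", "myocardial infarction")

def extract_cad_snps(abstracts):
    """Toy stand-in for the described mining step: scan abstract records
    (dicts with hypothetical 'pmid' and 'text' keys) for rsIDs and keep only
    those co-occurring with CAD-related terms. The real pipeline applied NER
    over ~28 million PubMed abstracts; this sketch only shows the idea."""
    snp_to_pmids = defaultdict(set)
    for rec in abstracts:
        text = rec["text"].lower()
        if any(term in text for term in CAD_TERMS):
            for rsid in RSID.findall(rec["text"]):
                snp_to_pmids[rsid].add(rec["pmid"])
    return snp_to_pmids

demo = [{"pmid": "000001",
         "text": "rs731236 in VDR was associated with coronary artery disease."}]
print(dict(extract_cad_snps(demo)))
```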
Procedia PDF Downloads 77
42 Automatic Identification of Pectoral Muscle
Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina
Abstract:
Mammography is a widely used imaging modality for diagnosing breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have sought to quantify mammographic breast density accurately. In clinical routine, radiologists perform image evaluations through BIRADS (Breast Imaging Reporting and Data System) assessment; however, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing a preliminary opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, so it is hard to quantify mammographic breast density automatically. Therefore, a pre-processing step is needed to segment the pectoral muscle, which might otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed, on the Matlab® platform, for the pre-processing of the images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by the active contour method, whose seed was placed at the pectoral muscle boundary found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method showed a Jaccard similarity coefficient greater than 90% for all analyzed images, demonstrating the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods with respect to the area (mm²) of the segmented pectoral muscle and showed data within the 95% confidence interval, supporting the accuracy of the segmentation relative to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for subsequent quantification of the fibroglandular tissue volume present in the breast. Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle
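Although the original algorithm was implemented in Matlab®, the following Python sketch (using scikit-image analogues) outlines the same sequence of steps: thresholding, a Hough-transform line estimate of the pectoral boundary, and an active contour seeded on that line. All parameter values are illustrative assumptions, not those of the published method.

```python
import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.transform import hough_line, hough_line_peaks
from skimage.segmentation import active_contour

def segment_pectoral(mammogram):
    """Rough Python analogue of the pipeline described in the abstract.
    Thresholds, peak counts, and smoothing parameters are placeholders."""
    # 1) Threshold away the non-biological background
    mask = mammogram > threshold_otsu(mammogram)
    breast = mammogram * mask

    # 2) Straight-line estimate of the pectoral edge via the Hough transform
    #    (bright-region cutoff is an assumption; assumes the edge is not horizontal)
    edges = breast > np.percentile(breast[mask], 95)
    hspace, angles, dists = hough_line(edges)
    _, best_angles, best_dists = hough_line_peaks(hspace, angles, dists, num_peaks=1)

    # 3) Seed an active contour along the detected line and let it relax
    rows = np.linspace(0, breast.shape[0] - 1, 200)
    cols = (best_dists[0] - rows * np.sin(best_angles[0])) / np.cos(best_angles[0])
    seed = np.stack([rows, np.clip(cols, 0, breast.shape[1] - 1)], axis=1)
    snake = active_contour(gaussian(breast, sigma=3), seed, alpha=0.01, beta=1.0)
    return mask, snake
```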
Procedia PDF Downloads 354
41 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis
Authors: Mohamed Ali Abdennadher
Abstract:
Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo noise reduction, thresholding, and watershed segmentation to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of the particle size distribution. The processed particle data then serves as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges of parallelizing FEA, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to run on very high core counts. The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments. Keywords: X-Ray computed tomography, finite element analysis, soil compression behavior, particle morphology
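The particle-isolation step can be sketched in Python roughly as follows; the filter settings (Otsu threshold, minimum peak distance) are assumptions for illustration, not the study's actual parameters.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max
from skimage.measure import regionprops

def label_particles(ct_slice):
    """Illustrative version of the particle-isolation step described above:
    threshold the CT slice, split touching grains with a distance-transform
    watershed, and report equivalent particle diameters."""
    solids = ct_slice > threshold_otsu(ct_slice)          # particles vs voids
    distance = ndi.distance_transform_edt(solids)
    peaks = peak_local_max(distance, min_distance=5, exclude_border=False)
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=solids)
    # Equivalent circular diameter of each labelled particle
    diameters = [2.0 * (p.area / np.pi) ** 0.5 for p in regionprops(labels)]
    return labels, np.array(diameters)
```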
Procedia PDF Downloads 42
40 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems
Authors: Georgi Y. Georgiev, Matthew Brouillet
Abstract:
This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory about the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that there will be an increase in system information, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between the internal entropy decrease rate and the external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields. Keywords: complexity, self-organization, agent based modelling, efficiency
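A minimal Python sketch of the entropy bookkeeping used to quantify self-organization is shown below; the grid size, ant count, and the 'path cells' chosen for the ordered state are illustrative, and the sketch is a post-processing step, not the NetLogo model itself.

```python
import numpy as np

def positional_entropy(ant_cells, n_cells):
    """Shannon entropy (in bits) of the distribution of ants over grid cells,
    the kind of quantity used to track the decrease in system entropy as a
    path forms. ant_cells holds the cell index occupied by each ant."""
    counts = np.bincount(ant_cells, minlength=n_cells).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Disordered start (ants spread uniformly) vs ordered end (ants concentrated
# on a few assumed path cells) for 200 ants on a 20x20 grid.
rng = np.random.default_rng(0)
start = rng.integers(0, 400, size=200)
end = rng.choice([180, 181, 182, 183], size=200)
print(positional_entropy(start, 400), positional_entropy(end, 400))
```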
Procedia PDF Downloads 70
39 Improving the Biomechanical Resistance of a Treated Tooth via Composite Restorations Using Optimised Cavity Geometries
Authors: Behzad Babaei, B. Gangadhara Prusty
Abstract:
The objective of this study is to assess the hypothesis that a restored tooth with a class II occlusal-distal (OD) cavity can be strengthened by designing an optimized cavity geometry and by selecting a composite restoration with optimized elastic moduli when there is a sharp de-bonded edge at the tooth-restoration interface. Methods: A scanned human maxillary molar tooth was segmented into dentine and enamel parts. The dentine and enamel profiles were extracted and imported into finite element (FE) software. The enamel rod orientations were estimated virtually. Fifteen models of the restored tooth with different occlusal cavity depths (1.5, 2, and 2.5 mm) and internal cavity angles were generated. Using a semi-circular stone part, a 400 N load was applied to two contact points of the restored tooth model. The junctions between the enamel, dentine, and restoration were considered perfectly bonded. All parts in the model were considered homogeneous, isotropic, and elastic. Quadrilateral and triangular elements were employed in the models. A mesh convergence analysis was conducted to verify that the element count did not influence the simulation results. Using a criterion of 5% error in the stress, we found that a total of over 14,000 elements resulted in convergence of the stress. A Python script was employed to automatically assign moduli of 2-22 GPa (in increments of 4 GPa) to the composite restorations, 18.6 GPa to the dentine, and two different elastic moduli to the enamel (72 GPa in the direction of the enamel rods and 63 GPa in the perpendicular direction). Linear, homogeneous, elastic material models were considered for the dentine, enamel, and composite restorations, and 108 FEA simulations were successively conducted. Results: The internal cavity angle (α) significantly altered the peak maximum principal stress at the interface of the enamel and restoration. The structures most resistant to the contact loads were observed in the models with α = 100° and 105°. Interestingly, even when the directional mechanical properties of the enamel rods were disregarded, the models with α = 100° and 105° exhibited the highest resistance against the mechanical loads. Regarding the effect of occlusal cavity depth, the models with 1.5 mm depth showed higher resistance to contact loads than the models with deeper cavities (2.0 and 2.5 mm). Moreover, composite moduli in the range of 10-18 GPa alleviated the stress levels in the enamel. Significance: For the class II OD cavity models in this study, the optimal geometries, composite properties, and occlusal cavity depths were determined. Designing the cavities with α ≥ 100° was significantly effective in minimizing peak stress levels. The composite restoration with optimized properties reduced the stress concentrations at critical points of the models. Additionally, when more enamel was preserved, a sturdier enamel-restoration interface against the mechanical loads was observed. Keywords: dental composite restoration, cavity geometry, finite element approach, maximum principal stress
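The parameter sweep driven by the Python script mentioned above can be sketched as follows; the composite moduli (2-22 GPa in 4 GPa steps), the dentine modulus, and the two enamel moduli are taken from the abstract, while the list of cavity angles is an assumed placeholder, which is why the case count below (90) differs from the 108 simulations actually reported.

```python
from itertools import product

# Values quoted in the abstract
cavity_depths_mm = [1.5, 2.0, 2.5]
composite_moduli_gpa = range(2, 23, 4)               # 2, 6, 10, 14, 18, 22 GPa
MATERIALS = {
    "dentine": {"E_gpa": 18.6},
    "enamel": {"E_rod_gpa": 72.0, "E_perp_gpa": 63.0},
}

# Assumed example angles; the full set used in the study is not given here
cavity_angles_deg = [90, 95, 100, 105, 110]

def build_cases():
    """Enumerate one FEA case per (depth, angle, composite modulus) combination."""
    for depth, angle, e_comp in product(cavity_depths_mm, cavity_angles_deg,
                                        composite_moduli_gpa):
        yield {"depth_mm": depth, "angle_deg": angle,
               "composite_E_gpa": e_comp, **MATERIALS}

print(sum(1 for _ in build_cases()))   # 3 * 5 * 6 = 90 illustrative cases
```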
Procedia PDF Downloads 106
38 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting that used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mistranscribed during transcription were recorded and replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were split into paragraphs (separated by change of speaker), and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours' worth of data to about 20. The data was further processed through light manual observation – any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the documents. This narrowed the data down to about 5 hours' worth of processing. The qualitative researchers were then able to find 8 more connections in addition to the previous 4, exceeding the minimum quota of 3 required to satisfy the grant. Major findings of the study and the subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can review the raw data instead of working with highly manipulated results. Given the success in deliverable creation, as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form. Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
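A rough Python sketch of the filtering-and-summarization stage is given below; the capitalized-word heuristic stands in for the keyword extractor actually used, the 300-character cutoff follows the abstract, and the BART checkpoint named here (facebook/bart-large-cnn) is an assumed example rather than the model the authors used.

```python
import re
from transformers import pipeline

# BART summarizer; the specific checkpoint is an illustrative assumption.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_relevant(transcript):
    """Split a cleaned transcript into paragraphs, drop those under 300
    characters, collect proper-noun-like tokens across the interview, and
    summarize only paragraphs that mention them."""
    paragraphs = [p.strip() for p in transcript.split("\n\n") if len(p.strip()) >= 300]
    proper_nouns = set()
    for p in paragraphs:
        proper_nouns.update(re.findall(r"\b[A-Z][a-z]{2,}\b", p))
    results = []
    for p in paragraphs:
        if any(noun in p for noun in proper_nouns):
            summary = summarizer(p, max_length=60, min_length=15, do_sample=False)
            results.append(summary[0]["summary_text"])
    return results
```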
Procedia PDF Downloads 34