Search results for: network user rules

839 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing

Authors: Rowan P. Martnishn

Abstract:

During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, where words frequently mis-transcribed during transcription were recorded and replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were split into paragraphs (separated by change in speaker), and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to that respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been cut from about 60 hours' worth of data to 20. The data was further processed through light, manual observation – any summaries which fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours' worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and the subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
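A minimal sketch of the pipeline described above — speaker-turn splitting, short-paragraph filtering, proper-noun extraction, and BART summarization of noun-bearing paragraphs. spaCy and the Hugging Face facebook/bart-large-cnn checkpoint are illustrative substitutes, since the abstract does not name its exact tools:

```python
# pip install spacy transformers torch
# python -m spacy download en_core_web_sm
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def filter_paragraphs(transcript: str, min_chars: int = 300) -> list[str]:
    """Split on speaker turns (blank lines here) and drop short banter."""
    paragraphs = [p.strip() for p in transcript.split("\n\n")]
    return [p for p in paragraphs if len(p) >= min_chars]

def proper_nouns(paragraphs: list[str]) -> set[str]:
    """Collect every proper noun across the interview."""
    return {tok.text for p in paragraphs for tok in nlp(p) if tok.pos_ == "PROPN"}

def summarize_relevant(paragraphs: list[str], nouns: set[str]) -> list[str]:
    """Summarize only the paragraphs that mention at least one proper noun."""
    hits = [p for p in paragraphs if any(n in p for n in nouns)]
    return [summarizer(p, max_length=60, min_length=10)[0]["summary_text"]
            for p in hits]
```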

Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding

Procedia PDF Downloads 29
838 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller, communicating with a personal computer (PC) through USB (Universal Serial Bus). The research deployed knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, and used artificial intelligence (Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) for performance evaluation. Both devices (standard and designed) were operated for 180 days under the same atmospheric conditions to collect temperature, relative humidity, and pressure data. The acquired data were used to train ANN and ARIMA models in the MATLAB R2012b environment to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the coefficient of determination (R²), and Mean Percentage Error (MPE) were deployed as standardized measures of the models' performance in predicting precipitation. The results from the operation of the developed device show that it has an efficiency of 96% and is also compatible with personal computers (PCs) and laptops. The simulation results for the acquired data show that the ANN's precipitation (rainfall) prediction for two months (May and June 2017) had a disparity error of 1.59%, while ARIMA's was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
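A minimal sketch of the forecasting-and-scoring step, assuming statsmodels for ARIMA and scikit-learn for the error metrics (the ARIMA order and the MPE convention are illustrative; the study fitted its models in MATLAB):

```python
# pip install numpy statsmodels scikit-learn
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def fit_and_score(train: np.ndarray, test: np.ndarray, order=(1, 1, 1)) -> dict:
    """Fit ARIMA on training rainfall and score its out-of-sample forecast."""
    forecast = ARIMA(train, order=order).fit().forecast(steps=len(test))
    return {
        "RMSE": float(np.sqrt(mean_squared_error(test, forecast))),
        "MAE": float(mean_absolute_error(test, forecast)),
        "R2": float(r2_score(test, forecast)),
        "MPE": float(100 * np.mean((test - forecast) / test)),  # assumes no zero observations
    }
```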

Keywords: data acquisition system, device design, weather data, precipitation prediction, FUTA standard device

Procedia PDF Downloads 92
837 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process using the MS-MARCO dataset, trained on 500K queries, to extract the most relevant text passage and shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query "Where will the next Olympics be?" The gold answer for this query as given in the GNQ dataset is "Tokyo". Since the dataset was collected in the year 2016, and the next Olympics after 2016 were the 2020 Games in Tokyo, this was correct at the time. But if the same question is asked in 2022, then the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
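The abstract does not spell out the metric's exact form; below is a minimal sketch of one plausible reading — gold answers stored with validity windows, with the system credited when any of its top-n answers matches the answer valid at the evaluation timestamp:

```python
from datetime import date

# Hypothetical time-stamped gold answers for one time-varying question.
GOLD = {
    "where will the next olympics be": [
        (date(2016, 1, 1), date(2021, 8, 8), "tokyo"),
        (date(2021, 8, 9), date(2024, 8, 11), "paris"),
    ],
}

def time_aware_match(question: str, top_n: list[str], today: date) -> bool:
    """Credit the system if any top-n answer matches the gold answer valid today."""
    for start, end, gold in GOLD.get(question.lower(), []):
        if start <= today <= end:
            return any(gold in p.lower() for p in top_n)
    return False

print(time_aware_match("Where will the next Olympics be", ["Paris, 2024"],
                       date(2022, 6, 1)))  # True
```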

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 101
836 Understanding the Semantic Network of Tourism Studies in Taiwan by Using Bibliometrics Analysis

Authors: Chun-Min Lin, Yuh-Jen Wu, Ching-Ting Chung

Abstract:

The formulation of tourism policies requires objective academic research and evidence as support, especially research from local academia. Taiwan is a small island, and its economic growth relies heavily on tourism revenue. The Taiwanese government has been devoted to promoting the tourism industry over the past few decades. Scientific research outcomes by Taiwanese scholars can help lay the foundations for the government's drafting of future tourism policy. In this study, a total of 120 full journal articles published between 2008 and 2016 in the Journal of Tourism and Leisure Studies (JTSL) were examined to explore the trends of tourism research in Taiwan. JTSL is one of the most important Taiwanese journals in the tourism discipline; it focuses on tourism-related issues and uses traditional Chinese as its language of study. The bibliometric method of co-word analysis was employed for the semantic analysis in this study. When analyzing Chinese words and phrases, word segmentation is a crucial step. It must be carried out first and precisely in order to obtain meaningful words or word chunks for further frequency calculation. A word segmentation system based on an N-gram algorithm was developed in this study to conduct the semantic analysis, and the 100 groups of meaningful phrases with the highest recurrence rates were identified. Subsequently, co-word analysis was employed for semantic classification. The results showed that the themes of tourism research in Taiwan in recent years cover tourism education, environmental protection, hotel management, information technology, and senior tourism. The results give insight into the related issues and serve as a reference for tourism-related policy making and follow-up research.
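A minimal sketch of the character N-gram counting that underlies such a segmentation system (a real system would prune substrings of longer frequent chunks and apply frequency thresholds; the corpus string below is a stand-in):

```python
from collections import Counter

def ngram_counts(text: str, n_values=(2, 3, 4)) -> Counter:
    """Count overlapping character n-grams — the basis for finding recurrent chunks."""
    counts = Counter()
    for n in n_values:
        counts.update(text[i:i + n] for i in range(len(text) - n + 1))
    return counts

corpus = "觀光教育與觀光政策之研究"  # stand-in for the 120-article corpus
top_chunks = ngram_counts(corpus).most_common(100)
```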

Keywords: bibliometrics, co-word analysis, word segmentation, tourism research, policy

Procedia PDF Downloads 229
835 Interpersonal Competence Related to the Practice Learning of Occupational Therapy Students in Hong Kong

Authors: Lik Hang Gary Wong

Abstract:

Background: Practice learning is crucial for preparing healthcare professionals to meet real challenges upon graduation. Students are required to demonstrate their competence in managing interpersonal challenges, such as working in teams with other professionals and communicating well with service users, during the placement. Such competence precedes clinical practice, and it may eventually affect students' actual performance in a clinical context. Unfortunately, there have been limited studies investigating how such competence affects students' performance in practice learning. Objectives: The aim of this study is to investigate how self-rated interpersonal competence affects students' actual performance during clinical placement. Methods: Forty occupational therapy students from Hong Kong were recruited for this study. Prior to the clinical placement (level two or above), they completed an online survey that included the Interpersonal Communication Competence Scale (ICCS), measuring self-perceived competence in interpersonal communication. Near the end of their placement, the clinical educator rated students' performance with the Student Practice Evaluation Form - Revised edition (SPEF-R). The SPEF-R measures the eight core competency domains required of an entry-level occupational therapist. This study adopted a cross-sectional observational design. Pearson correlation and multiple regression analyses were conducted to examine the relationship between students' interpersonal communication competence and their actual performance in clinical placement. Results: The ICCS total scores were significantly correlated with all the SPEF-R domains, with correlation coefficients r ranging from 0.39 to 0.51. The strongest association was found with the co-worker communication domain (r = 0.51, p < 0.01), followed by the information gathering domain (r = 0.50, p < 0.01). With the ICCS total score as the independent variable and the ratings in the various SPEF-R domains as the dependent variables in the multiple regression analyses, the interpersonal competence measure was identified as a significant predictor of co-worker communication (R² = 0.33, β = 0.014, SE = 0.006, p = 0.026), information gathering (R² = 0.27, β = 0.018, SE = 0.007, p = 0.011), and service provision (R² = 0.17, β = 0.017, SE = 0.007, p = 0.020). Moreover, some specific communication skills appeared to be especially important to clinical practice. For example, immediacy, which means whether the students were readily approachable on all social occasions, correlated with all the SPEF-R domains, with r-values ranging from 0.33 to 0.45. Other sub-skills, such as empathy, interaction management, and supportiveness, were also found to be significantly correlated with most of the SPEF-R domains. Meanwhile, the ICCS scores correlated differently with the co-worker communication domain (r = 0.51, p < 0.01) and the communication with the service user domain (r = 0.39, p < 0.05). This suggests that different communication skill sets are required for different interpersonal contexts within the workplace. Conclusion: Students' self-perceived interpersonal communication competence could predict their actual performance during clinical placement. Moreover, some specific communication skills were more important to co-worker communication than to daily interaction with service users. There are implications for how to better prepare students to meet future challenges upon graduation.
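A minimal sketch of the correlation-and-regression step on synthetic stand-in data (scipy and statsmodels are assumed; the study used its own dataset of 40 students):

```python
# pip install numpy scipy statsmodels
import numpy as np
from scipy.stats import pearsonr
import statsmodels.api as sm

rng = np.random.default_rng(0)               # synthetic stand-in data
iccs_total = rng.normal(100, 10, 40)         # 40 students, as in the study
spef_r_domain = 0.015 * iccs_total + rng.normal(0, 0.15, 40)

r, p = pearsonr(iccs_total, spef_r_domain)   # bivariate correlation
ols = sm.OLS(spef_r_domain, sm.add_constant(iccs_total)).fit()  # simple regression
print(f"r={r:.2f}, p={p:.3f}, R2={ols.rsquared:.2f}, beta={ols.params[1]:.4f}")
```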

Keywords: interpersonal competence, clinical education, healthcare professional education, occupational therapy, occupational therapy students

Procedia PDF Downloads 72
834 Impact of Urbanization on Natural Drainage Pattern in District of Larkana, Sindh Pakistan

Authors: Sumaira Zafar, Arjumand Zaidi

Abstract:

During the past few years, several floods have adversely affected the areas along the lower Indus River. Besides other climate-related anomalies, rapidly increasing urbanization and the blockage of natural drains due to siltation or encroachment are two other critical causes that may be responsible for these disasters. Due to the flat topography of the Indus River plains and the blockage of natural waterways, drainage of storm water takes time, adversely affecting crop health and the soil properties of the area. The Government of Sindh is taking a keen interest in the revival of the natural drainage network in the province and has initiated this work under the Sindh Irrigation and Drainage Authority. In this paper, geospatial techniques are used to analyze land use/land cover changes in Larkana district over the past three decades (1980-present) and their impact on the natural drainage system. A satellite-derived Digital Elevation Model (DEM) and topographic sheets (recent and 1950) are used to delineate the natural drainage pattern of the district. The urban land use map developed in this study is further overlaid on the drainage line layer to identify the critical areas where natural floodwater flows are being inhibited by urbanization. Rainfall and flow data are utilized to identify areas of heavy flow, whereas satellite data, including Landsat 7 and Google Earth, are used to map previous flood extents and the land use/cover of the study area. Alternatives to natural drainage systems are also suggested wherever possible. The output maps of the natural drainage pattern can be used to develop a decision support system for urban planners, Sindh development authorities, and flood mitigation and management agencies.
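A minimal sketch of the overlay step, assuming GeoPandas and hypothetical shapefile names for the delineated drains and the urban land use layer:

```python
# pip install geopandas
import geopandas as gpd

# Hypothetical inputs: drainage lines delineated from the DEM/topo sheets,
# and urban land use polygons mapped from Landsat 7 / Google Earth.
drains = gpd.read_file("natural_drains.shp")
urban = gpd.read_file("urban_landuse.shp").to_crs(drains.crs)

# Flag drainage segments intersecting built-up areas — candidate blockage sites.
blocked = gpd.sjoin(drains, urban, predicate="intersects", how="inner")
blocked.to_file("critical_drain_segments.shp")
```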

Keywords: geospatial techniques, satellite data, natural drainage, flood, urbanization

Procedia PDF Downloads 508
833 Ancient Cities of Deltaic Bengal: Origin and Nature on the Riverine Bed of Ganges Valley

Authors: Sajid Bin Doza

Abstract:

A town or a city contributes much to humankind. A city evokes memory, ambition, frustration, and achievement. The city is something that offers life, as is the character of the city; a city holds a distinct image for the human being. Time, place, and matter generate this vibe; the city celebrates with its inhabitants, who belong to and care for one another. Apart from all this, although cities and settlements are contentious and changing phenomena, the origin of the city in this very delta land started with unique and strategic sequences. Religious belief, topography, availability of resources, and connection with commercial hubs made the potential of a settlement. The ancient cities of Bengal are no exception to these phenomena. From time immemorial, Bengal has been enriched with numerous cities and notable settlements. These cities and settlements were connected with other inland ports, and Bengal became an important trade route, trailed by riverine connections. The delta landform is valued for its geographic situation; as a consequence of this position, a new story, or a new conception, can be found in the origin of an ancient city. The objective of this research is to understand the origin and spirit of the ancient cities of Bengal. The research also tries to unfold the authentic and rational meaning of the soul of the city, and it addresses the interest in elaborating the soul of the ancient sites of this riverine delta. As rivers share a common character in this very landform, river-supported communities emerged as well. The river gives people wealth and sometimes brings sorrow; it provides commerce and trading; it gives faith and religion. All these potentials have evolved from the riverine setting. The research therefore proceeds thoroughly to justify the riverine value as the soul of the ancient cities of Bengal. Cartographic information and illustration are the preferred language of this research; the historic mapping, in particular, forms the unique folio of this study.

Keywords: memory of the city, riverine network, ancient cities, cartographic mapping, settlement pattern

Procedia PDF Downloads 294
832 Synthesis and Properties of Oxidized Corn Starch Based Wood Adhesive

Authors: Salise Oktay, Nilgun Kizilcan, Basak Bengu

Abstract:

At present, formaldehyde-based adhesives such as urea-formaldehyde (UF), melamine-formaldehyde (MF), melamine-urea-formaldehyde (MUF), etc. are mostly used in the wood-based panel industry because of their high reactivity, chemical versatility, and economic competitiveness. However, formaldehyde-based wood adhesives are produced from non-renewable resources, and formaldehyde is classified as a probable human carcinogen (Group B1) by the U.S. Environmental Protection Agency (EPA). Therefore, there has been a growing interest in the development of environment-friendly, economically competitive, bio-based wood adhesives that meet the wood-based panel industry's requirements. In this study, a formaldehyde-free adhesive, an oxidized starch-urea wood adhesive, was synthesized. In this scope, firstly, acid hydrolysis of corn starch was conducted, and then the acid-thinned corn starch was oxidized using hydrogen peroxide and CuSO₄ as oxidizer and catalyst, respectively. Secondly, the polycondensation reaction between the oxidized starch and urea was conducted. Finally, nano-TiO₂ was added to the reaction system to strengthen the adhesive network. Solid content, viscosity, and gel time analyses of the prepared adhesive were performed to evaluate its processability. FTIR, DSC, TGA, and SEM characterization techniques were used to investigate the chemical structure, thermal properties, and morphology of the adhesive, respectively. Rheological analysis of the adhesive was also performed. In order to evaluate the quality of the oxidized corn starch-urea adhesive, particleboards were produced at laboratory scale, and the mechanical and physical properties of the boards were investigated, such as internal bond strength, modulus of rupture, modulus of elasticity, and formaldehyde emission. The obtained results revealed that the oxidized starch-urea adhesive was synthesized successfully and that, with some further development, it can be a good candidate for use in the wood-based panel industry.

Keywords: nano-TiO₂, corn starch, formaldehyde emission, wood adhesives

Procedia PDF Downloads 151
831 Theoretical Discussion on the Classification of Risks in Supply Chain Management

Authors: Liane Marcia Freitas Silva, Fernando Augusto Silva Marins, Maria Silene Alexandre Leite

Abstract:

The adoption of a network structure, as in supply chains, favors increased dependence between companies and, in consequence, their vulnerability. Environmental disasters, sociopolitical and economic events, and the dynamics of supply chains elevate the uncertainty of their operation, favoring the occurrence of events that can generate interruptions in operations and other undesired consequences. Thus, supply chains are exposed to various risks that can influence the profitability of the companies involved, and several previous studies have proposed risk classification models in order to categorize risks and manage them. The objective of this paper is to analyze and discuss thirty of these risk classification models by means of a theoretical survey. The research method adopted for the analysis and discussion includes three phases: identification of the types of risks proposed in each of the thirty models; grouping of them by equivalent concepts associated with their definitions; and analysis of these risk groups, evaluating their similarities and differences. After these analyses, it was possible to conclude that, in fact, there are more than thirty risk types identified in the supply chain literature, but some of them are identical despite the distinct terms used to characterize them, because researchers adopt different criteria for risk classification. In short, it is observed that some types of risks are identified by their source for supply chains, such as demand risk, environmental risk, and safety risk. On the other hand, other types of risks are identified by the consequences that they can generate for supply chains, such as reputation risk, asset depreciation risk, and competitive risk. These results are a consequence of disagreements among researchers on risk classification, mainly about what a risk event is and what the consequence of a risk occurrence is. An additional study is in development to clarify how risks can be generated and which characteristics of the components in a supply chain lead to the occurrence of risk.

Keywords: risks classification, survey, supply chain management, theoretical discussion

Procedia PDF Downloads 633
830 Roundabout Implementation Analyses Based on Traffic Microsimulation Model

Authors: Sanja Šurdonja, Aleksandra Deluka-Tibljaš, Mirna Klobučar, Irena Ištoka Otković

Abstract:

Roundabouts are a common choice in the reconstruction of an intersection, whether to improve the capacity of the intersection or traffic safety, especially in urban conditions. The regulation for the design of roundabouts is often related to driving culture, the tradition of using this type of intersection, etc. Individual values in the regulation are usually recommended in a wide range (this is the case in Croatian regulation), and the final design of a roundabout largely depends on the designer's experience and choice of design elements. Therefore, before-after analyses are a good way to monitor the performance of roundabouts and possibly improve the recommendations of the regulation. This paper presents a comprehensive before-after analysis of a roundabout on the country road network near Rijeka, Croatia. The analysis is based on a thorough collection of traffic data (operating speeds and traffic load) and design element data, both before and after the reconstruction into a roundabout. At the chosen location, the roundabout solution aimed to improve capacity and traffic safety; the paper therefore analyzes the collected data to see whether the roundabout achieved the expected effect. A traffic microsimulation model (VISSIM) of the roundabout was created based on the collected data, and the influence of increased traffic load, different traffic structures, and the selected design elements on the capacity of the roundabout was analyzed. Also, through the analysis of operating speeds and potential conflicts with the Surrogate Safety Assessment Model (SSAM), the traffic safety effect of the roundabout was analyzed. The results of this research show the practical value of before-after analysis as an indicator of roundabout effectiveness at a specific location. The application of a microsimulation model provides a practical method for analyzing intersection functionality from a capacity and safety perspective under present and changed traffic and design conditions.

Keywords: before-after analysis, operating speed, capacity, design

Procedia PDF Downloads 23
829 Flexible PVC Based Nanocomposites With the Incorporation of Electric and Magnetic Nanofillers for the Shielding Against EMI and Thermal Imaging Signals

Authors: H. M. Fayzan Shakir, Khadija Zubair, Tingkai Zhao

Abstract:

Electromagnetic (EM) waves are used widely nowadays. Cell phone signals, Wi-Fi signals, wireless telecommunications, etc. all use EM waves, which in turn create EM pollution. EM pollution can have serious effects on both human health and nearby electronic devices. EM waves have electric and magnetic components that disturb the flow of charged particles in both the human nervous system and electronic devices. The shielding of both humans and electronic devices is a prime concern today. EM waves can cause headaches, anxiety, suicide and depression, nausea, fatigue, and loss of libido in humans, and malfunctioning in electronic devices. Polyaniline (PANI) and polypyrrole (PPY) were successfully synthesized by chemical polymerization using ammonium persulfate and DBSNa as oxidants, respectively. Barium ferrites (BaFe) were also prepared using the co-precipitation method and calcined at 1050 °C for 8 h. Nanocomposite thin films with various combinations and compositions of polyvinyl chloride (PVC), PANI, PPY, and BaFe were prepared. X-ray diffraction was first used to confirm the successful fabrication of all nanofillers, a particle size analyzer to measure their exact size, and scanning electron microscopy to examine their shape. According to electromagnetic interference (EMI) theory, electrical conductivity is the prime property required for EMI shielding. The four-probe technique was then used to evaluate the DC conductivity of all samples. Samples with high concentrations of PPY and PANI exhibit remarkably increased electrical conductivity due to the formation of an interconnected network structure inside the PVC matrix, which is also confirmed by SEM analysis. Less than 1% transmission was observed over the whole NIR region (700 nm - 2500 nm). Also, an EMI shielding effectiveness below -80 dB was observed in the microwave region (0.1 GHz to 20 GHz).
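A small numeric aside on how the quoted figures relate, assuming shielding effectiveness is expressed as the transmitted-to-incident power ratio in dB:

```python
import math

def transmission_level_db(p_transmitted: float, p_incident: float) -> float:
    """EMI shielding effectiveness expressed as a (negative) transmission level in dB."""
    return 10 * math.log10(p_transmitted / p_incident)

# -80 dB means one part in 10**8 of the incident power is transmitted;
# 1% optical transmission corresponds to -20 dB.
print(transmission_level_db(1e-8, 1.0))   # -80.0
print(transmission_level_db(0.01, 1.0))   # -20.0
```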

Keywords: nanocomposites, polymers, EMI shielding, thermal imaging

Procedia PDF Downloads 106
828 Characterization and Correlation of Neurodegeneration and Biological Markers of Model Mice with Traumatic Brain Injury and Alzheimer's Disease

Authors: J. DeBoard, R. Dietrich, J. Hughes, K. Yurko, G. Harms

Abstract:

Alzheimer’s disease (AD) is a predominant type of dementia and is likely a major cause of neural network impairment. The pathogenesis of this neurodegenerative disorder has yet to be fully elucidated. There are currently no known cures for the disease, and the best hope is to detect it early enough to impede its progress. Beyond age and genetics, another prevalent risk factor for AD might be traumatic brain injury (TBI), which has similar neurodegenerative hallmarks. Our research focuses on obtaining information and methods to be able to predict when neurodegenerative effects might occur at a clinical level by observation of events at a cellular and molecular level in model mice. First, we wish to introduce our evidence that brain damage can be observed via brain imaging prior to the noticeable loss of neuromuscular control in model mice of AD. We then show our evidence that some blood biomarkers might be early predictors of AD in the same model mice. Thus, we were interested to see whether we might be able to predict which mice would show long-term neurodegenerative effects due to differing degrees of TBI, and what level of TBI causes further damage and earlier death in the AD model mice. Upon application of TBIs via an apparatus designed to induce extremely mild to mild TBIs, wild-type (WT) mice and AD mouse models were tested for cognition, neuromuscular control, olfactory ability, blood biomarkers, and brain imaging. Experiments are currently still in progress, and more results are therefore forthcoming. Preliminary data suggest that neuromotor control diminishes, as does olfactory function, for both AD and WT mice after the administration of five consecutive mild TBIs. Also, seizure activity increases significantly for both AD and WT mice after the five-TBI treatment. If future data support these findings, important implications about the effect of TBI on those at risk for AD might be possible.

Keywords: Alzheimer's disease, blood biomarker, neurodegeneration, neuromuscular control, olfaction, traumatic brain injury

Procedia PDF Downloads 141
827 A Critical Reflection of Ableist Methodologies: Approaching Interviews and Go-Along Interviews

Authors: Hana Porkertová, Pavel Doboš

Abstract:

Based on a research project studying the experience of visually disabled people with urban space in the Czech Republic, this conference contribution discusses the limits of social-science methodologies used in sociology and human geography. It draws on actor-network theory, assuming that science does not describe reality but produces it. Methodology connects theory, research questions, ways to answer them (methods), and results. A research design utilizing ableist methodologies can produce ableist realities. Therefore, it was necessary to adjust the methods so that they could mediate blind experience to the scientific community without reproducing ableism. The researchers faced multiple challenges, ranging from questionable validity to the question of how to research an experience that differs from that of the able-bodied researchers. Finding a suitable theory that could be used as an analytical tool to demonstrate space and blind experience as multiple, dynamic, and mutually constructed was the first step; it offered a range of potentially productive methods and research questions and brought critically reflected results. Poststructural theory, mainly Deleuze-Guattarian philosophy, was chosen, and two methods were used: interviews and go-along interviews, both adjusted to make them able to explore blind experience. In spite of thorough preparation of these methods, new difficulties kept emerging, which exposed the ableist character of scientific knowledge. From the beginning of data collection, there was an agreement to work in teams, with slightly different roles for each researcher, which was especially significant during the go-along interviews. In some cases, the anticipations of the researchers and participants differed, which led to unexpected and potentially dangerous situations. These were caused not only by the differences between scientific and lay communities but also by those between able-bodied and disabled people. Researchers were sometimes assigned to assistants' roles, and this new position - doing research together - required further negotiations, which also opened various ethical questions.

Keywords: ableist methodology, blind experience, go-along interviews, research ethics, scientific knowledge

Procedia PDF Downloads 165
826 Leadership in the Era of AI: Growing Organizational Intelligence

Authors: Mark Salisbury

Abstract:

The arrival of artificially intelligent avatars and the automation they bring is worrying many of us, not only for our own livelihoods but for the jobs that may be lost to our kids. We worry about what our place will be as human beings in this new economy, where much of it will be conducted online in the metaverse - in a network of 3D virtual worlds - working with intelligent machines. The Future of Leadership was written to address these fears and show what our place will be - the right place - in this new economy of AI avatars, automation, and 3D virtual worlds. To be successful in this new economy, our job will be to bring wisdom to our workplace and the marketplace, and we will use AI avatars and 3D virtual worlds to do it. However, this book is about more than AI and the avatars that we will work with in the metaverse. It is about building organizational intelligence (OI) - the capability of an organization to comprehend and create knowledge relevant to its purpose; in other words, the intellectual capacity of the entire organization. Increasing organizational intelligence requires a new kind of knowledge worker, a wisdom worker, which in turn requires a new kind of leadership. This book begins your story of how to become a leader of wisdom workers and be successful in the emerging wisdom economy. After this presentation, conference participants will be able to do the following: recognize the characteristics of the new generation of wisdom workers and how they differ from their predecessors; recognize that new leadership methods and techniques are needed to lead this new generation of wisdom workers; apply personal and professional values - personal integrity, belief in something larger than yourself, and keeping the best interest of others in mind - to improve work performance and lead others; exhibit an attitude of confidence, courage, and reciprocity in sharing knowledge to increase productivity and influence others; leverage artificial intelligence to accelerate learning, augment decision-making, and influence others; and utilize new technologies to communicate with human colleagues and intelligent machines to develop better solutions more quickly.

Keywords: metaverse, generative artificial intelligence, automation, leadership, organizational intelligence, wisdom worker

Procedia PDF Downloads 44
825 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study deals with using Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. In this study, the multiplicative weight update method is used to combine the predictions of multiple models in an attempt to acquire more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to look for patterns that label unseen scans as either benign or malignant. The models are utilized in a multiplicative weight update algorithm which takes into account the precision and accuracy of each model on each successive guess to assign weights to its predictions. These guesses and weights are then analyzed together to try to obtain the correct predictions. The research hypothesis for this study stated that there would be a significant difference between the accuracy of the three individual models and that of the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%. Using Multiplicative Weight Update, the algorithm achieved an accuracy of 72.27%. The conclusion drawn was that there was indeed a significant difference between the accuracy of the three models and that of the Multiplicative Weight Update system, and that a CNN model would be the better option for this problem rather than a Multiplicative Weight Update system. This may be because Multiplicative Weight Update is not effective in a binary setting where there are only two possible classifications. In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
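A minimal sketch of a weighted-majority multiplicative weights update over the three classifiers' 0/1 predictions (the learning rate eta and the 0/1 encoding are illustrative; the abstract does not state its exact update rule):

```python
import numpy as np

def mwu_ensemble(predictions: np.ndarray, labels: np.ndarray, eta: float = 0.5):
    """Multiplicative weights over expert models.

    predictions: (n_models, n_samples) array of 0/1 guesses
                 (e.g., logistic regression, CNN, SVMC rows).
    labels:      (n_samples,) array of true 0/1 labels.
    """
    weights = np.ones(predictions.shape[0])
    combined = []
    for t, y in enumerate(labels):
        # Weighted majority vote of the current experts.
        vote = np.average(predictions[:, t], weights=weights)
        combined.append(int(vote >= 0.5))
        # Multiplicatively penalize every model that guessed wrong on this sample.
        weights *= np.where(predictions[:, t] == y, 1.0, 1.0 - eta)
    return np.array(combined), weights
```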

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 79
824 Developing a Spatial Transport Model to Determine Optimal Routes When Delivering Unprocessed Milk

Authors: Sunday Nanosi Ndovi, Patrick Albert Chikumba

Abstract:

In Malawi, smallholder dairy farmers transport unprocessed milk to sell at Milk Bulking Groups (MBGs). MBGs store and chill the milk while awaiting collection by processors. The farmers deliver milk using various modes of transportation, such as foot, bicycle, and motorcycle. As a perishable food, milk requires timely transportation to avoid deterioration. In some instances, farmers bypass the nearest MBG for facilities located further away. Untimely delivery worsens quality and results in rejection at the MBG; these rejections, in turn, lead to revenue losses for dairy farmers. Therefore, the objective of this study was to optimize milk transportation routes by selecting the shortest route, using time as the cost attribute, in Geographic Information Systems (GIS). A spatially organized transport system impedes milk deterioration while promoting profitability for dairy farmers. A transportation system was modeled using the Route Analysis and Closest Facility network extensions. The final output was to find the quickest routes and identify the milk facilities nearest to incident points. Face-to-face interviews targeted leaders from all 48 MBGs in the study area and 50 farmers from Namahoya MBG. During the field interviews, coordinates were captured in order to create maps, which subsequently supported the selection of optimal routes based on the least travel times. The questionnaire targeted 200 respondents, of whom 182 were available. Findings showed that of the 50 sampled farmers who supplied milk to Namahoya, only 8% were nearest to that facility, while 92% were closer to 9 other MBGs. Delivering milk to the nearest MBGs would reduce travel time and distance by 14.67 hours and 73.37 km, respectively.
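A minimal open-source analogue of the Route Analysis / Closest Facility workflow, using NetworkX on a toy road graph whose edge weights are travel times in minutes (node names and weights are hypothetical):

```python
# pip install networkx
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("farm_A", "junction_1", 12.0),
    ("junction_1", "MBG_Namahoya", 25.0),
    ("junction_1", "MBG_other", 6.0),
], weight="time")

# Quickest route to each facility, then the closest facility by travel time.
times = {mbg: nx.shortest_path_length(G, "farm_A", mbg, weight="time")
         for mbg in ("MBG_Namahoya", "MBG_other")}
closest = min(times, key=times.get)
route = nx.shortest_path(G, "farm_A", closest, weight="time")
print(closest, route, times[closest])
```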

Keywords: closest facility, milk, route analysis, spatial transport

Procedia PDF Downloads 58
823 Digital Media Use and Access among Rural Youth in South Africa: The Prospects for Female Empowerment

Authors: Fulufhelo Oscar Makananise

Abstract:

Digital technologies have played a significant role in bridging the information gap between the haves and the have-nots in society. In developing countries such as South Africa, historically marginalised groups, such as women in rural communities, have an opportunity to use digital technologies to network among themselves as well as interact with their government, thereby enhancing the prospects for poverty eradication, political participation, community development, and democracy. However, the extent to which these goals can be achieved in a developing context through harnessing digital technologies is not quite clear, particularly given that access to these technologies is not evenly distributed and that women's access is hampered by factors that go beyond the question of infrastructure. Informed by technological dependency theory, this paper examines how female youth in rural South Africa are deploying digital media tools for socio-economic empowerment. In particular, the study investigated the extent to which female youth in Limpopo province, South Africa, access and use digital media platforms and gadgets, and the extent to which those technologies are breaking down barriers that stand in the way of female youth empowerment. Data were gathered using a self-administered questionnaire disseminated to 100 selected female youth in Limpopo Province, South Africa, and were analysed using descriptive statistics in SPSS version 9. The paper argues that wider and constant access to digital media by female youth in rural areas is indicative of the great potential for their empowerment through harnessing digital media. The study established that the majority of female youth had access to digital media technologies and used them to share valuable information among themselves. The study further established that female youth are active users of digital media in South Africa, which is a significant driver of socio-economic empowerment.

Keywords: digital technologies, empowerment, female youth, South Africa, survey, technological dependency

Procedia PDF Downloads 132
822 Vehicle Activity Characterization Approach to Quantify On-Road Mobile Source Emissions

Authors: Hatem Abou-Senna, Essam Radwan

Abstract:

Transportation agencies and researchers have in the past estimated emissions using one average speed and volume on a long stretch of roadway. Other methods provided better accuracy by utilizing annual average estimates. Travel demand models provided an intermediate level of detail through average daily volumes. Currently, higher accuracy can be achieved through microscopic analyses, by splitting the network links into sub-links and utilizing second-by-second trajectories to calculate emissions. The need to accurately quantify transportation-related emissions from vehicles is essential. This paper presents an examination of four different approaches to capturing the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited-access highway in Orlando, Florida. First, at the most basic level, emissions were estimated for the entire 10-mile section 'by hand' using one average traffic volume and average speed. Then, three more detailed levels were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NOx, PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach.
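Operating modes in MOVES are binned by vehicle specific power (VSP); below is a minimal sketch of the second-by-second VSP computation such binning rests on, using the commonly cited light-duty coefficients (MOVES applies source-type-specific values):

```python
def vsp_light_duty(v: float, a: float, grade: float = 0.0) -> float:
    """Vehicle specific power (kW/tonne) for a light-duty vehicle.

    v: speed (m/s); a: acceleration (m/s^2); grade: road grade (fraction).
    Coefficients follow the commonly cited light-duty formulation.
    """
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

# A second-by-second trajectory lets each instant be binned into an operating mode.
trajectory = [(10.0, 0.5), (12.0, 1.0), (13.0, -0.3)]  # (speed, accel) samples
vsp_series = [vsp_light_duty(v, a) for v, a in trajectory]
print([round(x, 2) for x in vsp_series])
```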

Keywords: limited access highways, MOVES, operating mode distribution (OPMODE), transportation emissions, vehicle specific power (VSP)

Procedia PDF Downloads 339
821 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR DATA

Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell

Abstract:

Hedgerows play an important role in a wide range of ecological habitats, landscape, agricultural management, carbon sequestration, and wood production. Accurate hedgerow detection using satellite imagery is a challenging problem in remote sensing because, from a spatial viewpoint, a hedgerow is very similar to a line object such as a road, and, from a spectral viewpoint, a hedge is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently enabled the automatic detection of hedges through the acquisition of images with sufficient spectral and spatial resolution. Indeed, VHR remote sensing data have recently provided the opportunity to detect hedgerows as line features, but difficulties remain in monitoring their characterization at the landscape scale. This research uses TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015, over a test site in Fermoy, Ireland, to detect hedgerows. Dual-polarization (HH/VV) Spotlight data are used for the detection. Various SAR image techniques, integrated by trial and error with classification algorithms such as texture analysis, support vector machines, k-means, and random forests, are used to detect hedgerows and characterize them. Shannon entropy (ShE) and backscattering analysis of single and double bounce are applied within the polarimetric analysis to drive the object-oriented classification and finally extract the hedgerow network. The work is still in progress, and further methods need to be applied to find the best approach for the study area. The preliminary work presented here indicates that polarimetric TerraSAR-X imagery can potentially detect hedgerows.
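A minimal sketch of the supervised-classification step on synthetic stand-in features (scikit-learn's random forest; real inputs would be per-pixel HH/VV backscatter, texture measures, and Shannon entropy with field-verified labels):

```python
# pip install numpy scikit-learn
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Hypothetical per-pixel features: HH backscatter, VV backscatter, Shannon entropy.
X = rng.normal(size=(1000, 3))
y = rng.integers(0, 2, size=1000)  # 1 = hedgerow, 0 = other (stand-in labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
hedgerow_probability = clf.predict_proba(X)[:, 1]  # map this back onto the scene
```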

Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis

Procedia PDF Downloads 230
820 Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia

Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman

Abstract:

Through progress in pavement design, a design method titled the Mechanistic-Empirical Pavement Design Guide (MEPDG) was developed. Nowadays, the road and highway network in Saudi Arabia is evolving as a result of increasing traffic volume. Therefore, the MEPDG is currently implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementation of the MEPDG for local pavement design requires the calibration of distress models under local conditions (traffic, climate, and materials). This paper aims to prepare the data for calibration of the MEPDG in central Saudi Arabia. Thus, the first goal is data collection for the design of flexible pavement from the local conditions of the Riyadh region. Since the collected data must be converted into model inputs, the main goal of this paper is the analysis of the collected data. The data analysis in this paper includes the processing of each of the following: truck classification, traffic growth factor, Annual Average Daily Truck Traffic (AADTT), Monthly Adjustment Factors (MAFi), Vehicle Class Distribution (VCD), truck hourly distribution factors, Axle Load Distribution Factors (ALDF), the number of axle types (single, tandem, and tridem) per truck class, cloud cover percent, and the road sections selected for the local calibration. Detailed descriptions of the input parameters are given in this paper, providing an approach for the successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on the findings in this paper.
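A minimal sketch of two of the traffic inputs, assuming the MEPDG convention that the twelve monthly adjustment factors are scaled to sum to 12 and that traffic grows at a compound annual rate (the counts and rate below are hypothetical):

```python
import numpy as np

monthly_truck_counts = np.array([  # hypothetical mean daily trucks per month
    910, 880, 950, 1000, 1040, 990, 960, 1010, 1030, 1070, 1020, 980])

aadtt = monthly_truck_counts.mean()
# Monthly Adjustment Factors: each month relative to the annual mean,
# scaled so the twelve factors sum to 12.
maf = 12 * monthly_truck_counts / monthly_truck_counts.sum()

def growth_factor(rate: float, years: int) -> float:
    """Compound traffic growth factor over a design period."""
    return (1 + rate) ** years

print(round(aadtt, 1), maf.round(3), round(growth_factor(0.04, 20), 2))
```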

Keywords: mechanistic-empirical pavement design guide (MEPDG), traffic characteristics, materials properties, climate, Riyadh

Procedia PDF Downloads 226
819 Optimization of Platinum Utilization by Using Stochastic Modeling of Carbon-Supported Platinum Catalyst Layer of Proton Exchange Membrane Fuel Cells

Authors: Ali Akbar, Seungho Shin, Sukkee Um

Abstract:

The composition of catalyst layers (CLs) plays an important role in the overall performance and cost of proton exchange membrane fuel cells (PEMFCs). Low platinum loading, high utilization, and a more durable catalyst still remain critical challenges for PEMFCs. In this study, a three-dimensional material network model is developed to visualize the nanostructure of carbon-supported platinum (Pt/C) and Pt/VACNT catalysts in pursuit of maximizing catalyst utilization. The quadruple-phase, randomly generated CL domain is formulated using a quasi-random, stochastic Monte Carlo-based method. This unique statistical four-phase (i.e., pore, ionomer, carbon, and platinum) model closely mimics the manufacturing process of CLs. Various CL compositions are simulated to elucidate the effect of electron, ion, and mass transport paths on the catalyst utilization factor. Based on the simulation results, the effect of key factors such as porosity, ionomer content, and Pt weight percentage in the Pt/C catalyst has been investigated at the representative elementary volume (REV) scale. The results show that the relationship between ionomer content and Pt utilization is in good agreement with existing experimental calculations. Furthermore, the model is applied to state-of-the-art Pt/VACNT CLs. The simulation results on Pt/VACNT-based CLs show exceptionally high catalyst utilization compared to Pt/C at different composition ratios. More importantly, this study reveals that the maximum catalyst utilization depends on the spacing between the carbon nanotubes for Pt/VACNT. The current simulation results are expected to be utilized in the optimization of the nanostructural construction and composition of Pt/C and Pt/VACNT CLs.
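A minimal sketch of the stochastic four-phase generation and a crude utilization check (a Pt voxel is counted as utilized only if its neighborhood touches both pore and ionomer; the volume fractions are illustrative, and a full model would also trace percolating electron paths through carbon):

```python
import numpy as np

rng = np.random.default_rng(7)
# Phase codes and target volume fractions (illustrative):
# 0 = pore, 1 = ionomer, 2 = carbon, 3 = platinum.
fractions = [0.45, 0.25, 0.27, 0.03]
rev = rng.choice(4, size=(64, 64, 64), p=fractions)  # representative elementary volume

def utilized_pt_fraction(grid: np.ndarray) -> float:
    """Fraction of Pt voxels adjacent to both a pore and an ionomer voxel."""
    hits, pt_voxels = 0, np.argwhere(grid == 3)
    for x, y, z in pt_voxels:
        nbrs = grid[max(x-1, 0):x+2, max(y-1, 0):y+2, max(z-1, 0):z+2]
        if (nbrs == 0).any() and (nbrs == 1).any():
            hits += 1
    return hits / max(len(pt_voxels), 1)

print(utilized_pt_fraction(rev))
```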

Keywords: catalyst layer, platinum utilization, proton exchange membrane fuel cell, stochastic modeling

Procedia PDF Downloads 121
818 Enriched Education: The Classroom as a Learning Network through Video Game Narrative Development

Authors: Wayne DeFehr

Abstract:

This study is rooted in a pedagogical approach that emphasizes student engagement as fundamental to meaningful learning in the classroom. This approach creates a paradigmatic shift from a teaching practice that reinforces the teacher's central authority to a practice that disperses that authority among the students through networks that they themselves develop. The methodology of this study on creating optimal conditions for learning includes providing a conceptual framework within which the students work, as well as clearly stated expectations for work standards, content quality, group methodology, and learning outcomes. These learning conditions are nurtured in a variety of ways. First, nearly every class includes a lecture from the professor covering the key concepts students need in order to complete their work successfully. Secondly, students build on this scholarly material by forming their own networks, in which they face and engage with each other in order to collaborate on solving a particular problem related to the course content. Thirdly, students are given short-, medium-, and long-term goals. Short-term goals relate to the week's topic and involve workshopping particular issues at that stage of the course. Medium-term goals involve students submitting term assignments that are evaluated according to a well-defined rubric. And finally, long-term goals are achieved by creating a capstone project, which is celebrated and shared with classmates and interested friends on the final day of the course. The essential conclusions of the study are drawn from courses that focus on video game narrative. Enthusiastic student engagement is created not only by the dynamic energy and expertise of the instructor, but also by the interdependence of the students on each other to build knowledge, acquire skills, and achieve successful results.

Keywords: collaboration, education, learning networks, video games

Procedia PDF Downloads 116
817 Findings from an Access Improvement Project for Antiretroviral Therapy Uptake through Traditional Birth Attendants at Mother Theresa Hospital, Lagos, Nigeria

Authors: Daniel Afolayan, Christina Olawepo, Francis Olowookanga, Nguhemen Tingir, Olawale Fadare, John Oko

Abstract:

In Nigeria, traditional birth attendants (TBAs) can play an important role in the prevention of mother-to-child transmission of HIV. However, their role in improving access to antiretroviral therapy (ART) is unclear. The Catholic Caritas Foundation of Nigeria (Caritas Nigeria) is an implementing agency supporting increased access to HIV testing and treatment services in Lagos State through health facilities, including Mother Theresa Hospital. Despite intra-facility testing and community outreaches, ART uptake at Mother Theresa Hospital, Lagos, was low, with 6 individuals on antiretroviral drugs 3 months post-activation. This study explored improving access to ART through linkages with TBAs for ART uptake at the facility. The Plan-Do-Study-Act model was used, with the goal of improving ART uptake from 6 to 80 individuals in 5 months (the end of the project year). Scanning revealed a network of 15 TBAs with potential as satellites for HIV testing. Caritas Nigeria linked the facility with these 15 TBAs, who were provided with HIV test kits and trained on HIV testing services for provider-initiated testing and outreaches. Weekly reports and referrals of positives were received and tracked, and feedback was given on testing yield. These TBAs serve individuals of various ages and genders at their trado-medical centres. At the end of 5 months, HIV testing had increased by 10,575 (78% from TBAs), and the number of HIV positives identified had increased by 77 (44.2% from TBAs). Fifty-five new individuals were enrolled and commenced on ART (61.8% from TBAs). All clients were successfully linked through incentivized escort services. Total ART uptake was 61 (76.3% of the target). Structured partnerships between TBAs and HIV care and treatment centers should be strengthened to improve access to ART.

Keywords: access improvement, antiretroviral therapy, traditional birth attendants, uptake

Procedia PDF Downloads 460
816 Human Factors Considerations in New Generation Fighter Planes to Enhance Combat Effectiveness

Authors: Chitra Rajagopal, Indra Deo Kumar, Ruchi Joshi, Binoy Bhargavan

Abstract:

The role of fighter planes in modern network-centric military warfare scenarios has changed significantly in the recent past. New-generation fighter planes have the multirole capability of engaging both air and ground targets with high precision. A multirole aircraft undertakes missions such as air-to-air combat, air defense, air-to-surface roles (including air interdiction, close air support, maritime attack, and suppression and destruction of enemy air defense), reconnaissance, and electronic warfare missions. Designers have primarily focused on the development of technologies to enhance the combat performance of fighter planes, and very little attention has been given to the human factor aspects of those technologies. Unique physical and psychological challenges are imposed on the pilots to meet operational requirements during these missions. Newly evolved technologies have enhanced aircraft performance in terms of speed, firepower, stealth, electronic warfare, situational awareness, and vulnerability reduction capabilities. This paper highlights the impact of emerging technologies on human factors in various military operations and missions. Technologies such as cooperative knowledge-based systems, which aid pilots' decision-making in military conflict scenarios, as well as simulation technologies to enhance human performance, are also studied as part of this research. Current and emerging pilot protection technologies and systems, which form part of the integrated life support systems in new-generation fighter planes, are discussed. The application of system safety analysis to quantify human reliability in military operations is also studied.

Keywords: combat effectiveness, emerging technologies, human factors, systems safety analysis

Procedia PDF Downloads 142
815 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)

Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula

Abstract:

This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is a versatile method that can handle continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure and select discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated, comprising a variety of topology, material, and dimensional alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function, covering the material and labor costs of a structure, is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex, and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), thereby gradually refining the solution space towards the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology, materials, and standard dimensions are determined. For a convex problem, the optimization is stopped when the MILP solution can no longer improve on the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like other local MINLP algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as the mass optimization of steel buildings and the cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
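As a rough illustration of the NLP/MILP alternation described above, the sketch below sizes a toy composite-floor-style problem: a standard steel section (discrete choice) and a member spacing (continuous variable) are selected to minimize steel mass per unit floor area under a bending-stress constraint. The MILP master is replaced here by plain enumeration over a small catalogue, and the catalogue, load, span, and allowable stress are assumed values, not data from the paper.

```python
# Minimal sketch of the NLP/MILP alternation (MILP master replaced by
# enumeration over a small catalogue for illustration; a real tool such
# as MIPSYN solves a linearized mixed-integer master instead).
from math import inf
from scipy.optimize import minimize

# Standard sections: (cross-section area [m^2], section modulus W [m^3])
CATALOGUE = [(1.6e-3, 8.9e-5), (2.1e-3, 1.2e-4), (2.8e-3, 1.9e-4)]
Q, SPAN, SIGMA = 2.5e3, 6.0, 235e6   # floor load [N/m^2], span [m], f_y [Pa]
RHO = 7850.0                         # steel density [kg/m^3]

def nlp_subproblem(area, W):
    """Continuous optimization with the discrete section fixed (the NLP
    step): choose the member spacing that minimizes steel mass per unit
    floor area, subject to the bending-stress constraint."""
    mass = lambda x: RHO * area / x[0]                         # kg/m^2
    margin = lambda x: SIGMA - Q * x[0] * SPAN**2 / (8.0 * W)  # stress check
    res = minimize(mass, x0=[1.0], bounds=[(0.5, 5.0)],
                   constraints=[{"type": "ineq", "fun": margin}])
    return (res.fun, res.x[0]) if res.success else (inf, None)

best = (inf, None, None)
for area, W in CATALOGUE:        # stand-in for the MILP master problem
    cost, spacing = nlp_subproblem(area, W)
    if cost < best[0]:           # keep the incumbent, as OA/ER does
        best = (cost, (area, W), spacing)

print(f"mass {best[0]:.2f} kg/m^2, section {best[1]}, spacing {best[2]:.2f} m")
```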

Keywords: MINLP, mixed-integer non-linear programming, optimization, structures

Procedia PDF Downloads 46
814 Computational Aided Approach for Strut and Tie Model for Non-Flexural Elements

Authors: Mihaja Razafimbelo, Guillaume Herve-Secourgeon, Fabrice Gatuingt, Marina Bottoni, Tulio Honorio-De-Faria

Abstract:

The challenge of the research is to provide engineers with a robust, semi-automatic method for calculating optimal reinforcement for massive structural elements. In the absence of such a digital post-processing tool, design office engineers make intensive use of plate modelling, for which automatic post-processing is available. Plate models of massive areas, on the other hand, produce conservative results. In addition, the theoretical foundations of automatic post-processing tools for reinforcement are those of reinforced concrete beam sections. As long as there is no suitable alternative to automatic post-processing of plates, optimal modelling and a significant improvement in the constructability of massive areas cannot be expected. The strut-and-tie method is commonly used in civil engineering, but its results remain highly dependent on the judgement of the calculation engineer. The tool developed here will support engineers in their choice of structure. The implemented method consists of defining a ground structure built from the principal stresses resulting from an elastic analysis of the structure, and then optimizing this structure according to the fully stressed design method, as sketched below. The first results yield a coherent initial network of connecting struts and ties, consistent with the cases encountered in the literature. The tool will then be extended to adapt the obtained latticework to the cracking states resulting from the loads applied during the life of the structure, including cyclic and dynamic loads. In addition, to satisfy the constructability constraint, a final reinforcement layout with an orthogonal arrangement and regulated spacing will be implemented in the tool.
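A minimal sketch of the fully stressed design iteration follows, under strong simplifying assumptions: the elastic analysis is reduced to a toy model of three parallel bars of different lengths sharing a load in proportion to their axial stiffness, whereas the actual tool derives member forces from a finite-element analysis of the ground structure. All numbers are illustrative placeholders.

```python
# Fully stressed design (FSD) iteration on a toy three-bar model.
# Forces distribute in proportion to axial stiffness A_i/L_i; a real
# implementation would obtain them from an FE analysis of the ground
# structure. All values are placeholders.
P, SIGMA_ALLOW = 500e3, 200e6        # applied load [N], allowable stress [Pa]
lengths = [1.0, 1.5, 2.0]            # bar lengths [m]
areas = [1e-3, 1e-3, 1e-3]           # initial cross-sections [m^2]

for _ in range(50):
    stiff = [a / l for a, l in zip(areas, lengths)]
    forces = [P * k / sum(stiff) for k in stiff]          # elastic force share
    stresses = [f / a for f, a in zip(forces, areas)]
    # FSD update: rescale each area so its member becomes fully stressed
    areas = [a * s / SIGMA_ALLOW for a, s in zip(areas, stresses)]

# Members whose areas shrink toward zero drop out of the ground structure,
# leaving the strut-and-tie layout.
kept = [i for i, a in enumerate(areas) if a > 1e-8]
print("surviving members:", kept,
      "areas [cm^2]:", [round(a * 1e4, 2) for a in areas])
```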

Keywords: strut and tie, optimization, reinforcement, massive structure

Procedia PDF Downloads 141
813 Transcriptome Analysis Reveals Role of Long Non-Coding RNA NEAT1 in Dengue Patients

Authors: Abhaydeep Pandey, Shweta Shukla, Saptamita Goswami, Bhaswati Bandyopadhyay, Vishnampettai Ramachandran, Sudhanshu Vrati, Arup Banerjee

Abstract:

Background: Long non-coding RNAs (lncRNAs) are important regulators of gene expression and play an important role in viral replication and disease progression. The role of lncRNAs in dengue virus-mediated pathogenesis is currently unknown. Methods: To gain additional insights, we utilized an unbiased RNA sequencing approach followed by in silico analysis to identify differentially expressed lncRNAs and genes associated with dengue disease progression. We focused our study on the lncRNA NEAT1 (Nuclear Paraspeckle Assembly Transcript 1), as it was found to be differentially expressed in PBMCs of dengue-infected patients. Results: Compared with dengue infection (DI), NEAT1 expression was significantly down-regulated as patients developed complications. Moreover, pairwise analysis of follow-up patients confirmed that suppression of NEAT1 expression was associated with a rapid fall in platelet count in dengue-infected patients. Severe dengue patients (DS) (n=18; platelet count < 20K), once recovered from infection, showed high NEAT1 expression, as observed in healthy donors. By co-expression network analysis and subsequent validation, we revealed that expression of the coding gene IFI27 was significantly up-regulated in severe dengue cases and negatively correlated with NEAT1 expression. To discriminate DI from severe dengue, a receiver operating characteristic (ROC) curve was calculated. It revealed a sensitivity and specificity of 100% (95% CI: 85.69–97.22) and an area under the curve (AUC) of 0.97 for NEAT1. Conclusions: Altogether, our observations demonstrate that monitoring NEAT1 and IFI27 expression in dengue patients could be useful in understanding dengue virus-induced disease progression, and that these genes may be involved in the underlying pathophysiological processes.
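For readers unfamiliar with the ROC computation, the following is a sketch of such an analysis using scikit-learn on synthetic NEAT1 expression values; the numbers generated are placeholders, not the study data.

```python
# Sketch of a ROC analysis discriminating dengue infection (DI) from
# severe dengue (DS) by NEAT1 expression; synthetic data, not study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
labels = np.array([1] * 18 + [0] * 30)               # 1 = DS, 0 = DI
neat1 = np.concatenate([rng.normal(0.8, 0.3, 18),    # suppressed in DS
                        rng.normal(2.5, 0.6, 30)])   # higher in DI

scores = -neat1                     # lower NEAT1 indicates severity
fpr, tpr, thresholds = roc_curve(labels, scores)
auc = roc_auc_score(labels, scores)
best = np.argmax(tpr - fpr)         # Youden-index optimal cut-off
print(f"AUC = {auc:.2f}, sensitivity = {tpr[best]:.2f}, "
      f"specificity = {1 - fpr[best]:.2f}")
```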

Keywords: dengue, lncRNA, NEAT1, transcriptome

Procedia PDF Downloads 310
812 Examining the Drivers of Engagement in Social Media Brand Communities

Authors: Rania S. Hussein

Abstract:

This research focuses mainly on examining engagement in social media brand communities. Engagement in social media has become a main focus in the literature, affirming that the role of social media in our daily lives is growing (Akman and Mishra, 2017; Prado-Gascó et al., 2017). Social media has also become a key medium for brand communication and relationship building (Frimpong and McLean, 2018; Dimitriu and Guesalaga, 2017). Engagement on social media has become a main focus of many researchers, who have tried to understand the concept further and draw links between engagement and various social media activities (Cvijikj and Michahelles, 2013; Andre, 2015; Wang et al., 2015). According to Felix et al. (2017), the internet and social media have provided better digital resources to improve brand loyalty and customer interactions, thus leading to social media engagement within brand communities. The aim of this research is to highlight the importance of social media and why it is important to maintain engagement within it. While the term ‘engagement’ is widely used in the scholarly literature, there is no common consensus about what it exactly entails (Kidd, 2011). On one hand, it has been seen as something that includes factors such as participation, activation, empowerment, devotion, trust, and productivity (Zhang and Benyoucef, 2016). Other scholars hold different viewpoints. For example, Lim et al. (2015) chose to break engagement down into three types: operational engagement, emotional engagement, and relational engagement. Chandler and Lusch (2015) further studied engagement as a means of measuring commitment to a brand. Fernandes and Remelhe (2016) took a more technical view, measuring engagement through comments, following, subscribing, sharing, enjoying, writing, etc., in the social media context. Customer engagement has become a research focus for understanding how consumer relationships are developed, retained, and improved within a digital context. Based on previous literature, it is evident that many customer engagement studies are limited to the interaction between firms and consumers on social media. There is a clear gap in the literature regarding consumer-to-consumer interaction and the significance of user-generated content. While some researchers, such as Alversia et al. (2016), have touched upon the importance of customer-based engagement, a gap remains: there is no consistent, well-tested method for defining the factors that affect consumer interaction. Moreover, few scholarly papers (e.g., Case, 2019; Riley, 2020; Habibi, 2014) help businesses understand their customers' interaction habits and the best ways to develop customer loyalty. Additionally, the majority of research on brand pages has concentrated on the drivers of consumer engagement, with only a few studies (e.g., Lamberton, 2016; Poorrezaei, 2016; Jayasingh, 2019) looking into its implications. This study focuses on understanding the concept of engagement and its importance, specifically within social media brand communities. It examines drivers as well as consequences of engagement, including brand knowledge, brand trust, entertainment, and brand page interactivity. Brand engagement is also expected to affect brand loyalty and word of mouth.

Keywords: engagement, social media, brand communities, drivers

Procedia PDF Downloads 160
811 Linguistic Analysis of Borderline Personality Disorder: Using Language to Predict Maladaptive Thoughts and Behaviours

Authors: Charlotte Entwistle, Ryan Boyd

Abstract:

Recent developments in information retrieval techniques and natural language processing have allowed for greater exploration of psychological and social processes. Linguistic analysis methods for understanding behaviour have provided useful insights within the field of mental health. One area within mental health that has received little attention, though, is borderline personality disorder (BPD). BPD is a common mental health disorder characterised by instability of interpersonal relationships, self-image, and affect. It also manifests through maladaptive behaviours, such as impulsivity and self-harm. Examining the language patterns associated with BPD could allow for a greater understanding of the disorder and its links to maladaptive thoughts and behaviours. Language analysis methods could also be used predictively, for example by identifying indicators of BPD or predicting maladaptive thoughts, emotions, and behaviours. Additionally, associations uncovered between language and maladaptive thoughts and behaviours could then be applied at a more general level. This study explores the linguistic characteristics of BPD, and their links to maladaptive thoughts and behaviours, through the analysis of social media data. Data were collected from a large corpus of posts from the publicly available social media platform Reddit, namely from the ‘r/BPD’ subreddit, where people self-identify as having BPD. Data were collected using the Python Reddit API Wrapper and included all users who had posted within the BPD subreddit. All posts were manually inspected to exclude those clearly written by someone without BPD, such as people posting about a loved one with BPD. These users were then tracked across all other subreddits in which they had posted, and data from those subreddits were also collected. Additionally, data were collected from a random control group of Reddit users. Disorder-relevant behaviours outlined within the Reddit posts, such as self-harming or aggression-related behaviours, were coded by expert raters. All posts and comments were aggregated by user and split by subreddit. Language data were then analysed using the Linguistic Inquiry and Word Count (LIWC) 2015 software. LIWC is a text analysis program that identifies and categorises words based on linguistic and paralinguistic dimensions, psychological constructs, and personal concern categories. Statistical analyses of the linguistic features could then be conducted. Findings revealed distinct linguistic features associated with BPD, based on the Reddit posts, which differentiated these users from the control group. Language patterns were also found to be associated with the occurrence of maladaptive thoughts and behaviours. Thus, this study demonstrates that there are indeed linguistic markers of BPD present on social media. It also implies that language could be predictive of the maladaptive thoughts and behaviours associated with BPD. These findings are important as they suggest the potential for clinical interventions based on the language of people with BPD, aiming to reduce the likelihood of maladaptive thoughts and behaviours occurring, for example through social media tracking or by engaging people with BPD in expressive writing therapy. Overall, this study provides a greater understanding of the disorder and how it manifests through language and behaviour.
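A minimal sketch of the Reddit collection step follows, using the Python Reddit API Wrapper (PRAW). The credentials, post limits, and field choices are placeholders; the manual screening and the random control group described above are omitted here.

```python
# Collect posts from r/BPD users and track them across their other
# subreddits; placeholder credentials and limits.
import praw

reddit = praw.Reddit(client_id="YOUR_ID",            # placeholder credentials
                     client_secret="YOUR_SECRET",
                     user_agent="bpd-language-study")

bpd_authors, posts = set(), []
for submission in reddit.subreddit("BPD").new(limit=1000):
    if submission.author is None:                    # skip deleted accounts
        continue
    bpd_authors.add(submission.author.name)
    posts.append({"user": submission.author.name, "subreddit": "BPD",
                  "text": f"{submission.title} {submission.selftext}"})

# Track each r/BPD poster across the rest of their posting history
for name in bpd_authors:
    for submission in reddit.redditor(name).submissions.new(limit=None):
        posts.append({"user": name,
                      "subreddit": submission.subreddit.display_name,
                      "text": f"{submission.title} {submission.selftext}"})
# Posts would then be aggregated by user and subreddit before LIWC analysis.
```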

Keywords: behaviour analysis, borderline personality disorder, natural language processing, social media data

Procedia PDF Downloads 349
810 Enhancing the Resilience of Combat System-Of-Systems Under Certainty and Uncertainty: Two-Phase Resilience Optimization Model and Deep Reinforcement Learning-Based Recovery Optimization Method

Authors: Xueming Xu, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge

Abstract:

A combat system-of-systems (CSoS) comprises various types of functional combat entities that interact to meet present and future task requirements. Enhancing the resilience of a CSoS holds significant military value in optimizing the operational planning process, improving military survivability, and ensuring the successful completion of operational tasks. Accordingly, this research proposes an integrated framework called CSoS resilience enhancement (CSoSRE) to enhance the resilience of a CSoS from a recovery perspective. Specifically, the research presents a two-phase resilience optimization model that defines a resilience optimization objective for the CSoS. This model considers not only the task baseline, recovery cost, and recovery time limit, but also the distinct characteristics of emergency recovery and comprehensive recovery. Moreover, the research extends the model from the deterministic case to the stochastic case to describe uncertainty in the recovery process. Based on this, a resilience-oriented recovery optimization method based on deep reinforcement learning (RRODRL) is proposed to determine a set of entities requiring restoration and their recovery sequence, thereby enhancing the resilience of the CSoS. The method improves the deep Q-learning algorithm by designing a discount factor that adapts to changes in the CSoS state at different phases, while simultaneously considering the structural and functional characteristics of the network within the CSoS. Finally, extensive experiments are conducted to test the feasibility, effectiveness, and superiority of the proposed framework. The results offer useful insights for guiding operational recovery activities and designing a more resilient CSoS.
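A minimal sketch of the phase-adaptive discount idea follows, with a tabular Q-table standing in for the deep Q-network; the phase names, discount values, and transitions are illustrative assumptions, not the paper's parameters.

```python
# Q-learning update with a discount factor that depends on the current
# recovery phase (tabular stand-in for the deep Q-network; all values
# are illustrative).
import numpy as np

N_STATES, N_ACTIONS, ALPHA = 20, 5, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))

def discount(phase: str) -> float:
    """Emergency recovery is short-horizon (more myopic); comprehensive
    recovery weighs long-term resilience more heavily."""
    return 0.5 if phase == "emergency" else 0.95

def q_update(s, a, reward, s_next, phase):
    # Standard Q-learning target, but with a phase-dependent discount
    target = reward + discount(phase) * Q[s_next].max()
    Q[s, a] += ALPHA * (target - Q[s, a])

# One illustrative transition in each recovery phase
q_update(s=0, a=2, reward=1.0, s_next=1, phase="emergency")
q_update(s=1, a=0, reward=0.3, s_next=2, phase="comprehensive")
```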

Keywords: combat system-of-systems, resilience optimization model, recovery optimization method, deep reinforcement learning, certainty and uncertainty

Procedia PDF Downloads 16