Search results for: big data in higher education
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 35190

28740 Cyberstalking as a Form of Online Sexual Harassment: Evidence from Female University Students in Tanzanian Institutions of Higher Learning

Authors: Angela Mathias Kavishe

Abstract:

Sexual harassment directed at women is reported in many societies, including Tanzania. The advent of ICT, especially in universities, seems to aggravate the situation by extending harassment into cyberspace in various forms, including cyberstalking. Evidence shows that online violence can be more dangerous than physical violence because of the perpetrator's ability to access a wealth of private information, attack many victims, mask their identity, sustain the threat over a long period, and spread it across time and space. The study aimed to measure the magnitude of cyber harassment in Tanzanian higher learning institutions and to assess institutional sensitivity to ICT-mediated gender-based violence. It was carried out in four higher learning institutions in Tanzania: the Mwalimu Nyerere Memorial Academy and the Institute of Finance Management in Dar es Salaam, SAUT, and the University of Dodoma, where a survey questionnaire was distributed to 400 students and 40 key informants were interviewed. It was found that in each institution, the majority of female students had experienced online harassment on social media, perpetrated by ex-partners, male students, and male university teaching staff. The perpetrators compelled female students to post nude pictures or to have sexual relations with them, or used the posted private photographs to force female students into online or offline sexual relations. These threats seem to emanate from socio-cultural beliefs about the subordinate position of women in society and the perception of women's bodies as sex objects. It is therefore concluded that cyberspace provides an alternative space for perpetrators to exercise violence towards women.

Keywords: cyberstalking, embodiment, gender-based violence, internet

Procedia PDF Downloads 22
28739 Hearing Threshold Levels among Steel Industry Workers in Samut Prakan Province, Thailand

Authors: Petcharat Kerdonfag, Surasak Taneepanichskul, Winai Wadwongtham

Abstract:

Industrial noise is a major environmental health and safety concern because exposure to it can cause serious, permanent hearing damage. Despite strict hearing-protection standards and extensive campaigns to raise public health awareness among industrial workers in Thailand, noise-induced hearing loss remains a massive obstacle to workers' health. The aims of the study were to specify the hearing threshold levels of steel industry workers assigned to higher-noise work zones and to examine the relationships between hearing loss and workers' age and length of employment in Samut Prakan province, Thailand. A cross-sectional study design was used. Ninety-three steel industry workers from two factories, selected by simple random sampling from the designated higher-noise zone (> 85 dBA), with more than one year of employment and willing to participate, were assessed by audiometric screening at the regional Samut Prakan hospital. Screening data were collected from October to December 2016 by an occupational medicine physician and a qualified occupational nurse; all participants were examined by the same examiners for validity. Audiometric testing was performed at least 14 hours after the last noise exposure in the workplace. Workers' age and length of employment were gathered with a purpose-developed occupational record form. Results: Workers' ages ranged from 23 to 59 years (mean = 41.67, SD = 9.69) and length of employment from 1 to 39 years (mean = 13.99, SD = 9.88). Fifty-three (57.0%) of the participants had been exposed to hazardous noise in the workplace for more than 10 years, twenty-three (24.7%) for at most 5 years, and seventeen (18.3%) for 5 to 10 years.
Using a cut-off of less than or equal to 25 dB for normal hearing thresholds, the mean hearing thresholds for participants at 4, 6, and 8 kHz were 31.34, 29.62, and 25.64 dB, respectively, for the right ear and 40.15, 32.20, and 25.48 dB, respectively, for the left ear. Hearing thresholds increased with workers' age at 4, 6, and 8 kHz in the right ear (p = .012, p = .026, p = .024, respectively) and at 4 kHz only in the left ear (p = .009). Conclusion: Participants' age in the hazardous-noise work zone was significantly associated with hearing loss at several frequencies, while length of employment was not. Hearing threshold levels of industrial workers should therefore be assessed regularly, and hearing protection should begin at the start of employment.

Keywords: hearing threshold levels, hazard of noise, hearing loss, audiometric testing

Procedia PDF Downloads 212
28738 The Effect of Data Integration to the Smart City

Authors: Richard Byrne, Emma Mulliner

Abstract:

Smart cities are a vision for the future that is increasingly becoming a reality. While a key concept of the smart city is the ability to capture, communicate, and process data that has long been produced through the day-to-day activities of the city, many of the assessment models in place neglect this fact and focus on 'smartness' concepts. Although it is true that technology often provides the opportunity to capture and communicate data in more effective ways, there are also human processes involved that are just as important. The growing importance of the use and ownership of data in society is plain to see, with companies such as Facebook and Google increasingly coming under the microscope; why, however, is the same scrutiny not applied to cities? The research area is therefore of great importance to the future of our cities here and now, and the findings will be of just as great importance to our children in the future. This research aims to understand the influence data is having on organisations operating throughout the smart cities sector and employs a mixed-method research approach in order to best answer the following question: Would a data-based evaluation model for smart cities be more appropriate than a smart-based model in assessing the development of the smart city? A comprehensive literature review concluded that there was a requirement for a data-driven assessment model for smart cities. This was followed by a documentary analysis to understand the root source of data integration in the smart city. A content analysis of city data platforms enquired into the alternative approaches employed by cities throughout the UK, drawing on best practice from New York for comparison.
Grounded in theory, the research findings to this point formulated a qualitative analysis framework comprising: the changing environment influenced by data, the value of data in the smart city, the data ecosystem of the smart city, and the organisational response to the data-orientated environment. The framework was applied to analyse primary data collected through interviews with both public and private organisations operating throughout the smart cities sector. The work to date represents the first stage of data collection and will be built upon by a quantitative research investigation into the feasibility of data network effects in the smart city. An analysis of the benefits of data interoperability supporting services to the smart city in the areas of health and transport will conclude the research, to achieve the aim of inductively forming a framework that can be applied to future smart city policy. To conclude, the research recognises the influence of technological perspectives in the development of smart cities to date and highlights the challenge of introducing theory applied with a planning dimension. The primary researcher has utilised their experience working in the public sector throughout the investigation to reflect upon the perceived gap in practice between where we are today and where we need to be tomorrow.

Keywords: data, planning, policy development, smart cities

Procedia PDF Downloads 300
28737 Investigation of Delivery of Triple Play Service in GE-PON Fiber to the Home Network

Authors: Anurag Sharma, Dinesh Kumar, Rahul Malhotra, Manoj Kumar

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the technologies that has emerged in recent years is the Passive Optical Network (PON). This paper demonstrates the simultaneous delivery of triple play services (data, voice, and video) over a GE-PON fiber-to-the-home network. A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.
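The trade-off reported above, between data rate and the number of users that can be accommodated, follows from the bit error rate's dependence on the receiver Q-factor. The sketch below is illustrative only: it assumes a thermal-noise-limited on-off-keying receiver where Q scales as 1/sqrt(bit rate) at fixed received power, with hypothetical numbers that are not taken from the paper's simulation.

```python
import math

def ber_ook(q_factor):
    """BER of on-off keying under Gaussian noise: 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2))

# Hypothetical link: Q falls as 1/sqrt(rate) for a fixed received power,
# so raising the data rate raises the BER (and shrinks the user budget).
q_at_1gbps = 7.0  # assumed Q-factor at 1 Gb/s
for rate_gbps in (1.0, 2.5, 10.0):
    q = q_at_1gbps / math.sqrt(rate_gbps)
    print(f"{rate_gbps:5.1f} Gb/s  Q = {q:4.2f}  BER = {ber_ook(q):.2e}")
```

A Q-factor of about 7 corresponds to a BER near 1e-12, the usual target for optical links; halving Q pushes the BER up by many orders of magnitude, which is why higher aggregate rates leave headroom for fewer users.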

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

Procedia PDF Downloads 713
28736 Recovery of Fried Soybean Oil Using Bentonite as an Adsorbent: Optimization, Isotherm and Kinetics Studies

Authors: Prakash Kumar Nayak, Avinash Kumar, Uma Dash, Kalpana Rayaguru

Abstract:

Soybean oil is one of the most widely consumed cooking oils worldwide. Deep-fat frying of foods at higher temperatures adds a unique flavour, golden brown colour, and crispy texture to foods, but it also brings about changes in the oil such as hydrolysis, oxidation, hydrogenation, and thermal alteration. The peroxide value (PV) is one of the most important factors affecting the quality of deep-fat fried oil. Using bentonite as an adsorbent, the PV can be reduced, thereby improving the quality of the soybean oil. In this study, operating parameters such as oil heating time (10, 15, 20, 25, and 30 h), contact time (5, 10, 15, 20, and 25 h), and adsorbent concentration (0.25, 0.5, 0.75, 1.0, and 1.25 g/100 ml of oil) were optimized by response surface methodology (RSM), with the percentage reduction of PV as the response. Adsorption data were analysed by fitting the Langmuir and Freundlich isotherm models. The results show that the Langmuir model gives a better fit than the Freundlich model. The adsorption process was also found to follow a pseudo-second-order kinetic model.
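To make the isotherm-fitting step concrete, here is a minimal sketch of a linearized Langmuir fit, C/q = C/q_m + 1/(q_m * K_L), on synthetic equilibrium data; the parameter values and concentrations are invented for illustration and are not the study's measurements.

```python
def langmuir(c, q_m, k_l):
    """Langmuir isotherm: adsorbed amount q as a function of concentration c."""
    return q_m * k_l * c / (1 + k_l * c)

# Synthetic equilibrium data from assumed "true" parameters (illustrative only)
q_m_true, k_l_true = 50.0, 0.8
cs = [0.5, 1, 2, 4, 8, 16]
qs = [langmuir(c, q_m_true, k_l_true) for c in cs]

# Ordinary least squares on (C, C/q): slope = 1/q_m, intercept = 1/(q_m * K_L)
n = len(cs)
ys = [c / q for c, q in zip(cs, qs)]
mx, my = sum(cs) / n, sum(ys) / n
slope = sum((c - mx) * (y - my) for c, y in zip(cs, ys)) / sum((c - mx) ** 2 for c in cs)
intercept = my - slope * mx
q_m_est = 1 / slope
k_l_est = 1 / (q_m_est * intercept)
```

Because the synthetic data lie exactly on the Langmuir curve, the linearized fit recovers the assumed parameters; on real PV-reduction data, the quality of this fit (e.g., R²) is what decides between the Langmuir and Freundlich models.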

Keywords: bentonite, Langmuir isotherm, peroxide value, RSM, soybean oil

Procedia PDF Downloads 359
28735 Deep-Learning Based Approach to Facial Emotion Recognition through Convolutional Neural Network

Authors: Nouha Khediri, Mohammed Ben Ammar, Monji Kherallah

Abstract:

Recently, facial emotion recognition (FER) has become increasingly important for understanding the state of the human mind. Accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER, benefiting from deep learning, especially CNNs and VGG16. First, the data are pre-processed with data cleaning and data rotation. Then, we augment the data and feed them to our FER model, which contains five convolutional layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. The paper also reviews prior work on facial emotion recognition based on deep learning. Experiments show that our model outperforms the other methods using the same FER2013 database, yielding a recognition rate of 92%. We also put forward some suggestions for future work.
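As a minimal sketch of the conv-pool-softmax pipeline such a model is built from, here is one untrained stage in pure NumPy on a FER2013-sized 48x48 grayscale input; only the input size and the 7 emotion classes are taken from the FER setting, everything else (kernel, weights) is random and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution (single channel, stride 1)."""
    h, w = x.shape[0] - k.shape[0] + 1, x.shape[1] - k.shape[1] + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One conv + ReLU + pool stage on a 48x48 input, then a 7-way softmax output
img = rng.standard_normal((48, 48))
kernel = rng.standard_normal((3, 3))
feat = max_pool(np.maximum(conv2d(img, kernel), 0.0))  # 46x46 -> 23x23
logits = rng.standard_normal(7)                        # 7 emotion classes
probs = softmax(logits)
```

Stacking five such conv/pool stages and training the kernels by backpropagation is what turns this skeleton into a model like the one described; the softmax output gives a probability over the seven FER2013 emotion labels.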

Keywords: CNN, deep-learning, facial emotion recognition, machine learning

Procedia PDF Downloads 75
28734 Enhancement of Genetic Diversity through Cross Breeding of Two Catfish (Heteropneustes fossilis and Clarias batrachus) in Bangladesh

Authors: M. F. Miah, A. Chakrabarty

Abstract:

Two popular and highly valued fish, stinging catfish (Heteropneustes fossilis) and Asian catfish (Clarias batrachus), were considered for observing genetic enhancement. Cross breeding was performed between wild and farmed fish using an inducing agent. Five RAPD markers were used to assess genetic diversity among the parents and offspring of these two catfish to evaluate genetic enhancement in the F1 generation. Genetic data such as DNA banding patterns, polymorphic loci, polymorphic information content (PIC), inter-individual pairwise similarity, Nei genetic similarity, genetic distance, phylogenetic relationships, allele frequency, genotype frequency, intra-locus gene diversity, and average gene diversity of the parents and offspring of the two fish were analyzed; in both cases, higher genetic diversity was found in the F1 generation than in the parents.
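Two of the diversity statistics listed, gene diversity and (in its simplified form) polymorphic information content, reduce to the same simple function of allele frequencies. A sketch with made-up frequencies follows; the simplified PIC form 1 - sum(p_i^2) is an assumption here, as the study may have used Botstein's fuller expression.

```python
def gene_diversity(freqs):
    """Nei's gene diversity at one locus: h = 1 - sum(p_i^2).
    The simplified PIC takes the same form."""
    assert abs(sum(freqs) - 1.0) < 1e-9
    return 1.0 - sum(p * p for p in freqs)

# Hypothetical biallelic RAPD loci (band present / band absent frequencies)
loci = [[0.7, 0.3], [0.5, 0.5], [0.9, 0.1]]
avg_diversity = sum(gene_diversity(f) for f in loci) / len(loci)
```

Averaging the per-locus values over all scored RAPD loci gives the average gene diversity compared between parents and F1 offspring in the abstract.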

Keywords: Heteropneustes fossilis, Clarias batrachus, cross breeding, genetic enhancement

Procedia PDF Downloads 234
28733 Data and Biological Sharing Platforms in Community Health Programs: Partnership with Rural Clinical School, University of New South Wales and Public Health Foundation of India

Authors: Vivian Isaac, A. T. Joteeshwaran, Craig McLachlan

Abstract:

The University of New South Wales (UNSW) Rural Clinical School has a strategic collaborative focus on chronic disease and public health. Our objectives are to understand rural environmental and biological interactions in vulnerable community populations. The UNSW Rural Clinical School translational model is a spoke-and-hub network that connects rural data and biological specimens with city-based collaborative public health research networks. Similar spoke-and-hub models are prevalent across research centres in India. An Australia-India Council grant was awarded to establish sustainable public health and community research collaborations. As part of the collaborative network, we are developing strategies around data and biological sharing platforms between the Indian Institute of Public Health, Public Health Foundation of India (PHFI), Hyderabad, and the Rural Clinical School UNSW. The key objective is to understand how research collaborations are conducted in India and how data can be shared and tracked with external collaborators such as ourselves. A framework to improve data sharing for research collaborations, including DNA data, was proposed as a project outcome. The complexities of sharing biological data have been investigated via a visit to India. A flagship sustainable project between the Rural Clinical School UNSW and PHFI would illustrate a model of data-sharing platforms.

Keywords: data sharing, collaboration, public health research, chronic disease

Procedia PDF Downloads 435
28732 The Association between Food Security Status and Depression in Two Iranian Ethnic Groups Living in Northwest of Iran

Authors: A. Rezazadeh, N. Omidvar, H. Eini-Zinab

Abstract:

Food insecurity (FI) may result in poor physical and mental health outcomes. Minority ethnic groups may experience higher levels of FI, and this situation may be related to a higher prevalence of depression. The aim of this study was to determine the association of depression with food security status in the majority (Azeri) and minority (Kurdish) ethnic groups living in Urmia, West Azerbaijan, northwest of Iran. In this cross-sectional study, 723 participants (427 women and 296 men) aged 20-64 years, from two ethnic groups (445 Azeri and 278 Kurdish), were selected through multi-stage cluster systematic sampling. Depression was assessed with the Beck short-form questionnaire (validated in Iranians) through interviews. Household food insecurity status (HFIS) was measured using the adapted HFI access scale through face-to-face interviews at homes. Multinomial logistic regression was used to estimate odds ratios (OR) of depression across HFIS. A higher percentage of Kurds had moderate or severe depression in comparison with the Azeri group (73 [17.3%] vs. 86 [27.9%]). There were no significant differences between the two ethnicities in mild depression. Moderate-to-severe FI was also more prevalent among Kurds (28.5%) than in the Azeri group (17.3%) [P < 0.01]. Kurdish participants living in food-secure or mildly food-insecure households had a lower chance of severe depression symptoms than those with severe FI (OR = 0.097; 95% CI: 0.02-0.47). However, there was no significant association between depression and HFI in the Azeri group. The findings revealed that the severity of HFI was related to the severity of depression in the minority ethnic group studied. In the Azeri majority group, however, confounders not examined in the present study may influence the relation between depression and FI.
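As a minimal illustration of the odds-ratio arithmetic behind estimates like OR = 0.097 (95% CI: 0.02-0.47), here is a 2x2-table Wald calculation with invented counts; the study itself used multinomial logistic regression with covariates, which this simple sketch does not reproduce.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: depression (cases) by food-insecurity exposure
or_, lo, hi = odds_ratio_ci(10, 90, 20, 80)
```

An OR below 1 with a confidence interval excluding 1, as in the Kurdish subgroup result above, indicates lower odds of the outcome in the exposed (here, food-secure) group.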

Keywords: depression, ethnicity, food security status, Iran

Procedia PDF Downloads 192
28731 Discrimination of Artificial Intelligence

Authors: Iman Abu-Rub

Abstract:

This research paper examines whether artificial intelligence is, in fact, racist. Studies from around the world, covering different communities, were analyzed to further understand AI's true implications for those communities. The Black, Asian, and Muslim communities were all analyzed and discussed in the paper to determine whether AI is biased towards these specific communities. It was found that the biggest problem AI faces is bias in data collection: most of the data inserted and coded into AI systems are of white males, which significantly disadvantages other communities in terms of reliable cultural, political, or medical research. Nonetheless, various research efforts help increase awareness of this issue and could resolve it if applied correctly. Governments and big corporations are able to implement strategies in their AI systems to avoid racist results, which could otherwise foster cultural hatred and produce unreliable data in, for example, medicine. Overall, artificial intelligence is not racist per se; rather, the data used and the current racist culture online lead AI to produce racist outputs.

Keywords: social media, artificial intelligence, racism, discrimination

Procedia PDF Downloads 104
28730 An Exploratory Case Study of Pre-Service Teachers' Learning to Teach Mathematics to Culturally Diverse Students through a Community-Based After-School Field Experience

Authors: Eugenia Vomvoridi-Ivanovic

Abstract:

It is broadly assumed that participation in field experiences will help pre-service teachers (PSTs) bridge theory to practice. However, this is often not the case since PSTs who are placed in classrooms with large numbers of students from diverse linguistic, cultural, racial, and ethnic backgrounds (culturally diverse students (CDS)) usually observe ineffective mathematics teaching practices that are in contrast to those discussed in their teacher preparation program. Over the past decades, the educational research community has paid increasing attention to investigating out-of-school learning contexts and how participation in such contexts can contribute to the achievement of underrepresented groups in Science, Technology, Engineering, and mathematics (STEM) education and their expanded participation in STEM fields. In addition, several research studies have shown that students display different kinds of mathematical behaviors and discourse practices in out-of-school contexts than they do in the typical mathematics classroom since they draw from a variety of linguistic and cultural resources to negotiate meanings and participate in joint problem solving. However, almost no attention has been given to exploring these contexts as field experiences for pre-service mathematics teachers. The purpose of this study was to explore how participation in a community based after-school field experience promotes understanding of the content pedagogy concepts introduced in elementary mathematics methods courses, particularly as they apply to teaching mathematics to CDS. This study draws upon a situated, socio-cultural theory of teacher learning that centers on the concept of learning as situated social practice, which includes discourse, social interaction, and participation structures. 
Consistent with exploratory case study methodology, qualitative methods were employed to investigate how a cohort of twelve participating pre-service teachers' approaches to pedagogy and their conversations around teaching and learning mathematics to CDS evolved through their participation in the after-school field experience, and how they connected the content discussed in their mathematics methods course with their interactions with the CDS in the after-school program. Data were collected over a period of one academic year from the following sources: (a) audio recordings of the PSTs' interactions with the students during the after-school sessions, (b) PSTs' after-school field notes, (c) audio recordings of weekly methods course meetings, and (d) other document data (e.g., PST- and student-generated artifacts, PSTs' written course assignments). The findings of this study reveal that the PSTs benefitted greatly from their participation in the after-school field experience. Specifically, after-school participation promoted a deeper understanding of the content pedagogy concepts introduced in the mathematics methods course and a greater appreciation for how students learn mathematics with understanding. Further, even though many of the PSTs' assumptions about the mathematical abilities of CDS were challenged and PSTs began to view CDSs' cultural and linguistic backgrounds as resources (rather than obstacles) for learning, some PSTs still held negative stereotypes about CDS and about teaching and learning mathematics to CDS in particular. Insights gained through this study contribute to a better understanding of how informal mathematics learning contexts may provide a valuable setting for pre-service teachers' learning to teach mathematics to CDS.

Keywords: after-school mathematics program, pre-service mathematical education of teachers, qualitative methods, situated socio-cultural theory, teaching culturally diverse students

Procedia PDF Downloads 120
28729 A Neural Network Modelling Approach for Predicting Permeability from Well Log Data

Authors: Chico Horacio Jose Sambo

Abstract:

Recently, neural networks have gained popularity in solving complex nonlinear problems. Permeability is a fundamental reservoir characteristic that is anisotropically distributed and behaves in a non-linear manner. For this reason, permeability prediction from well log data is well suited to neural networks and other computer-based techniques. The main goal of this paper is to predict reservoir permeability from well log data using a neural network approach. A multi-layered perceptron trained by the back-propagation algorithm was used to build the predictive model. The performance of the model was measured by the correlation coefficient, evaluated on the training, testing, validation, and complete data sets. The results show that the neural network was capable of reproducing permeability accurately in all cases; the calculated correlation coefficients for training, testing, and validation were 0.96273, 0.89991, and 0.87858, respectively. The generalization of the results to other fields can be made after examining new data, and a regional study might make it possible to characterize reservoir properties with cheap and quickly constructed models.
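The workflow above, a multi-layer perceptron trained by backpropagation and scored by the correlation coefficient, can be sketched end-to-end in NumPy. The "well log" inputs below are synthetic stand-ins (three features driving a smooth nonlinear target), not field data, and the tiny one-hidden-layer network is only a sketch of the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "well log -> permeability" toy data (illustrative, not field data)
X = rng.uniform(-1, 1, size=(200, 3))          # e.g. porosity, GR, resistivity
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]

# One-hidden-layer perceptron trained by plain batch backpropagation
W1 = rng.standard_normal((3, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal(16) * 0.5
b2 = 0.0
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)              # MSE before training
for _ in range(500):
    h, pred = forward(X)
    err = pred - y                             # dLoss/dpred (factor 2 in lr)
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)      # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, pred1 = forward(X)
loss1 = np.mean((pred1 - y) ** 2)              # MSE after training
r = np.corrcoef(pred1, y)[0, 1]                # correlation coefficient
```

The final correlation coefficient `r` plays the role of the evaluation metric reported in the abstract; in practice one would compute it separately on held-out testing and validation splits.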

Keywords: neural network, permeability, multilayer perceptron, well log

Procedia PDF Downloads 381
28728 The Mental Workload of Intensive Care Unit Nurses in Performing Human-Machine Tasks: A Cross-Sectional Survey

Authors: Yan Yan, Erhong Sun, Lin Peng, Xuchun Ye

Abstract:

Aims: The present study aimed to explore Intensive Care Unit (ICU) nurses' mental workload (MWL), and the factors associated with it, in performing human-machine tasks. Background: A wide range of emerging technologies has penetrated the field of health care, and ICU nurses are facing a dramatic increase in nursing human-machine tasks. However, there is still a paucity of literature reporting on the general MWL of ICU nurses performing human-machine tasks and the associated influencing factors. Methods: A cross-sectional survey was employed. The data were collected from January to February 2021 from 9 tertiary hospitals in 6 provinces (Shanghai, Gansu, Guangdong, Liaoning, Shandong, and Hubei). Two-stage sampling was used to recruit eligible ICU nurses (n = 427). The data were collected with an electronic questionnaire comprising sociodemographic characteristics and measures of MWL, self-efficacy, system usability, and task difficulty. Univariate analysis, two-way analysis of variance (ANOVA), and a linear mixed model were used for data analysis. Results: Overall, the mental workload of ICU nurses in performing human-machine tasks was medium (score 52.04 on a 0-100 scale). Among the typical nursing human-machine tasks selected, the MWL of ICU nurses in completing first aid and life support tasks ('Using a defibrillator to defibrillate' and 'Use of ventilator') was significantly higher than for the others (p < .001). ICU nurses' MWL in performing human-machine tasks was also associated with age (p = .001), professional title (p = .002), years of working in the ICU (p < .001), willingness to study emerging technology actively (p = .006), task difficulty (p < .001), and system usability (p < .001). Conclusion: The MWL of ICU nurses is at a moderate level in the context of a rapid increase in nursing human-machine tasks.
However, there are significant differences in MWL when performing different types of human-machine tasks, and MWL can be influenced by a combination of factors. Nursing managers need to develop intervention strategies in multiple ways. Implications for practice: Multidimensional approaches are required to perform human-machine tasks better, including enhancing nurses' willingness to learn emerging technologies actively, developing training strategies that vary with tasks, and identifying obstacles in the process of human-machine system interaction.
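As a small sketch of the one-way case of the ANOVA machinery used to compare MWL across task types (illustrative scores, not the survey's data; the study itself used two-way ANOVA and a linear mixed model, which this does not reproduce):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over lists of scores."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical MWL scores for two task types
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4]])
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom yields the small p-values (p < .001) reported for the defibrillation and ventilator tasks.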

Keywords: mental workload, nurse, ICU, human-machine, tasks, cross-sectional study, linear mixed model, China

Procedia PDF Downloads 55
28727 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data: what products were often purchased together? Its applications include basket data analysis, cross-marketing, catalog design, sales campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data grow, the time and resources required to mine them increase at an exponential rate. In this investigation, a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER can produce a speed-up of 3.1 times for frequent itemset mining compared to the original algorithm while maintaining an accuracy of 71%.
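To make the downstream task concrete, here is a minimal level-wise (Apriori-style) frequent itemset miner over a toy basket data set; the transactions are invented, and this sketches the mining step that FASTER pre-processes, not FASTER itself.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise (Apriori-style) frequent itemset mining.
    Returns {itemset: support} for itemsets meeting min_support."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    freq = {}
    size = 1
    candidates = [frozenset([i]) for i in items]
    while candidates:
        level = {}
        for c in candidates:
            support = sum(c <= t for t in transactions) / n
            if support >= min_support:
                level[c] = support
        freq.update(level)
        size += 1
        # Next candidates: unions of frequent itemsets one size smaller
        prev = list(level)
        candidates = {a | b for a, b in combinations(prev, 2) if len(a | b) == size}
    return freq

baskets = [frozenset(t) for t in
           [{"milk", "bread"}, {"milk", "bread", "eggs"},
            {"bread", "eggs"}, {"milk", "eggs"}]]
fs = frequent_itemsets(baskets, min_support=0.5)
```

On these four baskets, all three single items and all three pairs reach 50% support, while the triple does not; the exponential blow-up the abstract describes comes from the candidate set growing combinatorially with the number of items, which is exactly what record and attribute reduction mitigates.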

Keywords: rough-sets, classification, feature selection, entropy, outliers, frequent itemset mining

Procedia PDF Downloads 420
28726 Persian Pistachio Nut (Pistacia vera L.) Dehydration in Natural and Industrial Conditions

Authors: Hamid Tavakolipour, Mohsen Mokhtarian, Ahmad Kalbasi Ashtari

Abstract:

In this study, the effect of various drying methods (sun drying, shade drying, and industrial drying) on final moisture content, degree of shell splitting, shrinkage, and colour change was studied. Sun drying resulted in a higher degree of shell splitting of the pistachio nuts than the other drying methods. The ANOVA results showed that the different drying methods had no significant effect on the colour change of the dried pistachio nuts. The results illustrated that pistachio nuts dried by industrial drying had the lowest moisture content. After the drying process, the experimental drying data were fitted with five well-known drying models, namely Newton, Page, Silva et al., Peleg, and Henderson and Pabis. The results indicated that the Peleg and Page models gave the best fits to the pistachio nut moisture ratio for the industrial drying and the open-sun (or shade) drying methods, respectively.
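As a sketch of the model-fitting step, here is a linearized fit of the Page thin-layer model, MR = exp(-k * t^n), via ln(-ln MR) = ln k + n ln t, on synthetic drying data; the drying constants and times are invented for illustration, not the study's measurements.

```python
import math

def page_mr(t, k, n):
    """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
    return math.exp(-k * t ** n)

# Synthetic drying curve from assumed constants (illustrative only)
k_true, n_true = 0.05, 1.3
ts = [1, 2, 4, 8, 16, 24]
mrs = [page_mr(t, k_true, n_true) for t in ts]

# Double-log linearization: ln(-ln MR) = ln k + n * ln t, then OLS
xs = [math.log(t) for t in ts]
ys = [math.log(-math.log(mr)) for mr in mrs]
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
n_est = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
k_est = math.exp(my - n_est * mx)
```

With exact synthetic data the regression recovers the assumed constants; with real drying curves, goodness-of-fit statistics over such fits are what ranked the Page and Peleg models best here.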

Keywords: industrial drying, pistachio, quality properties, traditional drying

Procedia PDF Downloads 318
28725 Application of Regularized Spatio-Temporal Models to the Analysis of Remote Sensing Data

Authors: Salihah Alghamdi, Surajit Ray

Abstract:

Space-time data can be observed over irregularly shaped manifolds, which might have complex boundaries or interior gaps. Most existing methods do not consider the shape of the data, and as a result, it is difficult to model irregularly shaped data while accommodating the complex domain. We used a method that can deal with space-time data distributed over non-planar regions. The method is based on partial differential equations and finite element analysis. The model can be estimated using a penalized least squares approach with a regularization term that controls over-fitting. The model is regularized using two roughness penalties, which consider the spatial and temporal regularities separately. The integrated square of the second derivative of the basis function is used as the temporal penalty, while the spatial penalty consists of the integrated square of the Laplace operator, integrated exclusively over the domain of interest, which is determined using the finite element technique. In this paper, we applied a spatio-temporal regression model with partial differential equation regularization (ST-PDE) to analyze remote sensing data measuring the greenness of vegetation, as captured by the enhanced vegetation index (EVI). The EVI data consist of measurements taking values between -1 and 1, reflecting the level of greenness of a region over a period of time. We applied the ST-PDE approach to an irregularly shaped region of the EVI data. The approach efficiently accommodates irregularly shaped regions, taking the complex boundaries into account rather than smoothing across them. Furthermore, the approach succeeds in capturing the temporal variation in the data.
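A plausible form of the penalized least squares objective described above, written as a sketch from the description (the symbols and the exact arrangement of the integrals are assumptions, not taken from the paper):

```latex
\hat{f} = \arg\min_{f}\;
\sum_{i=1}^{n}\bigl(z_i - f(\mathbf{p}_i, t_i)\bigr)^2
\;+\; \lambda_S \int_{T}\!\int_{\Omega} \bigl(\Delta f(\mathbf{p}, t)\bigr)^2 \, d\mathbf{p}\, dt
\;+\; \lambda_T \int_{T}\!\int_{\Omega} \Bigl(\tfrac{\partial^2 f}{\partial t^2}(\mathbf{p}, t)\Bigr)^2 \, d\mathbf{p}\, dt
```

Here the $z_i$ are the EVI observations at locations $\mathbf{p}_i$ and times $t_i$, $\Omega$ is the irregularly shaped spatial domain discretized by finite elements, $\Delta$ is the Laplace operator giving the spatial roughness penalty, and $\lambda_S$, $\lambda_T$ control spatial and temporal smoothness separately; because the spatial integral is taken only over $\Omega$, roughness is never penalized across the boundary.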

Keywords: irregularly shaped domain, partial differential equations, finite element analysis, complex boundary

Procedia PDF Downloads 129
28724 Women Entrepreneurial Resiliency Amidst COVID-19

Authors: Divya Juneja, Sukhjeet Kaur Matharu

Abstract:

Purpose: The paper aims to identify the factors that challenged women entrepreneurs in India in operating their enterprises amid the COVID-19 pandemic. Methodology: The sample for the study comprised 396 women entrepreneurs from different regions of India. A purposive sampling technique was adopted, and data were collected through a self-administered questionnaire. Analysis was performed using the SPSS package for quantitative data analysis. Findings: The results of the study state that entrepreneurial characteristics, resourcefulness, networking, adaptability, and continuity have a positive influence on the resiliency of women entrepreneurs when faced with a crisis situation. Practical implications: The findings have important implications for women entrepreneurs, organizations, governments, and other institutions extending support to entrepreneurs.

Keywords: women entrepreneurs, analysis, data analysis, positive influence, resiliency

Procedia PDF Downloads 100
28723 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
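As a minimal illustration of how PLS constructs latent components from correlated predictors, the following sketch implements a NIPALS-style PLS1 on simulated data with far more predictors than observations. It is an illustrative sketch under invented data, not the study's implementation, algorithms, or penalty functions.

```python
import numpy as np

# NIPALS-style PLS1 for a single response: extract score vectors (latent
# components) one at a time by deflating X, then recover regression
# coefficients in the original predictor space.
def pls1(X, y, n_components):
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)              # predictor weight vector
        t = Xc @ w                          # score (latent component)
        p = Xc.T @ t / (t @ t)              # loading
        Xc = Xc - np.outer(t, p)            # deflate X
        q.append(yc @ t / (t @ t))
        yc = yc - q[-1] * t
        W.append(w); P.append(p)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.inv(P.T @ W) @ q   # coefficients beta

rng = np.random.default_rng(42)
n, p = 25, 100                              # predictors outnumber observations
latent = rng.normal(size=(n, 2))            # two underlying factors
X = latent @ rng.normal(size=(2, p)) + 0.05 * rng.normal(size=(n, p))
y = latent[:, 0]
beta = pls1(X, y, n_components=2)
yhat = (X - X.mean(0)) @ beta + y.mean()
```

Ordinary least squares is ill-posed here (a 100-predictor OLS fit on 25 samples), whereas the two PLS components recover the signal; the sparse Cauchy/lasso penalization of the weights described in the abstract is a further refinement not shown in this sketch.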

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 35
28722 The Use of Voice in the Online Public Access Catalog as a Faster Searching Device

Authors: Maisyatus Suadaa Irfana, Nove Eka Variant Anna, Dyah Puspitasari Sri Rahayu

Abstract:

Technological developments provide convenience to everyone. Nowadays, human communication with the computer is mostly done via text. With the development of technology, human-computer communication can also be conducted by voice, much like communication between human beings. This provides an easy facility for many people, especially those who have special needs. Voice search technology is applied to the search of book collections in the OPAC (Online Public Access Catalog), so library visitors will find it faster and easier to locate the books they need. Integration with Google is needed to convert the voice into text. To optimize search time and results, the server downloads all the book data available in the server database. Then, the data are converted into JSON format. In addition, several algorithms are incorporated, including decomposition (parsing) of the JSON format into arrays, index creation, and analysis of the results. This aims to make searching much faster than the usual OPAC search, which queries the database directly for every search request. A data update menu is provided to enable users to perform their own data updates and get the latest information.
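A hypothetical sketch of the parse-and-index step described above: book records in JSON are decomposed and an inverted index maps title words to record ids, so a voice-transcribed query can be matched without hitting the database on every request. The record format, field names, and sample data are invented for illustration.

```python
import json
from collections import defaultdict

# Parse JSON book records (as downloaded from the server database)
records = json.loads('''[
  {"id": 1, "title": "Introduction to Information Retrieval"},
  {"id": 2, "title": "Digital Library Systems"}
]''')

# Build an inverted index: word -> set of record ids containing it
index = defaultdict(set)
for rec in records:
    for word in rec["title"].lower().split():
        index[word].add(rec["id"])

def search(query):
    """Return ids of records whose titles contain every query word."""
    words = query.lower().split()
    hits = [index[w] for w in words if w in index]
    return set.intersection(*hits) if hits else set()

results = search("library systems")   # matches record 2 only
```

In the described system the `query` string would come from the speech-to-text conversion step rather than typed input.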

Keywords: OPAC, voice, searching, faster

Procedia PDF Downloads 327
28721 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Since this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have had widespread use in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications. With LiDAR technology, a 3D point cloud is created by obtaining numerous point data. More recently, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, the random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets using a random algorithm, representing 75, 50, 25 and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method.
The results show that the random data reduction method can reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM.
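The random reduction step described above can be sketched as follows. This is an illustrative sketch: the point cloud here is synthetic, whereas real data would come from the UAV photogrammetric survey, and the subsequent Kriging interpolation is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-in for an image-based point cloud: rows of (x, y, z)
cloud = rng.uniform(0, 100, size=(10_000, 3))

def reduce_cloud(points, fraction, rng):
    """Randomly subsample a point cloud to the given fraction of its points."""
    k = int(len(points) * fraction)
    idx = rng.choice(len(points), size=k, replace=False)
    return points[idx]

# The 75, 50, 25 and 5% density levels compared in the study
subsets = {pct: reduce_cloud(cloud, pct / 100, rng) for pct in (75, 50, 25, 5)}
```

Each subset would then be interpolated to a DTM and compared against the DTM from the full cloud to assess the quality loss at each density level.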

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 143
28720 Teaching Accounting through Critical Accounting Research: The Origin and Its Relevance to the South African Curriculum

Authors: Rosy Makeresemese Qhosola

Abstract:

South Africa has maintained the effort to uphold the guiding principles of its constitution. The constitution upholds principles such as equity, social justice, peace, freedom and hope, to mention but a few. Such principles form the basis for any legislation and policies that guide all fields/departments of government. Education is one of those departments or fields and is expected to abide by the principles outlined in its policies. Therefore, as expected, education policies and legislation outline their intention to ensure the development of students' critical thinking and creative capacities by creating learning contexts and opportunities that accommodate effective, learner-centred teaching and learning strategies compatible with the prescripts of the country's democratic constitution. The paper aims at exploring and analyzing the progress of conventional accounting in terms of its adherence to the effective use of principles of good teaching, as per policy expectations in South Africa. The progress is traced by comparing conventional accounting to Critical Accounting Research (CAR), where the history of accounting as intended in the curriculum of South Africa and in CAR is highlighted. Critical Accounting Research is used as a lens and mode of teaching in this paper, since it can create a space for optimal learning of accounting, marked by the use of more learner-centred methods of teaching. The Curriculum of South Africa also emphasises learner-centred methods of teaching that encourage an active and critical approach to learning, rather than rote and uncritical learning of given truths. The study seeks to maintain that conventional accounting is in contrast with principles of good teaching as per South African policy expectations.
The paper further maintains that moving beyond this, and adhering to the effective use of good teaching, is possible when CAR forms the basis of teaching. Data are generated through Participatory Action Research, in which meetings, dialogues and discussions are conducted with focus groups consisting of lecturers, students, subject heads, coordinators, NGOs and departmental officials. The results are analysed through Critical Discourse Analysis, since it allows for the use of text by participants. The study concludes that any teacher who aspires to achieve in the teaching and learning of accounting should first meet the minimum requirements stated at NQF level 4, which form the basic principles of good teaching and are in line with Critical Accounting Research.

Keywords: critical accounting research, critical discourse analysis, participatory action research, principles of good teaching

Procedia PDF Downloads 290
28719 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data

Authors: Chen Chou, Feng-Tyan Lin

Abstract:

Big Data has attracted a lot of attention in many fields for analyzing research issues based on large volumes of data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, part of the government's open data, are numerous, complete and quickly updated. One may recall that living areas have been delimited by location, population, area and subjective consciousness. However, these factors cannot appropriately reflect people's actual movement paths in daily life. In this study, the concept of "Living Area" is replaced by "Influence Range" to show the dynamics and variation with the time and purpose of activities. The study uses data mining with Python and Excel, and visualizes the number of trips with GIS, to explore the influence range of Tainan city and the purposes of trips, and to discuss how living areas are currently delimited. It creates a dialogue between the concepts of "Central Place Theory" and "Living Area", presents a new point of view, and integrates the application of big data, urban planning and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
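The trip-aggregation step implied above can be sketched as follows: ETC records with start and end points are aggregated into origin-destination counts, which could then be mapped in GIS. This is a hypothetical sketch; the field names, station names, and sample rows are invented, not the actual ETC schema.

```python
import csv, io
from collections import Counter

# Illustrative ETC-like records: one row per freeway trip
raw = """start,end
Tainan,Kaohsiung
Chiayi,Tainan
Tainan,Kaohsiung
Taichung,Tainan
"""

# Count trips per (origin, destination) pair
trips = Counter((r["start"], r["end"]) for r in csv.DictReader(io.StringIO(raw)))

# Trips ending in Tainan approximate the city's inbound pull
inbound = sum(n for (s, e), n in trips.items() if e == "Tainan")
```

Plotting such counts by origin on a map is one way the influence range of a city becomes visible, varying with the time and purpose of the trips.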

Keywords: Big Data, ITS, influence range, living area, central place theory, visualization

Procedia PDF Downloads 265
28718 Effect of Submerged Water Jet's Cross Section Shapes on Mixing Length

Authors: Mohsen Solimani Babarsad, Mohammad Rastgoo, Payam Taheri

Abstract:

One important application of hydraulic jets is the discharge of industrial, agricultural and urban wastewater into rivers or other ambient water, to reduce the negative effects of polluted water. Owing to turbulent conditions, submerged jets can mix a large amount of dense polluted water with the ambient flow. This study investigates the distribution and length of the mixing zone in a hydraulic jet's flow field as the cross-section shape of the nozzle changes. Toward this end, three cross-section shapes (square, circular and rectangular) and three saline density currents with different concentrations are considered in a flume 600 cm long, 100 cm high and 150 cm wide. Various discharges were used to evaluate the mixing length for a wide range of densimetric Froude numbers, Frd (defined at the nozzle), from 100 to 550. Consequently, the circular nozzle, in comparison with the other sections, has a densimetric Froude number 11% higher than the square nozzle and 26% higher than the rectangular nozzle.
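The densimetric Froude number at the nozzle is commonly defined as Frd = U / √(g′D), with reduced gravity g′ = g·(ρ_jet − ρ_ambient)/ρ_ambient. The abstract does not state its exact definition, so the following sketch uses this standard form with illustrative sample values, not the experiment's measurements.

```python
import math

def densimetric_froude(velocity, length_scale, rho_jet, rho_ambient, g=9.81):
    """Frd = U / sqrt(g' * D) with reduced gravity g' = g * drho / rho_ambient."""
    g_prime = g * (rho_jet - rho_ambient) / rho_ambient
    return velocity / math.sqrt(g_prime * length_scale)

# Illustrative saline jet: 1% density excess through a 1 cm nozzle at 2 m/s
frd = densimetric_froude(velocity=2.0, length_scale=0.01,
                         rho_jet=1010.0, rho_ambient=1000.0)
```

Higher Frd means momentum dominates buoyancy at the nozzle, which is why it is the governing parameter when comparing mixing lengths across nozzle shapes.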

Keywords: hydraulic jet, mixing zone, densimetric Froude number, nozzle

Procedia PDF Downloads 353
28717 Business Survival During Economic Crises: A Comparison Between Family and Non-family Firms

Authors: A. Hayrapetyan, A. Simon, P. Marques, G. Renart

Abstract:

Business survival is a question of the greatest interest for any economy. Firm characteristics that can explain or predict performance and, ultimately, business survival are of the greatest significance, as the sustainable longevity of any business can mean health for the future of the country. Family Firms (FFs) are one of the most ubiquitous forms of business worldwide, as more than half of European firms (60%) are considered family firms. Therefore, the inherent characteristics of FFs are one of the possible explanatory variables for firm survival, because FFs have strategic goals that differentiate them from other types of businesses. Although there is literature on the performance of FFs across generations, there are fewer studies on the factors that impact the survival of family and non-family firms, as there is a lack of data on failed firms. To address this gap, this paper explores the differential survival of family versus non-family firms with a representative sample of companies from the region of Catalonia (northeast Spain) that were classified ad hoc as family or non-family, and as failed or surviving, since no census data for family firms or for failed firms are available in Spain. Using the Cox regression model on a representative sample of 629 family and non-family firms, this study investigates to what extent financial ratios, such as the liquidity rate and the solvency rate, impact business survival, taking into consideration the socioemotional side of family firms, and reveals the differences between family and non-family firms. The findings show that the liquidity rate is significant for non-family firm survival but not for family firms. On the other hand, FFs benefit from a higher solvency rate. Ultimately, this paper finds that FFs increase their chances of survival when they are small, as growth in size starts to negatively impact the socioemotional objectives of the firm.
This study demonstrates the existence of significant differences between family and non-family firms' survival during economic crises, suggesting that the prioritization of emotional wealth creates distinct conditions for the two types of firms.
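The core of the Cox model used above is its partial likelihood, which compares each failing firm's covariates against the firms still at risk at that time. The following is a minimal sketch on toy data (synthetic values, not the study's 629-firm sample), estimating a single coefficient by grid search rather than the Newton-type optimizers of statistical packages.

```python
import numpy as np

# Toy survival data: time observed, event flag (1 = failed, 0 = censored),
# and one covariate per firm (e.g. a standardized liquidity rate)
times  = np.array([2.0, 3.0, 5.0, 8.0])
events = np.array([1, 1, 0, 1])
x      = np.array([0.2, 1.5, 0.4, 0.9])

def neg_log_partial_likelihood(beta):
    """Cox negative log partial likelihood for a single coefficient beta."""
    ll = 0.0
    for i in np.where(events == 1)[0]:
        at_risk = times >= times[i]          # firms still alive at this failure
        ll += beta * x[i] - np.log(np.sum(np.exp(beta * x[at_risk])))
    return -ll

# Crude 1-D grid search for the maximum partial likelihood estimate
grid = np.linspace(-3, 3, 601)
beta_hat = grid[np.argmin([neg_log_partial_likelihood(b) for b in grid])]
```

A positive estimated coefficient would mean the covariate raises the hazard of failure; a negative one, that it is protective, which is how the liquidity and solvency effects in the study are read.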

Keywords: Cox regression, economic crises, family firm, non-family firm, survival

Procedia PDF Downloads 54
28716 Hydrogeochemical Characteristics of the Different Aquiferous Layers in Oban Basement Complex Area (SE Nigeria)

Authors: Azubuike Ekwere

Abstract:

The shallow and deep aquiferous horizons of the fractured and weathered crystalline basement of the Oban Massif of south-eastern Nigeria were studied during the dry and wet seasons. The aim was to ascertain the hydrochemistry relative to seasonal and spatial variations across the study area. Results indicate that concentrations of major cations and anions exhibit the orders of abundance Ca>Na>Mg>K and HCO3>SO4>Cl, respectively, with minor variations across sampling seasons. The major elements Ca, Mg, Na and K were higher for the shallow aquifers than the deep aquifers across seasons. The major anions Cl, SO4, HCO3, and NO3 were higher for the deep aquifers compared to the shallow ones. Two water types were identified for both aquifer types: Ca-Mg-HCO3 and Ca-Na-Cl-SO4. Most of the parameters considered were within the international limits for drinking, domestic and irrigation purposes. Assessment by use of the sodium adsorption ratio (SAR), percent sodium (%Na) and the Wilcox diagram reveals that the waters are suitable for irrigation purposes.
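The sodium adsorption ratio used in the irrigation assessment above is conventionally computed as SAR = Na / √((Ca + Mg)/2), with all concentrations in meq/L. The sample values in this sketch are illustrative, not measurements from the study.

```python
import math

def sar(na, ca, mg):
    """Sodium adsorption ratio; na, ca, mg in meq/L."""
    return na / math.sqrt((ca + mg) / 2)

value = sar(na=3.0, ca=4.0, mg=2.0)
# Values below about 10 are generally rated excellent for irrigation water
```

Percent sodium, the companion index, is similarly a ratio of Na (plus K) to the total cation sum, and the two are plotted against conductivity on the Wilcox diagram.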

Keywords: shallow aquifer, deep aquifer, seasonal variation, hydrochemistry, Oban massif, Nigeria

Procedia PDF Downloads 647
28715 Modified Lot Quality Assurance Sampling (LQAS) Model for Quality Assessment of Malaria Parasite Microscopy and Rapid Diagnostic Tests in Kano, Nigeria

Authors: F. Sarkinfada, Dabo N. Tukur, Abbas A. Muaz, Adamu A. Yahuza

Abstract:

Appropriate Quality Assurance (QA) of parasite-based diagnosis of malaria to justify Artemisinin-based Combination Therapy (ACT) is essential for malaria programmes. In Low and Middle Income Countries (LMIC), resource constraints appear to be a major challenge in implementing the conventional QA system. We designed and implemented a modified LQAS model for QA of malaria parasite (MP) microscopy and RDT in a State Specialist Hospital (SSH) and a University Health Clinic (UHC) in Kano, Nigeria. The capacities of both facilities for MP microscopy and RDT were assessed before implementing the modified LQAS over a period of 3 months. Quality indicators comprising the quality of blood film preparation and staining, MP positivity rates, concordance rates, error rates (in terms of false positives and false negatives), sensitivity and specificity were monitored and evaluated. Seventy-one percent (71%) of the basic requirements for malaria microscopy were available in both facilities, with an absence of certified microscopists, SOPs and quality assurance mechanisms. A daily average of 16 to 32 blood samples was tested, with a blood film staining quality of >70% recorded in both facilities. Using microscopy, the MP positivity rates were 50.46% and 19.44% in SSH and UHC respectively, while with RDT the MP positivity rates were 45.83% and 22.78%. Higher concordance rates of 88.90% and 93.98% were recorded in SSH and UHC respectively using microscopy, while lower rates of 74.07% and 80.58% were recorded with RDT. In both facilities, error rates were higher with RDT than with microscopy. Sensitivity and specificity were higher with microscopy (95% and 84% in SSH; 94% in UHC) than with RDT (72% and 76% in SSH; 78% and 81% in UHC).
It could be feasible to implement an integrated QA model for MP microscopy and RDT using modified LQAS in malaria control programmes in Low and Middle Income Countries that might have resource constraints for parasite-based diagnosis of malaria to justify ACT treatment.
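The quality indicators monitored above follow the standard 2x2 confusion-table definitions against a reference result. The following sketch shows those computations with illustrative counts, not the study's data.

```python
def quality_indicators(tp, fp, fn, tn):
    """Standard diagnostic QA indicators from a 2x2 confusion table."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),          # true positives detected
        "specificity": tn / (tn + fp),          # true negatives detected
        "concordance": (tp + tn) / total,       # overall agreement rate
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

# Illustrative counts for one test method against the reference
q = quality_indicators(tp=45, fp=8, fn=3, tn=44)
```

In an LQAS setting, slides or RDT cassettes are sampled in small lots and these indicators decide whether a lot (and hence a facility's testing) passes the quality threshold.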

Keywords: malaria, microscopy, quality assurance, RDT

Procedia PDF Downloads 208
28714 Pharmacogenetics of P2Y12 Receptor Inhibitors

Authors: Ragy Raafat Gaber Attaalla

Abstract:

For cardiovascular illness, oral P2Y12 inhibitors, including clopidogrel, prasugrel, and ticagrelor, are frequently recommended. Each of these medications has advantages and disadvantages. In the absence of genotyping, it has been demonstrated that the stronger platelet aggregation inhibitors prasugrel and ticagrelor are superior to clopidogrel at preventing major adverse cardiovascular events following an acute coronary syndrome and percutaneous coronary intervention (PCI). Both, nevertheless, come with a higher risk of bleeding unrelated to coronary artery bypass grafting. As a prodrug, clopidogrel needs to be bioactivated, principally by the CYP2C19 enzyme. A CYP2C19 no-function allele, with diminished or absent CYP2C19 enzyme activity, is present in about 30% of people. Reduced exposure to the active metabolite of clopidogrel and reduced inhibition of platelet aggregation among clopidogrel-treated carriers of a CYP2C19 no-function allele likely contributed to the reduced efficacy of clopidogrel in clinical trials. Clopidogrel's pharmacogenetic evidence is strongest in conjunction with PCI, but evidence for other indications is growing. CYP2C19 genotype-guided antiplatelet medication following PCI is one of the most typical examples of clinical pharmacogenetic application. Guidance is available from expert consensus groups and regulatory bodies to assist with incorporating genetic information into P2Y12 inhibitor prescribing decisions. Here, we examine the data supporting the effects of genotype-guided P2Y12 inhibitor selection on clopidogrel response and outcomes and discuss tips for pharmacogenetic implementation. We also discuss procedures for using genotype data to choose P2Y12 inhibitor therapies, as well as unmet research needs. Finally, choosing a P2Y12 inhibitor that optimally balances the atherothrombotic and bleeding risks may be influenced by both clinical and genetic factors.

Keywords: inhibitors, cardiovascular events, coronary intervention, pharmacogenetic implementation

Procedia PDF Downloads 94
28713 Performance Analysis of Hierarchical Agglomerative Clustering in a Wireless Sensor Network Using Quantitative Data

Authors: Tapan Jain, Davender Singh Saini

Abstract:

Clustering is a useful mechanism in wireless sensor networks, helping to cope with scalability and data transmission problems. The basic aim of our research work is to provide efficient clustering using hierarchical agglomerative clustering (HAC). If the distance between the sensing nodes is calculated using their locations, then it is quantitative HAC. This paper compares the various agglomerative clustering techniques applied in a wireless sensor network using quantitative data. The simulations are done in MATLAB and the comparisons between the different protocols are made using dendrograms.
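Quantitative HAC on node locations can be illustrated as follows (the paper's simulations are in MATLAB; this is an equivalent sketch in Python with synthetic node coordinates). Different `method` arguments ("single", "complete", "average", "ward") correspond to the agglomerative techniques being compared, and the linkage matrix `Z` is what a dendrogram plots.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic sensor node locations in three spatial groups
rng = np.random.default_rng(1)
nodes = np.vstack([rng.normal((0, 0), 1, (10, 2)),
                   rng.normal((8, 8), 1, (10, 2)),
                   rng.normal((0, 8), 1, (10, 2))])

# Agglomerative merge history on Euclidean distances between node locations
Z = linkage(nodes, method="ward")

# Cut the hierarchy into 3 clusters (e.g. candidate cluster heads' groups)
labels = fcluster(Z, t=3, criterion="maxclust")
```

Swapping `method="ward"` for "single", "complete", or "average" and comparing the resulting dendrograms mirrors the protocol comparison described in the abstract.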

Keywords: routing, hierarchical clustering, agglomerative, quantitative, wireless sensor network

Procedia PDF Downloads 584
28712 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images

Authors: Sophia Shi

Abstract:

Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. The AKI patient mortality rate is high in the ICU, and the condition is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database and de-identified kidney images with corresponding patient records from the Beijing Hospital of the Ministry of Health were collected. Using data features including serum creatinine, among others, two numeric models using the MIMIC and Beijing Hospital data were built, and with the hospital ultrasounds, an image-only model was built. Convolutional neural networks (CNNs) were used, VGG and ResNet for the numeric data and ResNet for the image data, and they were combined into a hybrid model by concatenating the feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after a softmax layer. The hybrid model successfully predicted AKI: its highest AUROC was 0.953, with an accuracy of 90% and an F1-score of 0.91. This model can be implemented in urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient's admission, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage.
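The fusion step at the heart of the hybrid model, concatenating the two branches' features and reducing them to a binary softmax output, can be sketched in plain numpy. The dimensions and random weights below are illustrative stand-ins for the trained VGG/ResNet branches, not the paper's architecture details.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for feature vectors produced by the two trained branches
numeric_features = rng.normal(size=(1, 64))    # patient-record branch
image_features = rng.normal(size=(1, 128))     # ultrasound-image branch

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # numerically stable
    return e / e.sum(axis=1, keepdims=True)

# Concatenate feature maps into a single fused input
fused = np.concatenate([numeric_features, image_features], axis=1)  # (1, 192)

# Fully connected layer with ReLU, then a binary softmax head
h = np.maximum(0, fused @ rng.normal(size=(192, 32)))
probs = softmax(h @ rng.normal(size=(32, 2)))   # P(no AKI), P(AKI)
```

In the described model the fused vector additionally passes through another CNN block before the fully connected layers; this sketch keeps only the concatenate-then-classify skeleton.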

Keywords: Acute kidney injury, Convolutional neural network, Hybrid deep learning, Patient record data, ResNet, Ultrasound kidney images, VGG

Procedia PDF Downloads 116
28711 A Multivariate Analysis of Patent Price Variations in the Emerging United States Patent Auction Market: Role of Patent, Seller, and Bundling Related Characteristics

Authors: Pratheeba Subramanian, Anjula Gurtoo, Mary Mathew

Abstract:

Transactions of patents in emerging patent markets are gaining momentum. Pricing patents for a transaction, say a patent sale, remains a challenge. Patents vary in their pricing, with some fetching higher prices than others. The sale of patents in portfolios further complicates pricing, with multiple patents playing a role in the price of a bundle. In this paper, a set of 138 US patents sold individually as single-invention lots and 462 US patents sold in bundles of 120 portfolios are investigated to understand the dynamics of the selling prices of singletons and portfolios and their determinants. Firstly, price variations when patents are sold individually as singletons and in portfolios are studied. Multivariate statistical techniques are used for analysis both at the lot level and at the individual patent level. The results show portfolios fetching higher prices than singletons at the lot level. However, at the individual patent level, singletons show higher prices than the per-patent price of individual patent members within a portfolio. Secondly, to understand the price determinants, the effects of patent, seller, and bundling related characteristics on selling prices are studied separately for singletons and portfolios. The results show differences in the set of characteristics determining the prices of singletons and portfolios. The selling prices of singletons are found to depend on patent-related characteristics, unlike portfolios, whose prices depend on all three aspects: patent, seller, and bundling. The specific patent, seller and bundling characteristics influencing the selling price are discussed along with the implications.

Keywords: auction, patents, portfolio bundling, seller type, selling price, singleton

Procedia PDF Downloads 318