Search results for: psychometric validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1419

309 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery

Authors: Forouzan Salehi Fergeni

Abstract:

A brain-computer interface (BCI) system converts a person's movement intents into commands for action using brain signals such as electroencephalogram (EEG) signals. When left- or right-hand motions are imagined, distinct patterns of brain activity appear, which can be employed as BCI control signals. To improve BCI systems, effective and accurate techniques for increasing the classification accuracy of motor imagery (MI) based on EEG are greatly needed. Subject dependency and non-stationarity are two characteristics of EEG signals, so EEG signals must be processed effectively before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for denoising, and then an analysis-of-variance method is used to select the more appropriate and informative channels from the large set of available channels. After ranking the channels by their efficiency, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine (SVM), extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, in order to compare their performance in this application. Using a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The results demonstrate that the SVM classifier achieved the greatest classification accuracy, 97%, compared to the other approaches. The overall findings confirm that the suggested framework is reliable and computationally efficient for the construction of BCI systems and surpasses existing methods.
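The ANOVA ranking plus sequential forward selection loop described here maps naturally onto a few lines of code. Below is a minimal Python sketch using scikit-learn, assuming preprocessed trial features `X` and class labels `y`; the array shapes, placeholder data, and SVM settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels, n_feats = 200, 32, 8
X = rng.normal(size=(n_trials, n_channels, n_feats))   # placeholder EEG features
y = rng.integers(0, 2, size=n_trials)                  # left/right-hand labels

# 1) Rank channels by ANOVA F-score (averaged over that channel's features).
f_scores = np.array([f_classif(X[:, c, :], y)[0].mean() for c in range(n_channels)])
ranking = np.argsort(f_scores)[::-1]

# 2) Sequential forward selection over the ranked channels: keep a channel
#    only if it improves 10-fold cross-validated SVM accuracy.
selected, best_acc = [], 0.0
for ch in ranking:
    trial = selected + [ch]
    acc = cross_val_score(SVC(kernel="rbf"),
                          X[:, trial, :].reshape(n_trials, -1), y, cv=10).mean()
    if acc > best_acc:
        selected, best_acc = trial, acc

print(f"selected channels: {selected}, CV accuracy: {best_acc:.3f}")
```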

Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine

Procedia PDF Downloads 8
308 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made networks ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases (KDD) process model, which starts from the selection of the datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. After acquisition, the data were pre-processed. The major pre-processing activities for this study included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation tasks such as discretization. A total of 21,533 intrusion records were used for training the models. For validating the performance of the selected model, a separate set of 3,397 records was used for testing. To build a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using the J48 decision tree algorithm with 10-fold cross-validation and default parameter values showed the best classification accuracy. The model has a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset for classifying new instances as normal, DOS, U2R, R2L, or probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested towards an applicable system in the area of study.
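As a concrete illustration of the evaluation protocol described here, the following is a minimal Python sketch using scikit-learn stand-ins (DecisionTreeClassifier approximates WEKA's J48); the synthetic feature matrix is a placeholder, not the study's KDD records.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(24930, 20))              # placeholder feature vectors
y = rng.integers(0, 5, size=24930)            # normal, DOS, U2R, R2L, probe

# Hold out a test set, mirroring the 21,533 / 3,397 train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=3397, random_state=1)

for model in (DecisionTreeClassifier(), GaussianNB()):
    cv_acc = cross_val_score(model, X_tr, y_tr, cv=10).mean()   # 10-fold CV
    test_acc = model.fit(X_tr, y_tr).score(X_te, y_te)          # unseen data
    print(type(model).__name__, f"CV={cv_acc:.3f}", f"test={test_acc:.3f}")
```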

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 269
307 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capacity of an apron feeder delivering coal from a lining return port to a conveyor in the technology of mining thick coal seams with release onto a conveyor, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modeling, with validation in laboratory conditions and calculation of relative errors, was carried out. A method for calculating the capacity of an apron feeder based on a machine vision system, together with a simplified three-dimensional model of the examined measuring area, was proposed. The proposed method allows the volume of rock mass moved by an apron feeder to be measured using machine vision. This approach solves the problem of controlling the volume of coal produced by a feeder when working thick seams by longwall complexes with release onto a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical operations such as addition, subtraction, multiplication, and division. This simplifies software development and widens the range of microcontrollers and microcomputers suitable for calculating feeder capacity. A feature of the obstacle detection problem is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on the obstacle detection machine vision system. A sample fragment of obstacle detection at the moment the laser grid is distorted is demonstrated.
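Since the productivity calculation is said to use only the four basic arithmetic operations, a toy version is easy to sketch. The Python fragment below estimates volume and mass flow from a reconstructed height map; the grid spacing, coal density, and belt speed are invented assumptions, and the formula is an illustrative reading of the method, not the authors' exact apparatus.

```python
import numpy as np

cell = 0.05                                   # grid spacing, m (assumed)
heights = np.abs(np.random.default_rng(2).normal(0.1, 0.03, size=(40, 60)))

volume = cell * cell * heights.sum()          # m^3 over the measuring area
density = 850.0                               # loose coal density, kg/m^3 (assumed)
belt_speed, area_length = 1.0, 60 * cell      # m/s and m along the belt

# productivity in kg/s: the mass on the measured area leaves it in
# (area_length / belt_speed) seconds
productivity = volume * density / (area_length / belt_speed)
print(f"volume={volume:.4f} m^3, productivity={productivity:.1f} kg/s")
```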

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 84
306 An Assessment of Impact of Financial Statement Fraud on Profit Performance of Manufacturing Firms in Nigeria: A Study of Food and Beverage Firms in Nigeria

Authors: Wale Agbaje

Abstract:

The aim of this research study is to assess the impact of financial statement fraud on the profitability of selected Nigerian manufacturing firms covering the period 2002-2016. The specific objectives were to ascertain the effect of incorrect asset valuation on return on assets (ROA) and to ascertain the relationship between improper expense recognition and ROA. To achieve these objectives, a descriptive research design was used, while secondary data were collected from the financial reports of the selected firms and the website of the Securities and Exchange Commission. Analysis of covariance (ANCOVA) was applied, and the STATA II econometric package was used in the analysis of the data. Altman's model and the operating expenses ratio were adopted in the analysis of the financial reports to create a dummy variable for the selected firms from 2002-2016, and the parameters were validated using statistical techniques such as the t-test, coefficient of determination (R²), F-statistics, and the Wald chi-square. Two hypotheses were formulated and tested using the t-statistic at the 5% level of significance. The findings revealed a significant relationship between financial statement fraud and profitability in the Nigerian manufacturing industry. Incorrect asset valuation and improper expense recognition were each found to have a significant positive relationship with return on assets (ROA), which serves as a proxy for profitability. The implication is that distortion of asset valuation and expense recognition leads to decreasing profit in the long run in the manufacturing industry. The study therefore recommends that pragmatic policy options be taken in the manufacturing industry to effectively manage incorrect asset valuation and improper expense recognition in order to enhance industry performance in the country, and that the stemming of financial statement fraud be adequately built into the internal control systems of manufacturing firms for the effective running of the manufacturing industry in Nigeria.
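The dummy-variable construction via Altman's model lends itself to a brief illustration. Below is a minimal Python sketch: the classic Altman (1968) Z-score coefficients and the 1.81 distress cut-off are standard, but the ratio values, sample size, and the simple OLS stand-in for the study's ANCOVA are invented assumptions.

```python
import numpy as np
import statsmodels.api as sm

def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Classic Altman (1968) Z-score from five financial ratios."""
    return 1.2*wc_ta + 1.4*re_ta + 3.3*ebit_ta + 0.6*mve_tl + 1.0*sales_ta

rng = np.random.default_rng(3)
n = 150                                        # firm-year observations (placeholder)
ratios = rng.normal(0.3, 0.1, size=(n, 5))
z = altman_z(*ratios.T)
fraud_dummy = (z < 1.81).astype(float)         # distress zone flags likely manipulation
roa = 0.08 - 0.03*fraud_dummy + rng.normal(0, 0.02, n)  # toy ROA series

model = sm.OLS(roa, sm.add_constant(fraud_dummy)).fit()
print(model.summary())
```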

Keywords: Altman's model, improper expense recognition, incorrect asset valuation, return on assets

Procedia PDF Downloads 133
305 Kinematic Analysis of the Calf Raise Test Using a Mobile iOS Application: Validation of the Calf Raise Application

Authors: Ma. Roxanne Fernandez, Josie Athens, Balsalobre-Fernandez, Masayoshi Kubo, Kim Hébert-Losier

Abstract:

Objectives: The calf raise test (CRT) is used in rehabilitation and sports medicine to evaluate calf muscle function. For testing, individuals stand on one leg and rise onto their toes and back down to volitional fatigue. The newly developed Calf Raise application (CRapp) for iOS uses computer-vision algorithms enabling objective measurement of CRT outcomes. We aimed to validate the CRapp by examining its concurrent validity and agreement levels against laboratory-based equipment and by establishing its intra- and inter-rater reliability. Methods: CRT outcomes (i.e., repetitions, positive work, total height, peak height, fatigue index, and peak power) were assessed in thirteen healthy individuals (6 males, 7 females) on three occasions and on both legs using the CRapp, 3D motion capture, and force plate technologies simultaneously. Data were extracted from two markers: one placed immediately below the lateral malleolus and another on the heel. Concurrent validity and agreement measures were determined using intraclass correlation coefficients (ICC₃,ₖ), typical errors expressed as coefficients of variation (CV), and Bland-Altman methods to assess bias and precision. Reliability was assessed using ICC₃,₁ and CV values. Results: Validity of CRapp outcomes was good to excellent across measures for both markers (mean ICC ≥0.878), with precision plots showing good agreement and precision. CVs ranged from 0% (repetitions) to 33.3% (fatigue index) and were, on average, better for the lateral malleolus marker. Additionally, inter- and intra-rater reliability were excellent (mean ICC ≥0.949, CV ≤5.6%). Conclusion: These results confirm that the CRapp is valid and reliable within and between users for measuring CRT outcomes in healthy adults. The CRapp provides a tool to objectivise CRT outcomes in research and practice, aligning with recent advances in mobile technologies and their increased use in healthcare.
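For readers unfamiliar with the agreement statistics named here, the following is a minimal Python sketch of a Bland-Altman analysis and a typical-error CV for paired app-versus-laboratory measures; the paired values are invented placeholders, not study data, and the typical-error formula (SD of differences divided by √2) follows common sports-science practice.

```python
import numpy as np

rng = np.random.default_rng(4)
lab = rng.normal(25, 5, size=26)               # e.g., repetitions from force plate
app = lab + rng.normal(0.2, 0.8, size=26)      # CRapp-style measure of same trials

diff = app - lab
bias = diff.mean()
loa = bias - 1.96*diff.std(ddof=1), bias + 1.96*diff.std(ddof=1)

# typical error expressed as a coefficient of variation (%)
typical_error = diff.std(ddof=1) / np.sqrt(2)
cv = 100 * typical_error / np.concatenate([lab, app]).mean()

print(f"bias={bias:.2f}, 95% LoA=({loa[0]:.2f}, {loa[1]:.2f}), CV={cv:.1f}%")
```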

Keywords: calf raise test, mobile application, validity, reliability

Procedia PDF Downloads 144
304 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such a vehicle. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases, such as conversion from helicopter to aircraft mode and vice versa. This article presents a process for building a simplified tilt-rotor simulation model derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful for evaluating the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and account for the differences between helicopter and tilt-rotor during flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated using a high-fidelity simulation model implemented in the FlightLab environment. The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.
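Time-domain identification of a scheduled linear transfer function reduces, in its simplest form, to fitting a gain and a time constant against recorded input/output histories. The Python sketch below fits a first-order model K/(τs + 1) to synthetic step-response data; the model order, data, and bounds are illustrative assumptions, far simpler than the paper's scheduled multi-variable structure.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 500)
u = (t > 1).astype(float)                          # step on a reference input

def simulate(params, t, u):
    """Forward-Euler response of y' = (K*u - y)/tau."""
    K, tau = params
    y = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i-1]
        y[i] = y[i-1] + dt * (K * u[i-1] - y[i-1]) / tau
    return y

true_params = (2.0, 1.5)                           # stand-in for flight data
y_meas = simulate(true_params, t, u)
y_meas += np.random.default_rng(5).normal(0, 0.02, t.size)

fit = least_squares(lambda p: simulate(p, t, u) - y_meas,
                    x0=[1.0, 1.0], bounds=([0.1, 0.1], [10.0, 10.0]))
print(f"estimated K={fit.x[0]:.3f}, tau={fit.x[1]:.3f}")
```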

Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation

Procedia PDF Downloads 173
303 Development and Validation of the Circular Economy Scale

Authors: Yu Fang Chen, Jeng Fung Hung

Abstract:

This study aimed to develop a circular economy scale to assess the level of recognition of the circular economy among high-level executives in businesses. The circular economy is crucial for global ESG sustainable development and poses a challenge for corporate social responsibility. The aims of promoting the circular economy are to reduce resource consumption, move towards sustainable development, reduce environmental impact, maintain ecological balance, increase economic value, and promote employment. This study developed a 23-item Circular Economy Scale, which includes three subscales: "Understanding of the Circular Economy by Enterprises" (8 items), "Attitudes" (9 items), and "Behaviors" (6 items). A 5-point Likert scale was used to measure responses, with higher scores indicating higher levels of agreement among senior executives with regard to the circular economy. The study surveyed 105 senior executives and used a structural equation model (SEM) to determine the extent to which the measurement indicators captured the latent variables. The standardized factor loading of a measurement indicator needs to be higher than 0.7, and the average variance extracted (AVE), an index of convergent validity, should be greater than 0.5, or at least 0.45 to be acceptable. Out of the 23 items, 12 did not meet these standards and were removed, leaving 5, 3, and 3 items for the three subscales, respectively, all with factor loadings greater than 0.7. The AVE for all three subscales was greater than 0.45, indicating good construct validity. The Cronbach's α reliability values for the three subscales were 0.887, 0.787, and 0.734, respectively, and 0.860 for the total scale, all higher than 0.7, indicating good reliability. The Circular Economy Scale developed in this study measures three conceptual components that align with the theoretical framework of the literature review and demonstrates good reliability and validity. It can serve as a measurement tool for evaluating the degree of acceptance of the circular economy among senior executives in enterprises. In the future, this scale can be used as an evaluation tool by senior executives to further explore the circular economy's impact on sustainable development and to promote the circular economy and sustainable development.
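The reliability and convergent-validity statistics quoted here are straightforward to compute. Below is a minimal Python sketch of Cronbach's α from an item-score matrix and AVE from standardized factor loadings; the simulated Likert responses and the loading values are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of squared standardized loadings."""
    return float((loadings**2).mean())

rng = np.random.default_rng(6)
factor = rng.normal(size=(105, 1))                       # latent attitude score
scores = np.clip(np.rint(3 + factor + rng.normal(0, .7, (105, 5))), 1, 5)

print(f"alpha={cronbach_alpha(scores):.3f}")
print(f"AVE={ave(np.array([0.78, 0.81, 0.74, 0.72, 0.76])):.3f}")
```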

Keywords: circular economy, corporate social responsibility, scale development, structural equation model

Procedia PDF Downloads 52
302 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function, which are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations against seismic moment Mo for the modeled rupture area S, as well as the average slip Dave and the slip asperity area Sa, with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise-time.
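As an illustration of the scaling-relation check mentioned here, the short Python fragment below converts seismic moment to moment magnitude using the standard Hanks-Kanamori definition and compares rupture area against a self-similar S ∝ Mo^(2/3) relation; the prefactor is an assumed constant of the order used in empirical source-scaling studies, not a value from this paper.

```python
import numpy as np

def moment_magnitude(Mo_Nm: float) -> float:
    """Mw from seismic moment in N*m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (np.log10(Mo_Nm) - 9.1)

def self_similar_area_km2(Mo_Nm: float, k: float = 4.8e-11) -> float:
    """Generic S ~ Mo^(2/3) scaling; the prefactor k is an assumed constant."""
    return k * Mo_Nm ** (2.0 / 3.0)

for Mo in (5e19, 2e20, 1e21):                  # roughly Mw 7.1 to 7.9
    print(f"Mo={Mo:.1e} N*m  Mw={moment_magnitude(Mo):.2f}  "
          f"S~{self_similar_area_km2(Mo):.0f} km^2")
```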

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 122
301 Transcriptomics Analysis on Comparing Non-Small Cell Lung Cancer versus Normal Lung, and Early Stage Compared versus Late-Stages of Non-Small Cell Lung Cancer

Authors: Achitphol Chookaew, Paramee Thongsukhsai, Patamarerk Engsontia, Narongwit Nakwan, Pritsana Raugrut

Abstract:

Lung cancer is one of the most common malignancies and a primary cause of cancer death worldwide. Non-small cell lung cancer (NSCLC) is the main subtype, in which the majority of patients present with advanced-stage disease. Herein, we analyzed differentially expressed genes to find potential biomarkers for lung cancer diagnosis as well as prognostic markers. We used transcriptome data from our 2 NSCLC patients and public data (GSE81089) comprising 8 NSCLC and 10 normal lung tissues. Differentially expressed genes (DEGs) between NSCLC and normal tissue and between early-stage and late-stage NSCLC were analyzed with DESeq2. Pairwise comparison was used to find the DEGs, with false discovery rate (FDR) adjusted p-value ≤ 0.05 and |log2 fold change| ≥ 4 for NSCLC versus normal, and FDR adjusted p-value ≤ 0.05 with |log2 fold change| ≥ 2 for early versus late-stage NSCLC. Bioinformatic tools were used for functional and pathway analysis. Moreover, the top ten genes in each comparison group were verified by expression and survival analysis via GEPIA. We found 150 up-regulated and 45 down-regulated genes in NSCLC compared to normal tissues. Many immunoglobulin-related genes, e.g., IGHV4-4, IGHV5-10-1, IGHV4-31, IGHV4-61, and IGHV1-69D, were significantly up-regulated. Twenty-two genes were up-regulated and five genes down-regulated in late-stage compared to early-stage NSCLC. The top five DEGs were KRT6B, SPRR1A, KRT13, KRT6A, and KRT5. Keratin 6B (KRT6B) was the most significantly increased gene in late-stage NSCLC. From the GEPIA analysis, we concluded that IGHV4-31 and IGKV1-9 might be used as diagnostic biomarkers, while KRT6B and KRT6A might be used as prognostic biomarkers. However, further clinical validation is needed.
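The two-threshold DEG filter described here is a one-liner over a DESeq2-style results table; the Python/pandas sketch below shows the idea, with column names and toy rows assumed for illustration rather than taken from the study.

```python
import pandas as pd

results = pd.DataFrame({
    "gene":   ["IGHV4-31", "KRT6B", "GAPDH", "SPRR1A"],
    "log2FC": [5.2, 2.7, 0.1, 2.3],
    "padj":   [1e-6, 3e-4, 0.9, 0.01],
})

def filter_degs(df, fdr=0.05, lfc=4):
    """Keep genes with FDR-adjusted p <= fdr and |log2 fold change| >= lfc."""
    return df[(df["padj"] <= fdr) & (df["log2FC"].abs() >= lfc)]

tumor_vs_normal = filter_degs(results, lfc=4)   # NSCLC vs normal threshold
late_vs_early = filter_degs(results, lfc=2)     # late- vs early-stage threshold
print(tumor_vs_normal["gene"].tolist(), late_vs_early["gene"].tolist())
```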

Keywords: differentially expressed genes, early and late stages, gene ontology, non-small cell lung cancer, transcriptomics

Procedia PDF Downloads 85
300 Inheritance, Stability, and Validation of Provitamin A Markers in Striga Hermonthica-Resistant Maize

Authors: Fiston Masudi Tambwe, Lwanga Charles, Arfang Badji, Unzimai Innocent

Abstract:

The development of maize varieties combining provitamin A (PVA), high yield, and Striga resistance is an effective and affordable strategy to contribute to food security in sub-Saharan Africa, where maize is a staple food crop. There has been limited research on introgressing PVA genes into Striga-resistant maize genotypes. The objectives of this study were to: i) determine the mode of gene action controlling PVA carotenoid accumulation in Striga-resistant maize, ii) identify Striga-resistant maize hybrids with high PVA content and stable yield, and iii) validate the presence of PVA functional markers in offspring. Six elite, Striga-resistant inbred females were crossed with six high-PVA inbred males in a North Carolina Design II, and their offspring were evaluated in four environments following a 5x8 alpha lattice design with four hybrid checks. Results revealed that both additive and non-additive gene action control carotenoid accumulation, with a predominance of non-additive gene effects for PVA. Hybrids STR1004xCLHP0352 and STR1004xCLHP0046, identified as Striga-resistant because they supported fewer Striga plants, were the highest-yielding genotypes, with moderate PVA concentrations of 5.48 and 5.77 µg/g, respectively. However, those two hybrids were not stable in terms of yield across all environments. Hybrid STR1007xCLHP0046, in contrast, supported fewer Striga plants, had a yield of 4.52 t/ha and a PVA concentration of 4.52 µg/g, and was also stable. Gel-based marker systems for CrtRB1 and LCYE were used to screen the hybrids, and favorable alleles of the CrtRB1 primers were detected in 20 hybrids, confirming good levels of PVA carotenoids. Hybrids with favorable alleles of LCYE had the highest concentrations of non-PVA carotenoids. These findings will contribute to the development of high-yielding, PVA-rich maize varieties in Uganda.

Keywords: gene action, stability, striga resistance, provitamin A markers, beta-carotene hydroxylase 1, CrtRB1, beta-carotene, beta-cryptoxanthin, lycopene epsilon cyclase, LCYE

Procedia PDF Downloads 42
299 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis

Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera

Abstract:

Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where sounding data are lacking and conventional surveys are inaccessible. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied for deriving bathymetry from high-resolution multispectral satellite imagery. This study compares these two approaches by means of geographical error analysis for the site of Kankesanturai using WorldView-2 satellite imagery. The parameters of the non-linear inversion model were calibrated using the Levenberg-Marquardt method, while multiple linear regression was applied to calibrate the log-linear inversion model. In order to calibrate both models, Single Beam Echo Sounding (SBES) data from the study area were used as reference points. Residuals were calculated as the difference between the derived depth values and the validation echo-sounder bathymetry data, and the geographical distribution of model residuals was mapped. Spatial autocorrelation was calculated, and the performance of the bathymetric models was compared in terms of the resulting geographic errors. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation. This spatial error model is used to generate more reliable estimates of bathymetry by quantifying the autocorrelation of model error and incorporating it into an improved regression model. The log-linear model (R²=0.846) performs better than the non-linear model (R²=0.692). Finally, the spatial error models improved the bathymetric estimates derived from the log-linear and non-linear models to R²=0.854 and R²=0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges. The magnitude of the prediction error increases with depth for both the log-linear and the non-linear inversion models. The overall RMSE for the log-linear and non-linear inversion models was ±1.532 m and ±2.089 m, respectively.
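The log-linear calibration step amounts to regressing known depths on log-transformed band signals, in the spirit of Lyzenga-type models. Below is a minimal Python sketch with synthetic reflectances standing in for WorldView-2 bands and SBES depths; the attenuation constants and noise levels are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(7)
depth = rng.uniform(1, 20, size=300)                    # SBES reference depths, m
blue = np.exp(-0.08 * depth) + rng.normal(0, .01, 300)  # toy water-leaving signal
green = np.exp(-0.12 * depth) + rng.normal(0, .01, 300)

X = np.log(np.column_stack([blue, green]))              # log-linear predictors
model = LinearRegression().fit(X, depth)
pred = model.predict(X)

rmse = mean_squared_error(depth, pred) ** 0.5
print(f"R^2={r2_score(depth, pred):.3f}, RMSE={rmse:.3f} m")
```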

Keywords: log-linear model, multispectral, residuals, spatial error model

Procedia PDF Downloads 274
298 Awareness about Authenticity of Health Care Information from Internet Sources among Health Care Students in Malaysia: A Teaching Hospital Study

Authors: Renjith George, Preethy Mary Donald

Abstract:

The use of internet sources to retrieve health-care-related information among health care professionals has increased tremendously as access to the internet has been made easier through smartphones and tablets. Though huge amounts of data are available at a finger's touch, it is doubtful whether all the sources providing health care information adhere to evidence-based practice. The objectives of this survey were to study the prevalence of the use of internet sources for health care information, to assess the mind-set towards the authenticity of health care information available via internet sources, and to study the awareness of evidence-based practice in health care among medical and dental students at Melaka-Manipal Medical College. The survey was proposed as there are a limited number of studies reported in the literature, and this is the first of its kind in Malaysia. A cross-sectional survey was conducted among the medical and dental students of Melaka-Manipal Medical College. A total of 521 students, including medical and dental students in the clinical years of undergraduate study, participated in the survey. A questionnaire consisting of 14 questions was constructed based on data available from the published literature and a focus group discussion, and was pre-tested for validation. Data analysis was done using SPSS. The statistical analysis of the survey results showed that internet resources for health care information are preferred equally to conventional resources among health care students. Though the majority of the participants verify the authenticity of information from internet sources, a considerable percentage of candidates feel that all the information from the internet can be utilised for clinical decision making, or were not aware of the need to verify the authenticity of such information. 63.7% of the participants rely on evidence-based practice in health care for clinical decision making, while 34.2% were not aware of it. A minority of 2.1% did not agree with the concept of evidence-based practice. The observations of the survey reveal the increasing use of internet resources for health care information among health care students. The results warrant the need to move towards evidence-based practice in health care, as not all health care information available online is reliable. Health care personnel should be judicious when utilising the information from such resources for clinical decision making.

Keywords: authenticity, evidence based practice, health care information, internet

Procedia PDF Downloads 418
297 Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait

Authors: Srikanth Pandurangi, Basheer Mohammed, Nezar Al Sayegh

Abstract:

In order to meet the increasing demand for housing welfare for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project among the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care, and institutional uses. This paper examines the application of comprehensive transportation demand modeling work undertaken on the VISUM platform to understand future intracity and intercity travel distribution patterns in Kuwait. The models developed varied in their level of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate the demand in the residential neighborhoods of the city. This paper aims to offer a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. It presents the transportation demand modeling results used to inform the planning of a multi-modal transportation system for Al Mutlaa City. The paper also presents the household survey data collection efforts undertaken using GPS devices (a first in Kuwait) and notebook-computer-based digital survey forms for interviewing a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base-year model calibration and validation process.
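As background to the trip generation and distribution coefficients mentioned here, the Python sketch below shows a doubly-constrained gravity model balanced by iterative proportional fitting, the textbook core of the distribution step in four-step demand models; the zone totals, costs, and deterrence parameter are invented, and the paper's VISUM models are far more detailed.

```python
import numpy as np

prod = np.array([1200., 800., 500.])          # trips produced per zone
attr = np.array([900., 900., 700.])           # trips attracted per zone
cost = np.array([[5., 12., 20.],
                 [12., 4., 15.],
                 [20., 15., 6.]])             # interzonal travel times, min

deterrence = np.exp(-0.1 * cost)              # assumed exponential decay
trips = np.outer(prod, attr) * deterrence     # unbalanced seed matrix

for _ in range(50):                           # balance rows, then columns
    trips *= (prod / trips.sum(axis=1))[:, None]
    trips *= (attr / trips.sum(axis=0))[None, :]

print(np.round(trips, 1))                     # balanced origin-destination matrix
```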

Keywords: innovative methods in transportation data collection, integrated public transportation system, traffic forecasts, transportation modeling, travel behavior

Procedia PDF Downloads 193
296 Fam111b Gene Dysregulation Contributes to the Malignancy in Fibrosarcoma, Poor Clinical Outcomes in Poiktmp and a Low-cost Method for Its Mutation Screening

Authors: Cenza Rhoda, Falone Sunda, Elvis Kidzeru, Nonhlanhla P. Khumalo, Afolake Arowolo

Abstract:

Introduction: Mutations of the human FAM111B gene are associated with POIKTMP, a rare multi-organ fibrosing disease. Recent studies also reported the overexpression of FAM111B in specific cancers. However, the role of FAM111B in these pathologies, particularly fibrosarcoma, remains unknown. Materials and Methods: FAM111B RNA expression in selected cancer cell lines was assessed in silico and validated in vitro by qPCR and western blot in these cell lines and in skin fibroblasts derived from a South African family member affected by POIKTMP carrying the heterozygous FAM111B gene mutation NM_198947.4: c.1861T>G (p.Tyr621Asp or Y621D). The cellular function of FAM111B was also studied in HT1080 cells using various cell-based functional assays, and a simple, cost-effective PCR-RFLP method for genotyping/screening FAM111B gene mutations is described. Results: Expression studies showed upregulated FAM111B mRNA and protein in the cancer cells. High FAM111B expression with robust nuclear localization occurred in HT1080. Additionally, expression data and cell-based assays indicated that FAM111B upregulated cell migration, decreased apoptosis, and modulated cell proliferation. The FAM111B Y621D mutation showed similar effects on cell migration but minimal impact on apoptosis. FAM111B mRNA and protein expression were markedly downregulated (p ≤ 0.05) in the patient's skin-derived fibroblasts. Lastly, the PCR-RFLP method successfully genotyped the FAM111B Y621D mutation. Discussion: FAM111B is a cancer-associated nuclear protein; its modulation by mutations may enhance cell migration and proliferation and decrease apoptosis, as seen in cancers and POIKTMP/fibrosis, thus representing a viable therapeutic target in these disorders. Furthermore, the PCR-RFLP method could prove a valuable tool for FAM111B mutation validation or screening in resource-constrained laboratories.
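The principle behind PCR-RFLP genotyping, that a point mutation creates or abolishes a restriction-enzyme recognition site and so changes the digestion pattern of the PCR amplicon, can be illustrated in a few lines. In the Python sketch below, the amplicon, the T>G change, and the use of a GAATTC site are entirely hypothetical, chosen only to show the idea; they do not describe the study's actual assay.

```python
def digest(amplicon: str, site: str) -> list[int]:
    """Return fragment lengths after cutting at each occurrence of `site`."""
    fragments, start = [], 0
    pos = amplicon.find(site)
    while pos != -1:
        fragments.append(pos + len(site) // 2 - start)  # cut mid-site (toy rule)
        start = pos + len(site) // 2
        pos = amplicon.find(site, pos + 1)
    fragments.append(len(amplicon) - start)
    return fragments

wild_type = "ACCTGAATTCGGACTTAGCA"                 # hypothetical amplicon
mutant = wild_type.replace("GAATTC", "GAAGTC")     # T>G change destroys the site

for name, seq in (("WT", wild_type), ("mutant", mutant)):
    print(name, digest(seq, "GAATTC"))             # WT: two fragments; mutant: uncut
```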

Keywords: FAM111B, POIKTMP, cancer, fibrosis, PCR-RFLP

Procedia PDF Downloads 97
295 Factors Militating Against the Organization of Intramural Sport Programs in Secondary Schools: A Case Study of the Ekiti West Local Government Area of Ekiti State, Nigeria

Authors: Adewole Taiwo Adelabu

Abstract:

The study investigated the factors militating against the organization of intramural sports programs in secondary schools in Ekiti State, Nigeria. The purpose of the study was to identify the factors affecting the organization of sports in secondary schools and to proffer possible solutions. The study employed the inferential statistics of chi-square (χ²). Five research hypotheses were formulated. The population for the study was all the students in the government-owned secondary schools in Ekiti West Local Government of Ekiti State, Nigeria. The sample for the study was 60 students in three schools within the local government, selected through simple random sampling techniques. The instrument used for data collection was a questionnaire developed by the researcher. The instrument was presented to experts and academicians in the field of Human Kinetics and Health Education for construct and content validation. A reliability test was conducted involving 10 students who were not part of the study. A test-retest coefficient of 0.74 was obtained, attesting that the instrument was reliable enough for the study. The validated questionnaire was administered to the students in their various schools by the researcher with the help of two research assistants; the questionnaires were filled in and returned to the researcher immediately. The data collected were analyzed using the descriptive statistics of frequency count, percentage, and mean for the demographic data in section A of the questionnaire, while the inferential statistics of chi-square was used to test the hypotheses at the 0.05 alpha level. The results of the study revealed that personnel, funding, and scheduling (time) were significant factors affecting the organization of intramural sports programs among students in secondary schools in the Ekiti West Local Government Area of the state. The study also revealed that organizing intramural sports programs among secondary school students would improve and motivate students' participation in sports beyond the local level. However, facilities and equipment were not a significant factor affecting the organization of intramural sports among secondary school students in the Ekiti West Local Government Area.

Keywords: challenge, intramural sport, militating, programmes

Procedia PDF Downloads 121
294 Comparison with Mechanical Behaviors of Mastication in Teeth Movement Cases

Authors: Jae-Yong Park, Yeo-Kyeong Lee, Hee-Sun Kim

Abstract:

Purpose: This study aims to investigate the mechanical behaviors of mastication according to various tooth-movement cases. Three masticatory cases are considered: a general case and two tooth-movement cases. The general case has the normal arrangement of all teeth; in the two tooth-movement cases, following extraction of tooth no. 14, the molar has moved either halfway into or fully into the no. 14 tooth position. Materials and Methods: In order to analyze these cases, a 3-dimensional finite element (FE) model of the skull was generated from computed tomography images, 964 DICOM files of a 38-year-old male with normal occlusion. The FE model of the general occlusal case was used to develop the CAE procedure, which was then applied to the FE models of the other occlusal cases. Displacement controls according to the loading condition were applied effectively to simulate occlusal behaviors in all cases. From the FE analyses, the von Mises stress distribution of the skull and teeth was observed. The von Mises (effective) stress is widely used to characterize the absolute stress value regardless of stress direction and the yield characteristics of materials. Results: In the general occlusal case, high stress was distributed over the periodontal area of the mandible under the molar teeth when load was transmitted in the coronal-apical direction. Following the stress propagation from the teeth to the cranium in the general case, the stress distribution decreased as it propagated from the molar teeth to the infratemporal crest of the greater wing of the sphenoid bone and the lateral pterygoid plate. In the two tooth-movement cases, high stresses were observed over the periodontal area of the mandible under the moved molar teeth. Conclusion: The mechanical behaviors of the general case and the two tooth-movement cases during mastication were predicted and investigated, including qualitative validation. The displacement controls as the loading condition were applied effectively to simulate occlusal behaviors in the two molar tooth-movement cases.
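For reference, the effective-stress measure named here has a closed form. The Python fragment below computes the von Mises stress from the six independent Cauchy stress components; the sample values are arbitrary and serve only to illustrate the formula.

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises stress from normal (s*) and shear (t*) components, MPa."""
    return math.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                     + 3.0 * (txy**2 + tyz**2 + tzx**2))

# arbitrary stress state purely for illustration
print(f"{von_mises(40.0, 10.0, 5.0, 8.0, 2.0, 1.0):.1f} MPa")
```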

Keywords: cranium, finite element analysis, mandible, masticatory action, occlusal force

Procedia PDF Downloads 371
293 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations

Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa Kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan

Abstract:

Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection that integrates alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. The quality-check step ensured the integrity of the sequencing data, while the trimming process removed low-quality bases and adaptors. In the alignment step, reads were mapped to the reference genome, and the subsequent variant-calling step identified potential genetic variants. The annotation step provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. Overall, the computational pipeline presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention. Its modularity and flexibility enable customization and adaptation to various datasets and research settings, although further optimization and validation are necessary to enhance performance and applicability across diverse populations.
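A minimal Python sketch of how such stages could be chained is shown below. The tool names match those listed in the abstract, with flags kept to widely documented basics; the file names, reference genome, the samtools sort/index intermediates, and the omitted ANNOVAR call are assumptions for illustration, not the authors' exact invocation.

```python
import subprocess

ref, reads = "ref.fa", "sample.fastq"          # hypothetical inputs

subprocess.run(["fastqc", reads], check=True)  # quality-check report
subprocess.run(["java", "-jar", "trimmomatic.jar", "SE", reads, "trimmed.fastq",
                "SLIDINGWINDOW:4:20", "MINLEN:36"], check=True)  # trimming
subprocess.run(["bwa", "index", ref], check=True)

with open("aln.sam", "w") as sam:              # align reads to the reference
    subprocess.run(["bwa", "mem", ref, "trimmed.fastq"], stdout=sam, check=True)

subprocess.run(["samtools", "sort", "-o", "aln.bam", "aln.sam"], check=True)
subprocess.run(["samtools", "index", "aln.bam"], check=True)

with open("calls.vcf", "w") as vcf:            # pile up and call variants
    mpileup = subprocess.Popen(["bcftools", "mpileup", "-f", ref, "aln.bam"],
                               stdout=subprocess.PIPE)
    subprocess.run(["bcftools", "call", "-mv"], stdin=mpileup.stdout,
                   stdout=vcf, check=True)
# ANNOVAR annotation of calls.vcf would follow here.
```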

Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers

Procedia PDF Downloads 45
292 Replacement of the Distorted Dentition of the Cone Beam Computed Tomography Scan Models for Orthognathic Surgery Planning

Authors: T. Almutairi, K. Naudi, N. Nairn, X. Ju, B. Eng, J. Whitters, A. Ayoub

Abstract:

Purpose: At present, Cone Beam Computed Tomography (CBCT) imaging does not record dental morphology accurately, due to the scattering produced by metallic restorations and the reported magnification. The aim of this pilot study is the development and validation of a new method for replacing the distorted dentition of CBCT scans with the dental image captured by a digital intraoral scanner. Materials and Method: Six dried skulls with orthodontic brackets on the teeth were used in this study. Three intra-oral markers made of dental stone were constructed and attached to the orthodontic brackets. The skulls were CBCT scanned, and the occlusal surface was captured using the TRIOS® 3D intraoral scanner. Marker-based and surface-based registrations were performed to fuse the digital intra-oral scan (IOS) into the CBCT models. This produced a new composite digital model of the skull and dentition. The skulls were scanned again using the commercially accurate Faro® laser arm to produce the 'gold standard' model for the assessment of the accuracy of the developed method. The accuracy of the method was assessed by measuring the distance between the occlusal surfaces of the new composite model and the 'gold standard' 3D model of the skull and teeth. The procedure was repeated a week apart to measure the reproducibility of the method. Results: The results showed no statistically significant difference between the measurements on the first and second occasions. The absolute mean distance between the new composite model and the laser model ranged from 0.11 mm to 0.20 mm. Conclusion: The dentition of the CBCT scan can be accurately replaced with the dental image captured by the intra-oral scanner to create a composite model. This method will improve the accuracy of orthognathic surgical prediction planning, with the final goal of fabricating a physical occlusal wafer to guide orthognathic surgery and eliminate the need for dental impressions.
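Marker-based rigid registration of the intra-oral scan into the CBCT frame is classically solved in closed form. Below is a minimal Python sketch of the Kabsch/Procrustes solution for the best-fit rotation and translation between corresponding marker centroids; the three synthetic marker points stand in for the dental-stone markers, and this illustrates the class of technique, not the software used in the study.

```python
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)                  # cross-covariance of points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

rng = np.random.default_rng(8)
ios_markers = rng.normal(size=(3, 3))              # marker centroids, IOS frame
Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]
true_R = Q if np.linalg.det(Q) > 0 else -Q         # random proper rotation
true_t = np.array([1.0, 2.0, 3.0])
cbct_markers = ios_markers @ true_R.T + true_t     # same markers, CBCT frame

R, t = rigid_register(ios_markers, cbct_markers)
print(np.allclose(ios_markers @ R.T + t, cbct_markers))   # True
```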

Keywords: orthognathic surgery, superimposition, models, cone beam computed tomography

Procedia PDF Downloads 167
291 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site

Authors: Duncan Fraser

Abstract:

A 3-dimensional (3D) conceptual site model was developed on the Leapfrog Works® platform utilising a comprehensive historical dataset for a large former gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with varying hydraulic conductivities. A Newer Volcanics (basaltic) outcrop covered approximately half of the site and overlay a fractured Melbourne Formation (siltstone) bedrock outcropping over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified within both geological units in the subsurface, originating from multiple sources, including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development was undertaken to determine the horizontal and vertical extents of the coal tar in the subsurface and to assess the potential causality between the suspected sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and to achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-storey underground basements. To assess the financial liabilities associated with offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works. Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and enhance separation strategies, preventing the unnecessary treatment of material and reducing costs.

Keywords: 3D model, contaminated land, Leapfrog, remediation

Procedia PDF Downloads 106
290 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems open opportunities for new research, ranging from hardware design to the discovery of new imaging applications. A simulation system that can accurately model an imaging modality provides a platform for imaging developments that might be impractical in physical experimental systems due to expense, unnecessary radiation exposure, and technological difficulties. The aim of the present study is to validate a Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for the Siemens e-cam single-head gamma camera. Upon validation of the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging was carried out. Methods: A thyroid phantom was defined geometrically, comprising 2 lobes of 80 mm diameter, 1 hot spot, and 3 cold spots. This geometry accurately resembles the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired using the simulation model and in the actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the image contrast, the radioactivity distribution in the image, and the dimensions of the hot spot. The algorithm for each quantification is described in detail. The differences between estimated and actual values for both the simulation and the experimental setup were analyzed for the radioactivity distribution and the dimensions of the hot spot. Results: The results show that the difference between the contrast levels of the simulated and experimental images is within 2%. The difference in the total counts between the simulation and the actual study is 0.4%. The activity estimation results show that the relative difference between estimated and actual activity for the experimental and simulation studies is 4.62% and 3.03%, respectively. The deviation in the estimated diameter of the hot spot is 0.5 pixel for both the simulation and the experimental study. In conclusion, the comparisons show good agreement between the simulation and experimental data.
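The contrast and count comparisons described here reduce to simple region-of-interest arithmetic. The following Python sketch computes hot-spot contrast against background and the relative total-count difference between two planar images; the 128x128 Poisson arrays and the ROI location are synthetic stand-ins for the acquisitions.

```python
import numpy as np

rng = np.random.default_rng(9)
img_sim = rng.poisson(30.0, size=(128, 128)).astype(float)
img_exp = rng.poisson(30.5, size=(128, 128)).astype(float)
img_sim[60:68, 60:68] += 90.0                 # hot-spot region (assumed location)
img_exp[60:68, 60:68] += 92.0

def contrast(img, roi=(slice(60, 68), slice(60, 68))):
    """(ROI mean - background mean) / background mean."""
    mask = np.ones(img.shape, bool)
    mask[roi] = False
    return (img[roi].mean() - img[mask].mean()) / img[mask].mean()

rel_count_diff = abs(img_sim.sum() - img_exp.sum()) / img_exp.sum()
print(f"contrast: sim={contrast(img_sim):.3f}, exp={contrast(img_exp):.3f}; "
      f"total-count diff={100*rel_count_diff:.2f}%")
```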

Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 250
289 Health Risk Assessment of Exposing to Benzene in Office Building around a Chemical Industry Based on Numerical Simulation

Authors: Majid Bayatian, Mohammadreza Ashouri

Abstract:

The release of hazardous chemicals is one of the major problems for office buildings in the chemical industry, and environmental risks are therefore inherent to these environments. The adverse health effects of airborne concentrations of benzene have been a matter of significant concern, especially in oil refineries. The chronic and acute adverse health effects caused by benzene exposure have attracted wide attention. Acute exposure to benzene through inhalation can cause headaches, dizziness, drowsiness, and irritation of the skin. Chronic exposure has been reported to cause aplastic anemia and leukemia in occupational settings. The association between chronic occupational exposure to benzene and the development of aplastic anemia and leukemia has been documented by several epidemiological studies. Numerous research works have investigated benzene emissions, determined benzene concentrations at different locations of refinery plants, and reported considerable health risks. The high cost of industrial control measures requires justification through lifetime health risk assessment of exposed workers and the public. In the present study, a Computational Fluid Dynamics (CFD) model is proposed to assess the exposure risk of an office building near a refinery due to its release of benzene. For the simulation, GAMBIT, FLUENT, and CFD-Post software were used as pre-processor, processor, and post-processor, respectively, and the model was validated by comparison with experimental results for benzene concentration and wind speed. The model validation results showed good agreement, so the model can be used for health risk assessment. The simulation and risk assessment results showed that benzene can disperse to a nearby office building and that the exposure risk is unacceptable. According to the results of this study, a validated CFD model could be very useful to decision-makers for control measures and could support emergency planning for probable accidents. This model can also be used to assess exposure in various types of accidents, as well as to other pollutants such as toluene, xylene, and ethylbenzene, under different atmospheric conditions.
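The lifetime-risk arithmetic that such assessments feed into is short enough to show directly. Below is a minimal Python sketch using the standard EPA-style chronic daily intake (CDI) and inhalation risk formulas; every exposure parameter is an invented assumption, and the slope factor is a commonly cited value for benzene rather than one taken from this paper.

```python
benzene_conc = 0.05        # modeled indoor air concentration, mg/m^3 (assumed)
inhalation_rate = 20.0     # m^3/day
exposure_days = 250 * 30   # working days/year * years of employment (assumed)
body_weight = 70.0         # kg
lifetime_days = 70 * 365   # averaging time for carcinogens

cdi = (benzene_conc * inhalation_rate * exposure_days
       / (body_weight * lifetime_days))            # mg/(kg*day)

slope_factor = 2.73e-2     # benzene inhalation slope factor, (mg/kg/day)^-1
risk = cdi * slope_factor  # incremental lifetime cancer risk

print(f"CDI={cdi:.2e} mg/kg/day, lifetime cancer risk={risk:.2e}")
print("exceeds 1e-6 threshold" if risk > 1e-6 else "below 1e-6 threshold")
```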

Keywords: health risk assessment, office building, benzene, numerical simulation, CFD

Procedia PDF Downloads 102
288 Development and Validation of a Coronary Heart Disease Risk Score in Indian Type 2 Diabetes Mellitus Patients

Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad

Abstract:

Diabetes in India is growing at an alarming rate, and the complications caused by it need to be controlled. Coronary heart disease (CHD) is one of those complications, and its prediction is discussed in this study. India has the second-largest number of diabetes patients in the world. To the best of our knowledge, there is no CHD risk score for Indian type 2 diabetes patients. Any form of CHD was taken as the event of interest. A sample of 750 patients was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. The collected variables include sex, age, height, weight, body mass index (BMI), fasting blood sugar (BSF), post-prandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high-density lipoprotein (HDL), low-density lipoprotein (LDL), very-low-density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity, and history of CHD. Predictive risk scores for CHD events were designed by Cox proportional hazards regression. Model calibration and discrimination were assessed using the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. Overfitting and underfitting of the model were checked by applying regularization techniques, and the best method was selected among ridge, lasso, and elastic net regression. Youden's index was used to choose the optimal cut-off point from the scores. The five-year probability of CHD was predicted by both the survival function and a two-state Markov chain model, and the better technique was identified. The risk scores for CHD developed here can be calculated by doctors and patients for the self-management of diabetes. Furthermore, the five-year probabilities can be used to forecast and monitor the condition of patients.
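To make the risk-score machinery concrete, the Python sketch below fits a Cox proportional hazards model with the lifelines library and reads a five-year CHD probability off the survival function; the toy data frame, covariate choice, and sample values are illustrative, not the study's cohort.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(10)
n = 750
df = pd.DataFrame({
    "age": rng.normal(55, 8, n),
    "hba1c": rng.normal(8.0, 1.5, n),
    "sbp": rng.normal(135, 15, n),
    "time": rng.exponential(8.0, n),          # years to CHD or censoring
    "chd": rng.integers(0, 2, n),             # 1 = CHD event observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="chd")

patient = df.iloc[[0]].drop(columns=["time", "chd"])
surv_5y = cph.predict_survival_function(patient, times=[5.0]).iloc[0, 0]
print(f"5-year CHD probability: {1 - surv_5y:.3f}")
```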

Keywords: coronary heart disease, Cox proportional hazards regression, ROC curve, type 2 diabetes mellitus

Procedia PDF Downloads 191
287 Influential Parameters in Estimating Soil Properties from Cone Penetrating Test: An Artificial Neural Network Study

Authors: Ahmed G. Mahgoub, Dahlia H. Hafez, Mostafa A. Abu Kiefa

Abstract:

The Cone Penetration Test (CPT) is a common in-situ test which generally investigates a much greater volume of soil, more quickly, than is possible with sampling and laboratory tests. It therefore has the potential to realize cost savings and to assess soil properties rapidly and continuously. The principal objective of this paper is to demonstrate the feasibility and efficiency of using artificial neural networks (ANNs) to predict the soil angle of internal friction (Φ) and the soil modulus of elasticity (E) from CPT results, considering the uncertainties and non-linearities of the soil. In addition, ANNs are used to study the influence of different parameters and to recommend which should be included as input parameters to improve the prediction. Neural networks discover relationships in the input data sets through the iterative presentation of the data and the intrinsic mapping characteristics of neural topologies. The General Regression Neural Network (GRNN), one of the more powerful neural network architectures, is utilized in this study. A large amount of field and experimental data, including CPT results, plate load tests, direct shear box tests, grain size distributions, and calculated overburden pressures, was obtained from a large project in the United Arab Emirates. These data were used for the training and validation of the neural network. A comparison was made between the results obtained from the ANN approach and some common traditional correlations that predict Φ and E from CPT results, with respect to the actual measurements in the collected data. The results show that the ANN is a very powerful tool. Very good agreement was obtained between the results estimated by the ANN and the actual measurements, in comparison with other correlations available in the literature. The study recommends some easily available parameters that should be included in the estimation of the soil properties to improve the prediction models. It is shown that the use of the friction ratio in the estimation of Φ, and the use of the fines content in the estimation of E, considerably improve the prediction models.
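A GRNN is, at its core, a Nadaraya-Watson kernel-weighted average of training targets (Specht's formulation), which makes a from-scratch sketch short. The Python fragment below predicts a toy friction angle from CPT-like features; the data, feature meanings, and smoothing bandwidth are invented for illustration.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN output: Gaussian-kernel weighted mean of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))        # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)      # summation layer / normalization

rng = np.random.default_rng(11)
X = rng.uniform(0, 1, size=(200, 3))          # e.g., qc, friction ratio, depth
phi = 28 + 10*X[:, 0] - 4*X[:, 1] + rng.normal(0, 0.5, 200)   # toy Φ, degrees

X_new = rng.uniform(0, 1, size=(5, 3))
print(np.round(grnn_predict(X, phi, X_new), 2))
```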

Keywords: angle of internal friction, cone penetrating test, general regression neural network, soil modulus of elasticity

Procedia PDF Downloads 398
286 Transformation to M-Learning at the Nursing Institute in the Armed Force Hospital Alhada, in Saudi Arabia Based on Activity Theory

Authors: Rahimah Abdulrahman, A. Eardle, Wilfred Alan, Abdel Hamid Soliman

Abstract:

With the rapid development of technology and advances in learning technologies, m-learning has begun to occupy a great part of our lives. The pace of modern life, together with the need for continuous learning, gave rise to the mobile learning (m-learning) concept. In 2008, Saudi Arabia requested a national plan for the adoption of information technology (IT) across the country. Part of the recommendations of this plan concerns the implementation of mobile learning (m-learning) as well as its prospective applications to higher education within the Kingdom of Saudi Arabia. The overall aim of the research is to explore the main issues that impact the deployment of m-learning in nursing institutes in Saudi Arabia, at the Armed Force Hospitals (AFH), Alhada, in order to develop a generic model that enables and assists the educational policy makers and implementers of m-learning to comprehend and treat those issues effectively. Specifically, the research will explore the concept of m-learning; identify and analyse the main organisational, technological and cultural issues that relate to the adoption of m-learning; develop a model of m-learning; investigate the perception of the students of the nursing institutes towards the use of m-learning technologies for their nursing diploma programmes, based on their experiences; conduct a validation of the m-learning model with the nursing institute of the AFH, Alhada in Saudi Arabia; and evaluate the research project as a learning experience and as a contribution to the body of knowledge. Activity Theory (AT) will be adopted for the study because it provides a conceptual framework that engenders an understanding of the structure, development and context of computer-supported activities. The study will adopt a set of data collection methods which engage nursing students in a quantitative survey, while nurse teachers are engaged through in-depth qualitative studies to obtain first-hand information about the organisational, technological and cultural issues that impact the deployment of m-learning. The original contribution will be a model for developing m-learning material for classroom-based learning in the nursing institute that can have a general application.

Keywords: activity theory (AT), mobile learning (m-learning), nursing institute, Saudi Arabia (SA)

Procedia PDF Downloads 329
285 The Development of a Cyber Violence Measurement Tool for Youths: A Multi-Reporting of Ecological Factors

Authors: Jong-Hyo Park, Eunyoung Choi, Jae-Yeon Lim, Seon-Suk Lee, Yeong-Rong Koo, Ji-Ung Kwon, Kyung-Sung Kim, Jong-Ik Lee, Juhan Park, Hyun-Kyu Lee, Won-Kyoung Oh, Jisang Lee, Jiwon Choe

Abstract:

Due to COVID-19, cyber violence among youths has soared as they spend more time online than before. Despite these deepening concerns, measurement tools that can assess individual youths' vulnerability to cyber violence remain insufficient. Existing tools give little consideration to the various factors related to cyber violence among youths. Most are self-report questionnaires, and adolescents' self-reports tend to underestimate harmful behavior and overestimate the experience of harm. Therefore, this study aims to develop a multi-report measurement tool for youths that can reliably measure individuals' ecological factors related to cyber violence. A literature review identified factors related to cyber violence, from which the questions were constructed. The face validity of the questions was confirmed through focus group interviews, and exploratory and confirmatory factor analyses (N=671) were conducted for statistical validation. The resulting multi-report measurement tool comprises 161 questions across six domains: online behavior, cyber violence awareness, victimization-perpetration-witness experience, coping efficacy (individuals, peers, teachers, and parents), psychological characteristics, and pro-social capabilities. In addition to the youth respondent's self-report, the tool includes reports on the respondent by peers, teachers, and parents, making it possible to reliably measure the ecological factors of individual youths who are vulnerable or highly resistant to cyber violence. In schools, teachers could refer to the measurement results when guiding students, to better understand their cyber violence conditions and assess their pro-social capabilities. With the measurement results, teachers and police officers could detect perpetrators or victims and intervene immediately. In addition, the tool could be used to analyze the effects of cyber violence prevention and intervention programs and to draw appropriate suggestions.
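
As an illustration of the statistical validation step, the sketch below runs an exploratory factor analysis with the open-source factor_analyzer package, extracting six factors to mirror the tool's six domains. The 24-item random response matrix is a stand-in for the real survey data (the actual instrument has 161 questions), so the printed loadings are placeholders rather than the study's results.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

# Placeholder item-response matrix: 671 respondents (as in the study) by
# 24 hypothetical Likert items scored 1-5.
rng = np.random.default_rng(seed=42)
responses = pd.DataFrame(rng.integers(1, 6, size=(671, 24)),
                         columns=[f"item_{i+1}" for i in range(24)])

# Kaiser-Meyer-Olkin test of sampling adequacy before factoring
_, kmo_total = calculate_kmo(responses)
print(f"Overall KMO: {kmo_total:.2f}")

# Exploratory factor analysis, extracting six factors to mirror
# the tool's six domains
fa = FactorAnalyzer(n_factors=6, rotation="varimax")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(2))
```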

Keywords: adolescents, cyber violence, cyber violence measurement tool, measurement tool, multi-report measurement tool, youths

Procedia PDF Downloads 79
284 Optimisation of Stored Alcoholic Beverage Joufinai with Reverse Phase HPLC Method and Its Antioxidant Activities: North-East India

Authors: Dibakar Chandra Deka, Anamika Kalita Deka

Abstract:

Fermented alcoholic beverage production has a long-standing place among the tribal communities of North-East India. This traditional fermentation practice is followed by the Ahom, Dimasa, Nishi, Miri, Bodo, and Rabha tribes of the region. Among them, the Bodo tribes not only prepare the fermented alcoholic beverage but also store it for various periods, such as 3, 6, 9, 12, and 15 months. They prepare the alcoholic beverage Jou (rice beer) by fermenting Oryza sativa with the traditional yeast culture Amao, in which Saccharomyces cerevisiae is the dominant strain. Dongphangrakep (Scoparia dulcis), Mwkhna (Clerodendrum viscosum), Thalir (Musa balbisiana), and Khantal Bilai (Ananas comosus) are the main plants used for Amao preparation. The stored Jou is known as Joufinai; to prepare it, the fermentation mixture (rice and Amao) is stored under anaerobic conditions. Using a simple, reproducible, solution-based colorimetric method, we observed a progressive increase in alcohol content from 11.79 ± 0.010 (%, v/v) at 3 months of storage to 15.48 ± 0.070 (%, v/v) at 15 months. A positive linear correlation was also observed between pH and ethanol content over storage, with a correlation coefficient of 0.981. Here, we optimised the detection of changes in the constituents of Joufinai during storage using a reverse-phase HPLC method, finding acetone, ethanol, acetic acid, and glycerol as the main constituents. Very good correlations were observed between storage period (3 to 15 months) and the constituents. An increase in glycerol content with storage was also detected; hence, Joufinai could serve as a precursor of the above-stated compounds. By the DPPH radical scavenging method, antioxidant activity increased from 0.056 ± 2.80 mg/mL (in ascorbic acid equivalents) for the 3-month-old beverage to 0.078 ± 5.33 mg/mL for the 15-month-old beverage. Through this study, we therefore aimed to scientifically validate the storage procedure used by the Bodos in Joufinai production and to convert their traditional alcoholic beverage into a commercial commodity.
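
A minimal sketch of the correlation analysis is shown below. Only the 3- and 15-month ethanol values are given in the abstract, so the intermediate points here are hypothetical placeholders, and the reported coefficient of 0.981 refers to pH versus ethanol rather than to these illustrative numbers.

```python
from scipy import stats

# Storage period (months) vs. ethanol content (%, v/v). Only the 3- and
# 15-month values appear in the abstract; the intermediate points are
# hypothetical placeholders inserted for illustration.
months = [3, 6, 9, 12, 15]
ethanol = [11.79, 12.85, 13.70, 14.60, 15.48]

r, p_value = stats.pearsonr(months, ethanol)
print(f"r = {r:.3f}, p = {p_value:.4f}")
```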

Keywords: Amao, correlation, beverage, joufinai

Procedia PDF Downloads 289
283 A Comparative Analysis of Innovation Maturity Models: Towards the Development of a Technology Management Maturity Model

Authors: Nikolett Deutsch, Éva Pintér, Péter Bagó, Miklós Hetényi

Abstract:

Strategic technology management has emerged and evolved in parallel with strategic management paradigms. It focuses on the opportunity for organizations, operating mainly in technology-intensive industries, to explore and exploit technological capabilities upon which competitive advantage can be built. As strategic technology management spans multiple functions within an organization, requires broad and diversified knowledge, and must be developed and implemented in line with business objectives to enable a firm’s profitability and growth, excellence in it offers organizations unique opportunities to build a successful future. Accordingly, a framework supporting the evaluation of management's technological readiness level can contribute significantly to organizational competitiveness through a better understanding of strategic-level capabilities and operational deficiencies. In the last decade, several innovation maturity assessment models have appeared and become established management tools that can serve as references for future practical approaches used by corporate leaders, strategists, and technology managers to understand and manage technological capabilities and capacities. The aim of this paper is to provide a comprehensive review of state-of-the-art innovation maturity frameworks, to investigate the critical lessons learned from their application, to identify the similarities and differences among the models, and to distil the main aspects and elements valid for the field and the critical functions of technology management. To this end, a systematic literature review was carried out covering the 27 most widely known innovation maturity models, based on the relevant papers and articles published in highly ranked international journals, drawn from four relevant digital sources. Key findings suggest that, despite the diversity of the models, there is still room for improvement regarding a common understanding of innovation typologies, full coverage of innovation capabilities, and a generalizable approach to the validation and practical applicability of the models' structure and content. Furthermore, the paper proposes an initial structure for the maturity assessment of technological capacities and capabilities, i.e., technology identification, technology selection, technology acquisition, technology exploitation, and technology protection, as covered by strategic technology management.

Keywords: innovation capabilities, innovation maturity models, technology audit, technology management, technology management maturity models

Procedia PDF Downloads 26
282 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process typically uses word sequences known as N-grams, and documents belonging to the same category are expected to share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs local sequence alignment to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative feature extraction method for text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag-of-Words approach, with term frequency-inverse document frequency (TF-IDF) as the score representing the occurrence of tokens in each document. To extract features for classification, four experiments were conducted: the first used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset; the remaining 80%, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than N-gram-based feature extraction. These results were confirmed on the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second-best results were obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and, finally, unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
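
For reference, a minimal word-level Smith-Waterman scorer is sketched below. The abstract does not state the scoring scheme, so the match, mismatch, and gap values are assumptions, and the two snippets are invented examples rather than records from the annotated dataset.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Local alignment score between token sequences a and b
    (Smith-Waterman dynamic programming with a linear gap penalty)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # DP score matrix, floored at 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Word-level similarity between two snippets, as a stand-in for comparing
# a new document against exemplars of a category.
doc = "patient has morbid obesity and hypertension".split()
ref = "history of morbid obesity with diabetes".split()
print(smith_waterman(doc, ref))  # higher score = stronger local overlap
```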

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 271
281 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model

Authors: Tanu Khanuja, Harikrishnan N. Unni

Abstract:

The head is the most vulnerable part of the human body, and head impacts may cause severe, life-threatening injuries. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful in understanding the injury mechanism. The majority of physical damage to living tissue is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating the intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with major segments, namely the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull, from patient-specific MRI (magnetic resonance imaging). Semi-automatic segmentation of the head is performed using AMIRA software to capture the finer grooves of the brain. Maintaining high accuracy requires a large number of mesh elements, which in turn increases computational time; therefore, mesh optimization has also been performed using tetrahedral elements. In addition, the model is validated against experimental results from the literature. Hard tissue such as the skull is modeled as elastic, whereas soft tissue such as the brain is modeled with a viscoelastic Prony series material model. This paper aims to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and the intracranial pressure distribution due to impacts on the different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissue is discussed in detail. The paper finds that a back impact is more injurious to the head overall than impacts at the other sites. The present work should help in understanding the mechanism of traumatic brain injury more effectively.
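
The Prony series referred to above expresses the relaxation shear modulus as G(t) = G_inf + Σᵢ Gᵢ·exp(−t/τᵢ). The sketch below evaluates it for a hypothetical two-term fit; the paper's actual brain-tissue coefficients are not given in the abstract, so the values here are illustrative only.

```python
import numpy as np

def prony_shear_modulus(t, g_inf, terms):
    """Relaxation shear modulus G(t) = g_inf + sum_i G_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)
    return g_inf + sum(g_i * np.exp(-t / tau_i) for g_i, tau_i in terms)

# Hypothetical two-term Prony fit for brain tissue (moduli in kPa,
# relaxation times in seconds); not the coefficients used in the paper.
g_inf = 2.5
terms = [(5.0, 0.01), (3.0, 0.1)]
for t in (0.0, 0.005, 0.05, 0.5):
    print(f"G({t:>5}) = {prony_shear_modulus(t, g_inf, terms):.2f} kPa")
```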

Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress

Procedia PDF Downloads 138
280 Determination of Bromides, Chlorides and Fluorides in Case of Their Joint Presence in Ion-Conducting Electrolyte

Authors: V. Golubeva, O. Vakhnina, I. Konopkina, N. Gerasimova, N. Taturina, K. Zhogova

Abstract:

To improve chemical current sources, ion-conducting electrolytes based on lithium halides (LiCl-KCl, LiCl-LiBr-KBr, LiCl-LiBr-LiF) are being developed. Chemical analytical methods for the determination of halides are needed to control the electrolyte production technology. The methods of classical analytical chemistry are of interest, as they are characterized by high accuracy, but applying them here is difficult because the halides have similar chemical properties. The objective of this work is to develop a titrimetric method for determining the content of bromides, chlorides, and fluorides in their joint presence in an ion-conducting electrolyte. In the developed method, fluorides are determined by dissolving the electrolyte sample in dilute HCl and titrating with La(NO₃)₃ solution, with potentiometric indication of the equivalence point using a fluoride ion-selective electrode as the sensor; chlorides and bromides do not form sparingly soluble compounds with La and do not interfere. To determine bromides, the sample is dissolved in dilute H₂SO₄, and the bromides are oxidized by a KIO₃ solution to Br₂, which is removed from the reaction zone by boiling. The excess KIO₃ is titrated iodometrically, and the bromide content is calculated from the amount of KIO₃ consumed in oxidizing Br₂; chlorides and fluorides are not oxidized by KIO₃ and do not interfere. To determine chlorides, the sample is dissolved in dilute HNO₃, and the total content of chlorides and bromides is determined by visual mercurometric titration with a diphenylcarbazone indicator; fluorides do not form a sparingly soluble compound with mercury and do not interfere. The chloride content is then calculated taking into account the bromide content of the electrolyte sample. The developed method was validated by analyzing an internal reference material with known chloride, bromide, and fluoride contents. The method determines chlorides, bromides, and fluorides in their joint presence in an ion-conducting electrolyte within the following ranges and with the following relative total errors (δ): bromides from 60.0 to 65.0%, δ = ±2.1%; chlorides from 8.0 to 15.0%, δ = ±3.6%; fluorides from 5.0 to 8.0%, δ = ±1.5%. The method can be applied to electrolytes and mixtures containing the chlorides, bromides, and fluorides of alkali metals (K, Na, Li) and their mixtures.
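
The bromide determination is a back-titration: the KIO₃ consumed equals the total added minus the excess found iodometrically. The sketch below shows that arithmetic; because the abstract does not state the reaction stoichiometry, the moles of Br⁻ oxidised per mole of KIO₃ are left as an explicit user-supplied assumption (eq_factor), and all numeric inputs are illustrative rather than taken from the paper.

```python
def bromide_mass_fraction(c_kio3, v_total_ml, v_excess_ml, sample_mass_g,
                          eq_factor, molar_mass_br=79.904):
    """Bromide content (%, w/w) from a back-titration: the KIO3 actually
    consumed by Br- is the total added minus the excess found iodometrically.
    eq_factor = mol Br- oxidised per mol KIO3 (assumed, since the abstract
    does not state the reaction stoichiometry)."""
    n_consumed = c_kio3 * (v_total_ml - v_excess_ml) / 1000.0  # mol KIO3 used
    m_br = n_consumed * eq_factor * molar_mass_br              # g bromide
    return 100.0 * m_br / sample_mass_g

# Illustrative numbers only, chosen to land in the stated 60.0-65.0% range:
print(bromide_mass_fraction(c_kio3=0.05, v_total_ml=25.0, v_excess_ml=18.8,
                            sample_mass_g=0.20, eq_factor=5.0))  # ~61.9
```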

Keywords: bromides, chlorides, fluorides, ion-conducting electrolyte

Procedia PDF Downloads 105