Search results for: graphics processing units
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5129

389 Long Short-Term Memory Stream Cruise Control Method for Automated Drift Detection and Adaptation

Authors: Mohammad Abu-Shaira, Weishi Shi

Abstract:

Adaptive learning, a commonly employed solution to drift, involves updating predictive models online during their operation so that they react to concept drifts, making it a critical component and natural extension of online learning systems that learn incrementally from each example. This paper introduces LSTM-SCCM (Long Short-Term Memory Stream Cruise Control Method), a drift-adaptation-as-a-service framework for online learning. LSTM-SCCM automates drift adaptation through prompt detection, drift magnitude quantification, dynamic hyperparameter tuning, short-term optimization and model recalibration for immediate adjustments, and, when necessary, long-term model recalibration for deeper improvements in model performance. LSTM-SCCM is incorporated into a suite of cutting-edge online regression models, and their performance is assessed across various types of concept drift using diverse datasets with varying characteristics. The findings demonstrate that LSTM-SCCM represents a notable advancement in both model performance and efficacy in handling concept drift occurrences. LSTM-SCCM stands out as the sole framework adept at effectively tackling concept drift within regression scenarios. Its proactive approach to drift adaptation distinguishes it from conventional reactive methods, which typically rely on retraining after drifts have already caused significant degradation in model performance. Additionally, LSTM-SCCM employs an in-memory approach combined with the Self-Adjusting Memory (SAM) architecture to enhance real-time processing and adaptability. The framework incorporates variable thresholding techniques and does not assume any particular data distribution, making it well suited to high-dimensional datasets and large-scale data. Our experiments, which cover abrupt, incremental, and gradual drifts across both low- and high-dimensional datasets with varying noise levels, applied to four state-of-the-art online regression models, demonstrate that LSTM-SCCM is versatile and effective, rendering it a valuable solution for online regression models facing concept drift.
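
The abstract does not include the framework's implementation, but the detect/quantify/retune loop it describes can be sketched in a few lines. Everything below (window sizes, the variable-threshold rule, the learning-rate retuning) is a hypothetical illustration, not the authors' LSTM-SCCM code:

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_drift(ref_err, recent_err, k=3.0):
    """Flag drift when recent errors depart from a reference window;
    the threshold scales with the reference deviation (variable thresholding)."""
    mu, sd = ref_err.mean(), ref_err.std() + 1e-9
    magnitude = (recent_err.mean() - mu) / sd
    return magnitude > k, magnitude

# Toy stream whose level shifts abruptly at t = 500.
stream = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])
pred, lr, errors = 0.0, 0.05, []
for t, y in enumerate(stream):
    errors.append(abs(pred - y))
    if t >= 100:
        drifted, mag = detect_drift(np.array(errors[-100:-30]),
                                    np.array(errors[-30:]))
        if drifted:
            lr = min(0.5, lr * (1.0 + mag))  # retune hyperparameter to drift size
    pred += lr * (y - pred)                  # incremental (online) model update
```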

Keywords: automated drift detection and adaptation, concept drift, hyperparameter optimization, online and adaptive learning, regression

Procedia PDF Downloads 11
388 Simulation and Characterization of Stretching and Folding in Microchannel Electrokinetic Flows

Authors: Justo Rodriguez, Daming Chen, Amador M. Guzman

Abstract:

The detection, treatment, and control of rapidly propagating, deadly viruses such as COVID-19 require the development of inexpensive, fast, and accurate devices to address the urgent needs of the population. Microfluidics-based sensors are among the detection methods and techniques that are easy to use. A micro-analyzer is defined as a microfluidics-based sensor composed of a network of microchannels with varying functions. Given their size, portability, and accuracy, micro-analyzers are proving to be more effective and convenient than other solutions. A micro-analyzer based on the "Lab on a Chip" concept presents advantages over other, non-micro devices due to its smaller size and its better ratio of useful area to volume. The integration of multiple processes in a single microdevice reduces both the number of necessary samples and the analysis time, leading to the next generation of analyzers for the health sciences. In some applications, the flow of solution within the microchannels is driven by a pressure gradient, which can produce adverse effects on biological samples. A more efficient and less harmful way of controlling the flow in a microchannel-based analyzer is to apply an electric field to induce the fluid motion and either enhance or suppress the mixing process. Electrokinetic flows are characterized by no fewer than two non-dimensional parameters: the electric Rayleigh number and the geometrical aspect ratio. In this research, stable and unstable flows have been studied numerically (and, when possible, experimentally) in a T-shaped microchannel. Additionally, unstable electrokinetic flows at Rayleigh numbers above the critical value have been characterized. The flow mixing enhancement was quantified in relation to the stretching and folding that fluid particles undergo when they are subjected to supercritical electrokinetic flows. Computational simulations were carried out using a finite-element-based program, working with the flow mixing concepts developed by Gollub and collaborators. Hundreds of seeded massless particles were tracked along the microchannel from entrance to exit for both stable and unstable flows. After post-processing, the trajectories and the folding and stretching values for the different flows were obtained. Numerical results show that for supercritical electrokinetic flows, the enhancement effects of the folding and stretching processes become more apparent. Consequently, there is an improvement in the mixing process, ultimately leading to a more homogeneous mixture.
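
As a concrete illustration of the stretching metric, the elongation of paired tracer particles can be computed directly from their tracked trajectories. This is a minimal sketch with synthetic shear-flow data, not the finite-element pipeline used in the study:

```python
import numpy as np

def mean_stretching(traj):
    """traj: (n_particles, n_steps, 2) positions of seeded tracer particles,
    ordered so that consecutive particles form a pair. Returns the mean
    elongation of the pair separations at each time step."""
    d = np.linalg.norm(traj[1::2] - traj[::2], axis=-1)  # pair separations
    return (d / d[:, :1]).mean(axis=0)                   # normalised by initial gap

# Demo on a simple shear u = (gamma*y, 0), which stretches material lines in time.
t = np.linspace(0.0, 5.0, 50)
x0 = np.random.default_rng(1).random((20, 1, 2))
traj = x0 + np.stack([x0[..., 1] * t, np.zeros((20, len(t)))], axis=-1)
print(mean_stretching(traj)[-1])  # mean elongation after 5 time units
```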

Keywords: microchannel, stretching and folding, electrokinetic flow mixing, micro-analyzer

Procedia PDF Downloads 126
387 Revolutionizing Healthcare Facility Maintenance: A Groundbreaking AI, BIM, and IoT Integration Framework

Authors: Mina Sadat Orooje, Mohammad Mehdi Latifi, Behnam Fereydooni Eftekhari

Abstract:

The integration of cutting-edge Internet of Things (IoT) technologies with advanced Artificial Intelligence (AI) systems is revolutionizing healthcare facility management. However, the current landscape of hospital building maintenance suffers from slow, repetitive, and disjointed processes, leading to significant financial, resource, and time losses. Additionally, the potential of Building Information Modeling (BIM) in facility maintenance is hindered by a lack of data within digital models of built environments, necessitating a more streamlined data collection process. This paper presents a robust framework that harmonizes AI with BIM-IoT technology to elevate healthcare Facility Maintenance Management (FMM) and address these pressing challenges. The methodology begins with a thorough literature review and requirements analysis, providing insights into existing technological landscapes and associated obstacles. Extensive data collection and analysis efforts follow, deepening understanding of hospital infrastructure and maintenance records. Critical AI algorithms are identified to address predictive maintenance, anomaly detection, and optimization needs, alongside integration strategies for BIM and IoT technologies that enable real-time data collection and analysis. The framework outlines protocols for data processing, analysis, and decision-making. A prototype implementation is executed to showcase the framework's functionality, followed by a rigorous validation process to evaluate its efficacy and gather user feedback; refinement and optimization steps are then undertaken based on the evaluation outcomes. Emphasis is placed on the scalability of the framework in real-world scenarios and on its potential applications across diverse healthcare facility contexts. Finally, the findings are documented and shared within the healthcare and facility management communities. This framework aims to significantly boost maintenance efficiency, cut costs, provide decision support, enable real-time monitoring, offer data-driven insights, and ultimately enhance patient safety and satisfaction. By tackling current challenges in healthcare facility maintenance management, it paves the way for the adoption of smarter and more efficient maintenance practices in healthcare facilities.
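
The anomaly-detection stage the framework calls for can be as simple as flagging sensor readings that deviate from their recent history. The sketch below is a hypothetical stand-in (the window length, threshold, and injected fault are all invented), not the framework's actual algorithm:

```python
import numpy as np

def rolling_anomalies(readings, window=48, z=4.0):
    """Flag readings that deviate strongly from their trailing window,
    a toy stand-in for an IoT anomaly-detection stage."""
    flags = []
    for i in range(window, len(readings)):
        ref = readings[i - window:i]
        score = abs(readings[i] - ref.mean()) / (ref.std() + 1e-9)
        flags.append(score > z)
    return np.array(flags)

# Example: hourly chiller vibration with an injected fault at hour 300.
data = np.random.default_rng(1).normal(1.0, 0.05, 400)
data[300:] += 0.5
print(rolling_anomalies(data).nonzero()[0][:3] + 48)  # first flagged hours
```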

Keywords: artificial intelligence, building information modeling, healthcare facility maintenance, internet of things integration, maintenance efficiency

Procedia PDF Downloads 59
386 From the Perspective of a Veterinarian: The Future of Plant Raw Materials Used in the Feeding of Farm Animals

Authors: Ertuğrul Yılmaz

Abstract:

One of the most important occupational groups in the food chain from farm to fork is veterinary medicine. This occupational group, which has important duties in the prevention of many zoonotic diseases and in public health, is present at many critical control points from the soil to our kitchens: from mycotoxins transmitted from the soil, through feeding, to slaughterhouses and milk-processing facilities. Starting from the soil, which accounts for 70% of mycotoxin contamination, up to the total mixed ration (TMR) made from raw materials obtained from the soil, all critical control points are covered, from feeding to slaughterhouses and milk-production enterprises. Precautions can be taken in the field against mycotoxins such as aflatoxin B1, ochratoxin, zearalenone, and fumonisin, which are encountered on farms. Studies have reported that aflatoxin B1 is a carcinogen and passes into milk. Many mycotoxins likely pose significant threats to public health and may turn out to be even more dangerous over time. Even raw material storage and TMR preparation are very important for public health; the danger of fumonisin accumulating in the liver will be understood over time. Zoonotic diseases are also explained with examples. In this study, the importance of veterinarians to public health is explained with examples. In two years of mycotoxin screening, fumonisin was found at very high levels in corn and corn by-products, and it was determined that it accumulated in the liver over long periods and became chronic in animals. Mycotoxins were found to be present in all livestock feeds, poultry feeds, and raw materials, not singly but in combinations of two or three. Starting from the end of the chain, mycotoxin screening should be carried out from feed back to raw materials and from raw materials back to the soil; in this way, the transmission of mycotoxins to animals, and from animals to humans, is prevented. Liver protectors such as toxin binders, beta-glucan, mannan oligosaccharides, activated carbon, prebiotics, and silymarin were used in certain proportions in the total mixed ration, and positive results were obtained. Humidity and temperature controls of raw material silos were performed at regular intervals. Necropsies were performed on animals that died as a result of mycotoxicosis, and macroscopic photographs were taken of the organs. We have determined that feeding trials conducted without first screening for mycotoxins and establishing the presence and amount of bacterial agents affect the results of any planned project; for this reason, a series of precautionary plans has been created, starting from the production processes.

Keywords: mycotoxins, feed safety, processes, public health

Procedia PDF Downloads 84
385 Vibrational Spectra and Nonlinear Optical Investigations of a Chalcone Derivative (2e)-3-[4-(Methylsulfanyl) Phenyl]-1-(3-Bromophenyl) Prop-2-En-1-One

Authors: Amit Kumar, Archana Gupta, Poonam Tandon, E. D. D’Silva

Abstract:

Nonlinear optical (NLO) materials are key materials for the fast processing of information and for optical data storage applications. In the last decade, materials showing nonlinear optical properties have been the object of increasing attention from both experimental and computational points of view. Chalcones are one of the most important classes of cross-conjugated NLO chromophores; they are reported to exhibit good SHG efficiency and ultrafast optical nonlinearities, and they are easily crystallizable. The basic structure of chalcones is a π-conjugated system in which two aromatic rings are connected by a three-carbon α,β-unsaturated carbonyl system. Due to the overlap of π orbitals, delocalization of the electronic charge distribution leads to high mobility of the electron density. On the molecular scale, the extent of charge transfer across the NLO chromophore determines the level of SHG output; hence, functionalizing both ends of the π-bond system with appropriate electron donor and acceptor groups can enhance the asymmetric electronic distribution in either or both the ground and excited states, leading to increased optical nonlinearity. In this research, an experimental and theoretical study of the structure and vibrations of (2E)-3-[4-(methylsulfanyl)phenyl]-1-(3-bromophenyl)prop-2-en-1-one (3Br4MSP) is presented. The FT-IR and FT-Raman spectra of the NLO material in the solid phase have been recorded. Density functional theory (DFT) calculations at the B3LYP level with the 6-311++G(d,p) basis set were carried out to study the equilibrium geometry, vibrational wavenumbers, infrared absorbance, and Raman scattering activities. The interpretation of vibrational features (normal-mode assignments, for instance) receives invaluable aid from DFT calculations, which provide a quantum-mechanical description of the electronic energies and forces involved; perturbation theory allows one to obtain the vibrational normal modes by estimating the derivatives of the Kohn-Sham energy with respect to atomic displacements. The molecular hyperpolarizability β plays a chief role in the NLO properties, and a systematic study of β has been carried out. Furthermore, the first-order hyperpolarizability (β) and related properties such as the dipole moment (μ) and polarizability (α) of the title molecule are evaluated by the finite-field (FF) approach. The electronic α and β of the studied molecule are 41.907×10⁻²⁴ and 79.035×10⁻²⁴ e.s.u., respectively, indicating that 3Br4MSP can be used as a good nonlinear optical material.
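
For reference, the finite-field approach mentioned above extracts μ, α, and β from numerical derivatives of the molecular energy with respect to a small applied static field. A standard statement of the relations (general textbook form, not taken from this paper):

```latex
% Taylor expansion of the molecular energy in a static field F:
E(\mathbf{F}) = E(0) - \sum_i \mu_i F_i
  - \tfrac{1}{2}\sum_{i,j} \alpha_{ij} F_i F_j
  - \tfrac{1}{6}\sum_{i,j,k} \beta_{ijk} F_i F_j F_k - \cdots
% so the FF method evaluates the properties by central differences, e.g.
\alpha_{ii} \approx -\,\frac{E(F_i) + E(-F_i) - 2E(0)}{F_i^{2}}, \qquad
\beta_{iii} \approx -\,\frac{E(2F_i) - 2E(F_i) + 2E(-F_i) - E(-2F_i)}{2F_i^{3}}
```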

Keywords: DFT, MEP, NLO, vibrational spectra

Procedia PDF Downloads 221
384 Innovative Practices That Have Significantly Scaled up Depot Medroxy Progesterone Acetate-SC Self-Inject Services

Authors: Oluwaseun Adeleke, Samuel O. Ikani, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu

Abstract:

Background: The Delivering Innovations in Selfcare (DISC) project promotes universal access to quality self-care services, beginning with the subcutaneous depot medroxyprogesterone acetate (DMPA-SC) contraceptive self-injection (SI) option. Self-injection offers women a highly effective and convenient option that saves them frequent trips to providers, and its increased use has the potential to improve the efficiency of an overstretched healthcare system by reducing provider workloads. State Social and Behavioral Change Communications (SBCC) Officers lead the project's demand creation and service delivery innovations, which have resulted in significant increases in SI uptake among women who opt for injectables. Strategies: Service delivery innovations. The implementation of the "Moment of Truth (MoT)" innovation helped providers overcome biases and address client fear and reluctance to self-inject. Bi-annual program audits and supportive mentoring visits helped providers retain their competence and motivation. Proper documentation, tracking, and replenishment of commodities were ensured through effective engagement with State Logistics Units, and the project supported existing state monitoring and evaluation structures to effectively record and report DMPA-SC service utilization. Demand creation innovations. SBCC Officers provide oversight, routinely evaluate performance, train, and give feedback on the demand creation activities implemented by community mobilizers (CMs); the scope and intensity of the training given to CMs affect the outcome of their work. The project operates a demand creation model that uses a schedule to inform the conduct of interpersonal and group events. Health education sessions are specifically designed to counter misinformation, address questions and concerns, and educate the target audience in an informed-choice context. The project mapped facilities and their catchment areas and secured the buy-in of identified influencers and gatekeepers prior to entry. Each mobilization event began with pre-mobilization sensitization activities, particularly targeting male groups, and context-specific interventions were informed by the religious, traditional, and cultural peculiarities of target communities. Mobilizers also support clients to engage with and navigate digital family planning (FP) resources such as the DiscoverYourPower website, a Facebook page, a digital companion (chatbot), interactive voice response (IVR), and radio and television (TV) messaging; this improves compliance and provides linkages to nearby facilities. Results: The project recorded 136,950 self-injection visits, and the SI proportion rate increased from 13 percent before the implementation of interventions in 2021 to 62 percent currently. The project cost-effectively demonstrated catalytic impact by leveraging state and partner resources, institutional platforms, and geographic scope to sustainably scale up these interventions. Conclusion: Using evidence-informed iterations of service delivery and demand creation models has proved useful in significantly driving SI uptake. It will be useful to consider this implementation model during program design, and consideration should also be given to the systematic and strategic execution of strategies to optimize impact.

Keywords: family planning, contraception, DMPA-SC, self-care, self-injection, innovation, service delivery, demand creation

Procedia PDF Downloads 75
383 The Use of Geographic Information System in Spatial Location of Waste Collection Points and the Attendant Impacts in Bida Urban Centre, Nigeria

Authors: Daramola Japheth, Tabiti S. Tabiti, Daramola Elizabeth Lara, Hussaini Yusuf Atulukwu

Abstract:

Bida urban centre is faced with solid waste management problems, which are evident in the processes of waste generation, onsite storage, collection, transfer and transport, processing, and disposal of solid waste. As a result, the urban centre is defaced by litter and offensive odours caused by indiscriminate dumping of refuse within neighbourhoods. The partial removal of the fuel subsidy by the Federal Government in January 2012 led to the formation of the Subsidy Reinvestment Programme (SURE-P); the Federal Government's share is 41 per cent of the savings, while the States and Local Governments share the remaining 59 per cent. The SURE-P Committee carried out the mandate entrusted to it by the President by identifying a few critical infrastructure and social safety nets that would ameliorate the suffering of Nigerians. The waste disposal programme, an aspect of solid waste management, is one of the focus areas of the Niger State SURE-P, incorporated under the Niger State Environmental Protection Agency. As it relates to waste management in Bida, the programme has left behind huge refuse heaps along major corridors, creating a serious mess: major roads within the LGA have been turned into dumping sites, obstructing traffic movement, while offensive odours everywhere spoil the aesthetics of the town. This paper underscores the use of Geographic Information Systems in identifying solid waste spots for effective solid waste management in the Bida urban centre, examining the spatial location of dumping points and their impact on the environment. A handheld Global Positioning System receiver was used to record the dumping-point locations, and a total of 91 dumping points were collected and uploaded to ArcGIS 10.2 for analysis. Interviews were used to obtain information from households living near the dumping sites. It was discovered that people now have to cope with offensive odours, rodent invasion, and dogs and cats coming around the house as a result of inadequate and delayed collection of waste in the neighbourhood. The researchers recommend that more collection points be created, with prompt collection of waste within neighbourhoods by the relevant SURE-P agencies.
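
A desktop GIS workflow like the one described (load the 91 GPS fixes, project them, and delineate impact zones) can also be scripted. The snippet below is a hypothetical sketch with an assumed input file and column names; the study itself used ArcGIS 10.2:

```python
import pandas as pd
import geopandas as gpd

# Hypothetical field data: GPS fixes of dumping points (lon/lat in WGS84).
df = pd.read_csv("bida_dump_points.csv")          # assumed columns: lon, lat
pts = gpd.GeoDataFrame(df,
                       geometry=gpd.points_from_xy(df.lon, df.lat),
                       crs="EPSG:4326")
pts_utm = pts.to_crs(epsg=32632)                  # UTM zone 32N covers Bida
impact = pts_utm.buffer(100)                      # 100 m impact zone per point
print(len(pts_utm), "dumping points;",
      impact.area.sum() / 1e6, "km^2 buffered (overlaps counted)")
```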

Keywords: dumping site, neighborhood, refuse, waste

Procedia PDF Downloads 529
382 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Algorithms designed to improve the experience of medical professionals within their respective fields can raise the efficiency and accuracy of diagnosis. In particular, X-rays are a fast and relatively inexpensive test for diagnosing disease. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19; this underuse is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. Research in this field has nevertheless suggested that artificial neural networks can diagnose COVID-19 with high accuracy. Models and data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning, followed by a deep neural network that finalizes the feature extraction and predicts the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The training images share an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture and is used to extract the lung mask from the chest X-ray image; it is trained on 8577 images and validated on a 20% validation split. Both models are evaluated on an external validation dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
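
A minimal sketch of the transfer-learning classifier described (frozen DenseNet201 features feeding a small dense head over the three labels) might look as follows; the head sizes, input resolution, and training settings are assumptions, and the autoencoder branch is omitted:

```python
import tensorflow as tf

base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                     # transfer learning: freeze features
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```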

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 73
381 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute

Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane

Abstract:

Background: Breast cancer is a prominent cause of cancer morbidity and mortality among women. This study presents the statistical analysis of a cohort of over 250 patients with breast cancer diagnosed by oncologists using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2, and Ki-67 antibodies. Materials and methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and standard protocols were followed for fixation, grossing, tissue processing, embedding, cutting, and IHC. The Ventana BenchMark XT machine was used for automated IHC of the samples; the antibodies were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows; the tests performed were the chi-squared test and correlation tests, with p < .01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Results: Luminal B was the most prevalent molecular subtype of breast cancer at our institute; a chi-squared test of homogeneity was performed to check for equality of distribution and confirmed this. A worse prognosis in breast cancer depends on the expression of Ki-67 and HER2 protein in cancerous cells; our analysis, at p < .01, showed significant dependence. There is no dependence of molecular subtype on age; similarly, age is an independent variable with respect to Ki-67 expression. A chi-squared test performed on the HER2 statuses of patients showed strong dependence between the percentage of Ki-67 expression and HER2 (+/-) status, indicating that the Ki-67 value depends on HER2 expression in cancerous cells (p < .01). Surprisingly, dependence was also observed between Ki-67 and PR at p < .01, which suggests that progesterone receptor (PR) proteins are over-expressed when Ki-67 expression is elevated. Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patient age and molecular subtype. We also found that, when the diagnosis was Luminal A, no patient in the cohort of 257 showed a Ki-67 value above 14%. Statistically, highly significant values were observed for the dependence of PR+HER2- and PR-HER2+ scores on Ki-67 expression (p < .01). HER2 is an important prognostic factor in breast cancer, and the chi-squared test for HER2 and Ki-67 shows that Ki-67 expression depends on HER2 status. Moreover, Ki-67 cannot be used as a standalone prognostic factor for determining breast cancer.
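
The dependence tests reported here (e.g., Ki-67 band versus HER2 status) are standard chi-squared tests on contingency tables. A minimal illustration with invented counts, not the study's data:

```python
from scipy.stats import chi2_contingency

# Rows: HER2-negative, HER2-positive; columns: Ki-67 <= 14%, Ki-67 > 14%.
table = [[40, 25],
         [15, 60]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, significant at p<.01: {p < 0.01}")
```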

Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis

Procedia PDF Downloads 123
380 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass

Authors: Ricardo Torcato, Helder Morais

Abstract:

The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained repeatably, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. The applicability of different computerized finishing technologies to crystal processing is investigated, namely milling and grinding in a CNC machining center with or without ultrasonic assistance. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. It can be said, however, that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared with conventional crystal grinding. This presentation focuses on the mechanical characterization and the analysis of cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear in the indentation, while the impulse excitation test estimates the Young's modulus, shear modulus, and Poisson's ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face-grinding process. The tests were designed using the Taguchi method to correlate the input parameters (feed rate, tool rotation speed, and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (better roughness at cutting forces that do not compromise the material structure or tool life) using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides better surface roughness than conventional grinding.
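
The Taguchi/ANOVA step described above amounts to fitting the response against the three factors and reading factor significance off the ANOVA table. A sketch with a fabricated nine-run dataset, purely to show the analysis call:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented L9-style runs: feed rate, spindle speed, depth of cut vs roughness Ra.
df = pd.DataFrame({
    "feed":  [10, 10, 10, 20, 20, 20, 30, 30, 30],
    "speed": [3000, 4000, 5000] * 3,
    "doc":   [0.1, 0.2, 0.3, 0.2, 0.3, 0.1, 0.3, 0.1, 0.2],
    "Ra":    [0.42, 0.38, 0.35, 0.50, 0.44, 0.47, 0.61, 0.58, 0.52],
})
model = ols("Ra ~ C(feed) + C(speed) + C(doc)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # factor-significance (ANOVA) table
```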

Keywords: CNC machining, crystal glass, cutting forces, hardness

Procedia PDF Downloads 153
379 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong

Abstract:

This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups ("Good Earth" and "Soft Clay") based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive, so a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity, and CPT were combined into an innovative, non-destructive, and near-instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of the soil (ρ) is measured using a set of four probes arranged in a Wenner array. A previous study found that the ANN model coupled with ρ can classify soils into "Good Earth" and "Soft Clay" in less than a minute, with an accuracy of 85% on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof of concept was conducted through a series of seven tests with three types of soils: "Good Earth", "Soft Clay", and an even mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that the ρ, w, and CPT measurements can be collectively analyzed to classify soils into "Good Earth" or "Soft Clay", and that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
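
The GLCM stage of the computer vision technique is straightforward to reproduce with scikit-image. A minimal sketch (the offset distance, angles, and chosen texture properties are assumptions; the real input would be labelled soil photographs rather than random pixels):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img_u8):
    """GLCM texture statistics at a 1-pixel offset over four angles."""
    g = graycomatrix(img_u8, distances=[1],
                     angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                     levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(g, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

# Placeholder input; a real pipeline would feed cropped soil images to an ANN.
features = glcm_features(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
print(features.shape)  # 16 texture features per image
```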

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 218
378 The Significance of Picture Mining in the Fashion and Design as a New Research Method

Authors: Katsue Edo, Yu Hiroi

Abstract:

Increasing attention has been paid to using pictures and photographs in research in the social sciences since the beginning of the 21st century. We have been studying the usefulness of Picture Mining, one of the new approaches to such picture-based research. Picture Mining is an explorative research analysis method that extracts useful information from pictures, photographs, and static or moving images; it is often compared with text mining. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, identified as technological and social (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speeds, low costs for information transfer, and the high performance and resolution of mobile phone cameras have changed people's photographing behavior. Consequently, there is less resistance to taking and processing photographs for most people in developing countries. In these studies, this method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic, conceptual studies that support the significance of these methods. In recent years, we have worked to conceptualize these picture-based research methods and to formalize theoretical findings (Edo et al. 2014). Inductively and through case studies, we have identified the fields where Picture Mining is most effective: 1) research in consumer and customer lifestyles; 2) new product development; 3) research in fashion and design. Though we have found that it will be useful in these fields, we must verify these assumptions. In this study, we focus on the field of fashion and design, to determine whether Picture Mining methods are really reliable in this area. To do so, we conducted empirical research on respondents' attitudes and behavior concerning pictures and photographs. We compared picture-taking attitudes and behavior for fashion with those for meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Respondents do not often take pictures of fashion and upload them to sites such as Facebook and Instagram, compared with meals and food, because of the difficulty of taking them. We conclude that pictures in the fashion area should be analyzed more carefully, for some bias may still exist even though the picture-taking environment has changed drastically in recent years.

Keywords: empirical research, fashion and design, Picture Mining, qualitative research

Procedia PDF Downloads 363
377 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, owing to people's busy lives, the consumption of fast food is increasing; therefore, the diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist needs to know the stage of the tumor. The most common method of determining the tumor stage is the TNM staging system, in which M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. Clearly, determining all three of these parameters requires an imaging method, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, the use of X-rays means the cancer risk and the absorbed dose for the patient are high, while access to PET/CT is limited by its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first step (pre-processing), histogram equalization was applied to improve image quality and the images were resized to a uniform size. Two expert radiologists, each with more than 21 years of experience with colon cancer cases, segmented the images and extracted the tumor regions. The next step is feature extraction from the segmented images, followed by classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolutional layers for feature extraction and three fully connected layers with a softmax activation function for classification. To validate the proposed method, 10-fold cross-validation was used: the data were randomly divided into three parts, training (70% of the data), validation (10% of the data), and the rest for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the testing dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features for staging colon cancer patients are among the advantages of this study.
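
A hedged sketch of the network described (the 13 VGG-16 convolutional layers for feature extraction, then three fully connected layers ending in a softmax over the three stage classes); the input size and head widths are assumptions, not the paper's reported configuration:

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))  # 13 conv layers
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4096, activation="relu"),
    tf.keras.layers.Dense(4096, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # T0N0 / T3N1 / T3N2
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```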

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 59
376 Social Skills as a Significant Aspect of a Successful Start of Compulsory Education

Authors: Eva Šmelová, Alena Berčíková

Abstract:

The issue of school maturity and a child's readiness for a successful start of compulsory education is one of the long-term monitored areas, especially in the context of education and psychology, and it has recently gained importance in the context of the curricular reform in the Czech Republic. Analyses of research in this area suggest a lack of a broader overview of indicators of the current level of children's school maturity and school readiness; instead, various studies address partial issues. Between 2009 and 2013, a research study was performed at the Faculty of Education, Palacký University Olomouc (Czech Republic), focusing on children's maturity and readiness for compulsory education. In that study, social skills were of marginal interest; the main focus was on the mental area. That previous research links smoothly to the present study, the objective of which is to identify the level of school maturity and school readiness in selected characteristics of social skills during the process of adaptation after enrolment in compulsory education. In this context, the following research question was formulated: during the process of adaptation to the school environment, which social skills are weakened? The method applied was observation, for which the authors developed a research tool: a record sheet with 11 items, the social skills that a child should have by the end of preschool education. The items were assessed by first-grade teachers at the beginning of the school year, and the degree of achievement and intensity of the skills were rated for each child using an assessment scale. The authors monitored a total of three independent variables (gender, postponement of school attendance, participation in inclusive education); their effect was monitored using 11 dependent variables, represented by the results achieved in the selected social skills. Statistical data processing was assisted by the Computer Centre of Palacký University Olomouc; statistical calculations were performed using SPSS v. 12.0 for Windows and STATISTICA (StatSoft STATISTICA CR, Cz, a software system for data analysis). The research sample comprised 115 children. In their paper, the authors present the results of the research, point to possible areas of further investigation, and highlight possible risks associated with weakened social skills.

Keywords: compulsory education, curricular reform, educational diagnostics, pupil, school curriculum, school maturity, school readiness, social skills

Procedia PDF Downloads 251
375 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis was carried out of the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to; they therefore appear with other lexical items, such as adverbs, adjectives, verbs, and, infrequently, nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), many lexical items can operate as intensifiers, and the group of intensifiers is admittedly one of the most rapidly changing elements in the language. From a linguistic point of view, a special group of intensifiers is particularly interesting: the so-called negative emotive intensifiers, which on their own, without context, have semantic content that can be associated with negative emotion but in particular cases may function as intensifiers (e.g., borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers are scarcely examined in the literature using large historical corpora and NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, the authors exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus, containing 3 million words of text, is a collection of texts of various genres and styles produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS tagging, lemmatization, etc.) was carried out automatically with the 'magyarlanc' NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., how they change from lexical elements into grammatical ones and slowly go through a delexicalization process (their negative content diminishing over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. A closer look at the different domains of the analysed corpus also made it clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
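
The query-and-count stage of this workflow is easy to sketch. The snippet below is a simplified, hypothetical stand-in (the file layout, field names, and the three sample lexicon entries are assumptions); the actual study used the 'magyarlanc' toolkit for POS tagging and lemmatization on the Magyar Történeti Szövegtár:

```python
import re
from collections import Counter

# Assumed lexicon entries; the real lexicon was compiled in an earlier phase.
intensifiers = ["borzasztóan", "rettenetesen", "szörnyen"]
hits = Counter()
with open("corpus.tsv", encoding="utf-8") as fh:   # assumed: year<TAB>sentence
    for line in fh:
        year, sentence = line.rstrip("\n").split("\t", 1)
        for w in intensifiers:
            if re.search(rf"\b{w}\b", sentence.lower()):
                hits[(w, int(year) // 10 * 10)] += 1   # per-decade frequencies
print(hits.most_common(5))
```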

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 233
374 Reflective Thinking and Experiential Learning – A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities, Greater Integration of Student Profiles

Authors: Paulo Sérgio Ribeiro de Araújo Bogas

Abstract:

Although several studies have assumed (at least implicitly) that learners' approaches to learning develop into deeper approaches in higher education, there appears to be no clear theoretical basis for this assumption and no empirical evidence. As a scientific contribution to this discussion, a pedagogical intervention of a quasi-experimental nature was developed, with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing using cases based on real brand challenges, business simulation, and customer projects. Primary and secondary experiences were incorporated in the intervention: the primary experiences are the experiential activities themselves; the secondary experiences result from the primary ones, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, in the same context, students' responses can be described as reinforcing the initial deep approach, maintaining the initial deep-approach level, or shifting from an emphasis on the deep approach to one closer to the surface approach. This typology did not always confirm studies reported in the literature, namely on whether the initial level of deep processing influences surface processing and vice versa. The results of this investigation point to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to the possible adoption of deep approaches to learning, since statistically significant differences were revealed in the deep/surface approach scores and in the experiential level. In the case of the real challenges, the categories of 'attribution of meaning to what is studied' and the possibility of 'contact with an aspirational context' for the students' professional future stand out; also revealed were the dimensions of autonomy that will be required of them, when comparing the classroom context of real cases with the future professional context, and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with the possibility of measuring the results of the decisions taken, together with an awareness of oneself; on the other hand, the additional effort that this practice required of some of the students.

Keywords: experiential learning, higher education, mixed methods, reflective learning, marketing

Procedia PDF Downloads 83
373 Flexural Properties of Typha Fibers Reinforced Polyester Composite

Authors: Sana Rezig, Yosr Ben Mlik, Mounir Jaouadi, Foued Khoffi, Slah Msahli, Bernard Durand

Abstract:

With increasing interest in environmental concerns, natural fibers are once again being considered as reinforcements for polymer composites. The main objective of this study is to explore another natural resource, the Typha fiber, which is renewable, has no production cost, and is abundantly available in nature. The aim was to study the flexural properties of the polyester resin with and without reinforcement by Typha leaf and stem fibers. The specimens were made by the hand lay-up process using a polyester matrix. In our work, we focused on the effect of various treatment conditions (sea water, alkali treatment, and a combination of the two) as surface modifiers on the flexural properties of Typha fiber-reinforced polyester composites. The weight ratio of Typha leaf or stem fibers was also investigated, and fibers from both the leaf and the stem of the Typha plant were used to evaluate the reinforcing effect. Another parameter investigated was the reinforcement structure: a first composite was made with an air-laid nonwoven structure of fibers, and a second with a mixture of fibers and resin, for each kind of treatment. The results show that the alkali treatment and the combined process provided better mechanical properties of the composites than treatment with sea water alone. The fiber weight ratio influenced the flexural properties of the composites: maximum flexural strengths of 69.8 and 62.32 MPa, with flexural moduli of 6.16 and 6.34 GPa, were observed for composites reinforced with leaf and stem fibers, respectively, at a 12.6% fiber weight ratio. Among the treatments carried out, treatment with caustic soda, whether alone or after retting in sea water, shows the best results because it improves adhesion between the polyester matrix and the reinforcing fibers. SEM photographs were taken to ascertain the effects of the surface treatment of the fibers. Varying the structure of the Typha fibers, the reinforcement used in bulk gives more effective results than the nonwoven structure: flexural strength rises by about 65.32% for the composite reinforced with a mixture containing 12.6% leaf fibers and by 27.45% for the composite reinforced with a nonwoven structure of 12.6% leaf fibers. Finally, to better evaluate the effects of fiber origin, reinforcement structure, treatment, and reinforcement factor on the performance of the composite materials, a statistical study was performed using Minitab: ANOVA was applied, and the main-effects and interaction plots for these parameters were established. According to the statistical analysis, the fiber treatment and the reinforcement structure seem to be the most significant parameters.
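
For reference, flexural strength and modulus in tests of this kind are usually computed from the standard bending relations below; the three-point configuration is an assumption, since the abstract does not state the test geometry:

```latex
% Assumed three-point bending: F = peak load, L = support span,
% b = specimen width, d = specimen thickness, m = initial slope of the
% load-deflection curve.
\sigma_f = \frac{3FL}{2bd^{2}}, \qquad E_f = \frac{L^{3}m}{4bd^{3}}
```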

Keywords: flexural properties, fiber treatment, structure and weight ratio, SEM photographs, Typha leaf and stem fibers

Procedia PDF Downloads 415
372 Mild Auditory Perception and Cognitive Impairment in mid-Trimester Pregnancy

Authors: Tahamina Begum, Wan Nor Azlen Wan Mohamad, Faruque Reza, Wan Rosilawati Wan Rosli

Abstract:

Assessing auditory perception and cognitive function during pregnancy is necessary, as pregnant women need extra effort for attention, mainly in their executive function, to maintain their quality of life. This study aimed to investigate the neural correlates of cognitive and behavioral processing during mid-trimester pregnancy. Event-related potentials (ERPs) were studied using a 128-sensor net, and the PAS or COWA (Controlled Oral Word Association), the WCST (Wisconsin Card Sorting Test), and the RAVLT (Rey Auditory Verbal Learning Test: immediate or interference recall (RAVLT IM), delayed recall (RAVLT DR), and total score (RAVLT TS)) were administered for neuropsychological assessment. In total, 18 subjects were recruited (n = 9 in each group: control and pregnant). All participants in the pregnant group were within 16-27 weeks of gestation (mid-trimester); age- and education-matched healthy subjects were recruited as controls. Participants were given a standardized test of auditory cognitive function, an auditory oddball paradigm, during the ERP study. In this paradigm, two different auditory stimuli (standard and target) were presented; subjects silently counted only the target stimuli, attending to them while ignoring the standard stimuli. Mean differences between target and standard stimuli were compared across groups. The N100 (auditory sensory ERP component) and P300 (auditory cognitive ERP component) were recorded at the T3, T4, T5, T6, Cz, and Pz electrode sites. An equal number of electrodes showed non-significantly smaller amplitudes of the N100 component (except significantly smaller at T3, P = 0.05) and non-significantly longer latencies (except significantly longer at T5, P = 0.008) in the pregnant group compared with controls. For the P300 component, most electrode sites showed non-significantly higher amplitudes, and an equal number of sites showed non-significantly shorter latencies, in the pregnant group. The neuropsychological results revealed non-significantly higher PAS scores and lower WCST, RAVLT IM, and RAVLT DR scores in the pregnant group compared with controls. The N100 results and the RAVLT scores lead to the conclusion that auditory perception is mildly impaired, while the P300 component indicates very mild cognitive dysfunction with good executive function, in the second trimester of pregnancy.

Keywords: auditory perception, pregnancy, stimuli, trimester

Procedia PDF Downloads 384
371 Detection of Acrylamide Using Liquid Chromatography-Tandem Mass Spectrometry and Quantitative Risk Assessment in Selected Food from Saudi Market

Authors: Sarah A. Alotaibi, Mohammed A. Almutairi, Abdullah A. Alsayari, Adibah M. Almutairi, Somaiah K. Almubayedh

Abstract:

Concerns over the presence of acrylamide in food date back to 2002, when Swedish scientists reported that substantial amounts of acrylamide are formed in carbohydrate-rich foods cooked at high temperatures. Similar findings were reported by other researchers, which consequently prompted major international efforts to investigate dietary exposure and the subsequent health complications in order to manage this issue properly. In this work, we aim to determine the acrylamide level in different foods (coffee, potato chips, biscuits, and baby food) commonly consumed by the Saudi population. Acrylamide was detected in twenty-three of forty-three samples, at levels of 12.3 to 2850 µg/kg. Across the food groups, the highest concentration of acrylamide was found in coffee samples (<12.3-2850 μg/kg), followed by potato chips (655-1310 μg/kg) and then biscuits (23.5-449 μg/kg), whereas the lowest acrylamide level was observed in baby food (<14.75-126 μg/kg). Most coffee, biscuit, and potato chip products contain high amounts of acrylamide and are also the most commonly consumed products. Saudi adults had mean exposures from coffee, potato chips, biscuits, and cereal of 0.07439, 0.04794, 0.01125, and 0.003371 µg/kg-bw/day, respectively; exposure in Saudi infants and children to the same types of food was 0.1701, 0.1096, 0.02572, and 0.00771 µg/kg-bw/day, respectively. Most groups have a percentile that exceeds the tolerable daily intake (TDI) value for cancer (2.6 µg/kg-bw/day). Overall, the MOE results show that the Saudi population is at elevated risk of acrylamide-related disease for all food types, and a cancer risk exists in all age groups (all values < 10,000). Furthermore, for non-cancer risks, acrylamide in all tested foods was within the safe limit (> 125) except for potato chips, for which there is a risk of disease in the population. With potato and coffee as raw materials, additional studies were conducted to assess factors affecting acrylamide formation in fried potato and roasted coffee, including temperature, cooking time, and additives; by systematically varying processing temperature and time, acrylamide content was mitigated by lowering the temperature and shortening the cooking time. Furthermore, the combined addition of chitosan and NaCl was shown to have a large impact on formation.
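
The exposure and margin-of-exposure (MOE) arithmetic behind these conclusions is simple enough to show directly. The intake and body-weight figures below are invented for illustration; the BMDL10 of 170 µg/kg-bw/day is the EFSA benchmark commonly used for acrylamide's neoplastic effects, quoted here as an assumption rather than the paper's stated value:

```python
def exposure(conc_ug_per_kg, intake_g_per_day, bw_kg):
    """Dietary exposure in ug/kg-bw/day."""
    return conc_ug_per_kg * (intake_g_per_day / 1000.0) / bw_kg

# Illustrative scenario: 2 g/day of the highest-acrylamide coffee, 70 kg adult.
exp_coffee = exposure(conc_ug_per_kg=2850, intake_g_per_day=2.0, bw_kg=70)
moe = 170.0 / exp_coffee          # MOE = BMDL10 / exposure
print(f"exposure={exp_coffee:.4f} ug/kg-bw/day, MOE={moe:.0f}",
      "-> possible concern" if moe < 10_000 else "-> low concern")
```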

Keywords: risk assessment, dietary exposure, MOE, acrylamide, hazard

Procedia PDF Downloads 58
370 Development of a Process Method to Manufacture Spreads from Powder Hardstock

Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien

Abstract:

It has been over 200 years since margarine was discovered and manufactured using liquid oil, liquified hardstock oils, and other oil-phase and aqueous-phase ingredients. Henry W. Bradley first used vegetable oils in a liquid state around 1871; since then, spreads have traditionally been manufactured using liquified oils. The main objective of this study was to develop a process method to produce spreads using spray-dried hardstock fat powders as structuring fats in place of the current liquid structuring fats. A high-shear mixing system was used to condition the fat phase, and the aqueous phase was prepared separately; margarine was then produced using a single scraped-surface heat exchanger and a pin stirrer. The process method was developed to produce spreads with 40%, 50%, and 60% fat. The developed method was divided into three steps. In the first step, fat powders were conditioned by melting and dissolving them in liquid oils: the liquified portion of the oils was at 65 °C, while the spray-dried fat powder was at 25 °C, and the two were mixed in a mixing vessel at 900 rpm for 4 minutes. The rest of the ingredients, i.e., lecithin, colorant, vitamins, and flavours, were added at ambient conditions to complete the fat/oil phase. The water phase was prepared separately by mixing salt, water, preservative, and acidifier in a mixing tank; milk was pasteurized separately at 79 °C prior to being fed into the aqueous phase, and all the water-phase contents were chilled to 8 °C. The oil phase and water phase were mixed in a tank and then fed into a single scraped-surface heat exchanger; after it, the emulsion was fed into a pin stirrer to work the formed crystals and produce margarine. The margarine produced using the developed process had fat levels of 40%, 50%, and 60% and passed all the qualitative, stability, and taste assessments, with scores of 6/10, 7/10, and 7.5/10 for the 40%, 50%, and 60% fat spreads, respectively. The success of the trials brought differentiated knowledge of how to manufacture spreads using non-micronized spray-dried fat powders as hardstock. Manufacturers no longer need to store structuring fats at 80-90 °C, even in winter; instead, they can adapt their processes to use fat powders, which need to be stored at only 25 °C. The developed process used one scraped-surface heat exchanger instead of the four to five currently used in votator-based plants; this translated to about 61% energy savings, i.e., 23 kW per ton of product. Furthermore, the energy saved by implementing separate pasteurization was calculated to be 6.5 kW per ton of product.

Keywords: margarine emulsion, votator technology, margarine processing, scraped surface heat exchanger, fat powders

Procedia PDF Downloads 90
369 A Comparative Analysis of an All-Optical Switch Using Chalcogenide Glass and Gallium Arsenide Based on Nonlinear Photonic Crystal

Authors: Priyanka Kumari Gupta, Punya Prasanna Paltani, Shrivishal Tripathi

Abstract:

This paper proposes a nonlinear photonic crystal ring-resonator-based all-optical 2 × 2 switch. The nonlinear Kerr effect is used to evaluate the essential states of the 2 × 2 photonic-crystal-based optical switch, namely the bar and cross states. The photonic crystal comprises a two-dimensional square lattice of dielectric rods in an air background. Two different rod materials are compared in this study: first chalcogenide glass, then GaAs. For both materials, the operating wavelength, bandgap diagram, operating power intensities, and performance parameters of the switch, namely the extinction ratio, insertion loss, and cross-talk, have been estimated using the plane wave expansion and finite-difference time-domain methods. The chalcogenide glass (Ag20As32Se48) has a high refractive index of 3.1, which is highly suitable for switching operations. This dielectric material is immersed in an air background and has a nonlinear Kerr coefficient of 9.1 x 10-17 m2/W. The resonance wavelength is at 1552 nm, with operating power intensities at the cross and bar states of around 60 W/μm2 and 690 W/μm2, respectively. The extinction ratio, insertion loss, and cross-talk values for the chalcogenide glass at the cross state are 17.19 dB, 0.051 dB, and -17.14 dB; at the bar state, the values are 11.32 dB, 0.025 dB, and -11.35 dB, respectively. Gallium arsenide (GaAs) has a high refractive index of 3.4 and is a direct-bandgap semiconductor material that is now highly preferred for switching operations. This dielectric material is immersed in an air background and has a nonlinear Kerr coefficient of 3.1 x 10-16 m2/W. The resonance wavelength is at 1558 nm, with operating power intensities at the cross and bar states of around 110 W/μm2 and 200 W/μm2, respectively. The extinction ratio, insertion loss, and cross-talk values for GaAs at the cross state are found to be 3.36 dB, 2.436 dB, and -5.8 dB; at the bar state, the values are 15.60 dB, 0.985 dB, and -16.59 dB, respectively. In both cases, the resonance wavelength lies within the photonic bandgap. The proposed structure is potentially applicable in optical integrated circuits and information processing.
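
The three figures of merit quoted above follow from standard power-ratio definitions; the sketch below computes them from normalized port powers. The port-power values are hypothetical, chosen only to roughly reproduce the reported chalcogenide cross-state results.

```python
import math

def db(ratio):
    return 10.0 * math.log10(ratio)

def switch_metrics(p_in, p_desired, p_undesired):
    """Standard figure-of-merit definitions assumed here:
    extinction ratio ER = 10 log10(P_desired / P_undesired),
    insertion loss   IL = 10 log10(P_in / P_desired),
    cross-talk       CT = 10 log10(P_undesired / P_in)."""
    er = db(p_desired / p_undesired)
    il = db(p_in / p_desired)
    ct = db(p_undesired / p_in)
    return er, il, ct

# Hypothetical normalized port powers for a cross state (chosen so the
# outputs land near the chalcogenide values reported above):
er, il, ct = switch_metrics(p_in=1.0, p_desired=0.988, p_undesired=0.0189)
print(f"ER = {er:.2f} dB, IL = {il:.3f} dB, CT = {ct:.2f} dB")
```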

Keywords: photonic crystal, FDTD, ring resonator, optical switch

Procedia PDF Downloads 77
368 Virtual Team Performance: A Transactive Memory System Perspective

Authors: Belbaly Nassim

Abstract:

Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. Examining how to balance and optimize VTs is particularly important given companies' exposure to globalization and decentralization pressures and their need to monitor VT performance. Organizations are regularly limited by misalignment between the behavioral capabilities of a team's dispersed competences and its knowledge capabilities, and by the way trust issues interplay with and influence these VT dimensions. In fact, the future success of a business depends on the extent to which its VTs efficiently manage their dispersed expertise, skills, and knowledge to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility, and knowledge coordination. A TMS can be understood as the combination of a structural component, residing in individual knowledge, and a set of communication processes among individuals: individual knowledge is shared as it is retrieved and applied, and the learning is coordinated. TMS is built around the central distinction between internal and external memory encoding: a VT learns something new and catalogs it in memory for future retrieval and use. TMS uses the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS treats the members of a team as a processing system in which knowing the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity. VTs consist of dispersed expertise, skills, and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, it may also build trust among VT members, developing their ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between the TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity, and this study gives researchers insight into using TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.

Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination

Procedia PDF Downloads 172
367 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recurrent neural networks has shown significant improvements in prediction accuracy across a variety of applications. The ability to identify isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space, and increasing the separation between classes in feature space improves the achievable classification accuracy. Neural networks extract features through nonlinear transformations and mathematical optimization, whereas principal component analysis relies on linear transformations to extract features and thereby improve classification accuracy. In this paper, the isotope spectrum information was preprocessed by computing its frequency components as a function of time and using them as the training dataset. The Fourier-transform implementation used to extract the frequency components was optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4, and the readout electronic noise was simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, further improved the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction step using deep machine learning achieved high accuracy. The findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models via hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detectable isotopes in the training database. Ensemble learning contributed significantly to the final prediction.
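
A minimal sketch of the described spectrogram pre-processing stage is shown below: a windowed short-time Fourier transform turns a one-dimensional detector waveform into a time-frequency feature map. The sampling rate, pulse shape, window, and segment length are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import spectrogram

# Spectrogram pre-processing sketch: turn a 1-D detector waveform into a
# time-frequency image suitable as CNN training input.
fs = 1_000_000                       # samples per second (assumed)
t = np.arange(0, 0.01, 1 / fs)
# Toy detector pulse: exponentially decaying oscillation plus readout noise.
pulse = np.exp(-t / 2e-4) * np.sin(2 * np.pi * 5e4 * t)
pulse += np.random.normal(0.0, 0.05, pulse.shape)

# Windowed STFT; the Hann window and segment length are the tunable
# "suitable windowing function" choices mentioned in the abstract.
f, times, Sxx = spectrogram(pulse, fs=fs, window="hann",
                            nperseg=256, noverlap=192)
features = np.log1p(Sxx)             # log-scaled spectrogram as a 2-D feature map
print(features.shape)                # (freq bins, time frames)
```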

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 150
366 Study of Biofouling Wastewater Treatment Technology

Authors: Sangho Park, Mansoo Kim, Kyujung Chae, Junhyuk Yang

Abstract:

The International Maritime Organization (IMO) recognized the problem of invasive species and adopted the "International Convention for the Control and Management of Ships' Ballast Water and Sediments" in 2004, which came into force on September 8, 2017. In 2011, the IMO approved the "Guidelines for the Control and Management of Ships' Biofouling to Minimize the Transfer of Invasive Aquatic Species," which require ships to manage the organisms attached to their hulls, since invasive species enter new environments both through ships' ballast water and through hull attachment. However, several obstacles to implementing these guidelines have been identified, including a lack of underwater cleaning equipment, regulations restricting underwater cleaning activities in ports, and difficulty accessing crevices in underwater areas. The shipping industry, the party responsible for implementing these guidelines, is motivated to do so by the fuel-cost savings that follow from removing organisms attached to the hull, but it anticipates significant difficulties because of the obstacles mentioned above. Robots or divers remove the organisms attached to the hull underwater, and the resulting wastewater contains various species of organisms as well as paint particles and other pollutants. Currently, there is no technology available to sterilize the organisms in this wastewater or stabilize the heavy metals in the paint particles. In this study, we aim to analyze the characteristics of the wastewater generated by the removal of hull-attached organisms and select the optimal treatment technology. The organisms in the wastewater are treated to the biological standard (D-2) using the sterilization technology applied in ships' ballast water treatment systems, while the heavy metals and other pollutants in the paint particles generated during removal are treated using stabilization technologies such as thermal decomposition. The wastewater is treated in a two-step process: 1) sterilization via pretreatment filtration equipment followed by electrolytic sterilization, and 2) removal of particulate pollutants such as heavy metals and dissolved inorganic substances. Through this study, we will develop a biofouling removal technology and an environmentally friendly processing system for the resulting waste that meets the requirements of the government and the shipping industry and lays the groundwork for future treatment standards.
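
As an illustration of what meeting the D-2 standard entails, the sketch below checks a hypothetical treated-wastewater sample against the D-2 discharge limits; the limit values are quoted from general knowledge of the IMO Ballast Water Management Convention, not from this abstract, and the sample values are placeholders.

```python
# D-2 compliance-check sketch. Limits follow regulation D-2 of the IMO
# Ballast Water Management Convention (stated from general knowledge):
D2_LIMITS = {
    "organisms_ge_50um_per_m3": 10,        # viable organisms >= 50 µm
    "organisms_10_50um_per_ml": 10,        # viable organisms 10-50 µm
    "toxicogenic_vibrio_cfu_100ml": 1,     # toxicogenic Vibrio cholerae
    "e_coli_cfu_100ml": 250,               # Escherichia coli
    "enterococci_cfu_100ml": 100,          # intestinal enterococci
}

def d2_compliant(sample: dict) -> bool:
    # A discharge complies only if every measured value is below its limit.
    return all(sample[key] < limit for key, limit in D2_LIMITS.items())

sample = {  # hypothetical post-treatment measurements
    "organisms_ge_50um_per_m3": 3,
    "organisms_10_50um_per_ml": 6,
    "toxicogenic_vibrio_cfu_100ml": 0,
    "e_coli_cfu_100ml": 40,
    "enterococci_cfu_100ml": 12,
}
print("D-2 compliant:", d2_compliant(sample))
```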

Keywords: biofouling, ballast water treatment system, filtration, sterilization, wastewater

Procedia PDF Downloads 109
365 Development of a Table-Top Composite Wire Fabrication System for Additive Manufacturing

Authors: Krishna Nand, Mohammad Taufik

Abstract:

Fused Filament Fabrication (FFF) is one of the most popular additive manufacturing (AM) technologies. In FFF, a wire-form material (filament) is fed into a heated chamber, where it is converted into a semi-solid form and extruded through a nozzle to be deposited on the build platform, fabricating the part. FFF technology is expanding and covering the market at a very rapid rate, so the need for 3D-printing raw materials is also increasing. The cost of 3D printing is directly affected by filament cost; to make 3D printing more economical, a compact and portable filament/wire extrusion system is needed. Wire extrusion systems that extrude ordinary wire/filament made of a single material are available on the market; however, extrusion systems that make a composite wire/filament are not. Hence, in this study, initial efforts have been made to develop a table-top composite wire extruder. The developed system consists of mechanical parts, electronic parts, and a control system. The mechanical parts include a multi-channel hopper, an extrusion screw, a melting chamber and nozzle, a cooling zone, and a spool winder; the electronic parts include motors, a heater, a temperature sensor, and cooling fans. A control board is used to regulate the process parameters, namely the temperature and motor speeds. To produce composite wire/filament, two different materials can be fed through the two hopper channels, mixed, and carried to the heated zone by the extrusion screw. The screw is driven by a motor whose speed is set by the controller according to the required material extrusion rate. In the heated zone, the material melts with the help of a heating element and is extruded through the nozzle in the form of wire. The developed system occupies little floor space owing to the vertical orientation of its heating chamber, and it can extrude both ordinary and composite filaments compatible with the 3D printers available on the market. Further, the system can be employed in research and development on materials, processing, and characterization for 3D printers. The system presented in this study could be a good choice for hobbyists and researchers working with the fused filament fabrication process, since it can reduce 3D-printing cost significantly by recycling waste material into 3D-printer feedstock, and it could also be explored as an alternative for filament production at the commercial level.
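
A minimal sketch of the control loop described above is given below: the board holds the melt-zone temperature with a simple on/off hysteresis controller and maps the requested extrusion rate to a screw speed. The set point, the rate-to-rpm mapping, and all hardware I/O functions are hypothetical placeholders for the actual board firmware.

```python
# Hysteresis temperature control plus screw-speed setting, as one plausible
# realization of the control board's job. All hardware functions
# (read_thermocouple, set_heater, set_screw_rpm) are hypothetical.

SET_POINT_C = 190.0   # melt-zone target, assumed and material dependent
HYSTERESIS_C = 2.0    # dead band around the set point

def screw_rpm_for(extrusion_rate_mm3_s, rpm_per_mm3_s=0.8):
    # Assumed linear mapping between extrusion rate and screw speed.
    return extrusion_rate_mm3_s * rpm_per_mm3_s

def control_step(read_thermocouple, set_heater, set_screw_rpm, rate_mm3_s):
    temp = read_thermocouple()
    if temp < SET_POINT_C - HYSTERESIS_C:
        set_heater(True)          # too cold: heater on
    elif temp > SET_POINT_C + HYSTERESIS_C:
        set_heater(False)         # too hot: heater off
    set_screw_rpm(screw_rpm_for(rate_mm3_s))
```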

Keywords: additive manufacturing, 3D Printing, filament extrusion, pellet extrusion

Procedia PDF Downloads 168
364 Job Resource, Personal Resource, Engagement and Performance with Balanced Score Card in the Integrated Textile Companies in Indonesia

Authors: Nurlaila Effendy

Abstract:

Companies in Asia face a number of competitive constraints under the 2015 ASEAN Economic Community and globalization, and the capitalist economic system that is integral to globalization brings broad impacts; companies therefore need to improve their business performance. Organizational development work has demonstrated quite clearly that aligning individuals' personal goals with the goals of the organization translates into measurable and sustained performance improvement, and human capital is a key to achieving company performance. With employee engagement (EE), employees invest and express themselves physically, cognitively, and emotionally to achieve company and individual goals; one experiences total involvement when undertaking the job and feels integrated with the job and organization. A leader plays a key role in attaining the goals and objectives of a company or organization, and any manager needs leadership competence and a global mindset. As one of the developments in positive organizational behavior, psychological capital (PsyCap) is assumed to be one of the most important capitals for the global mindset, in addition to intellectual capital and social capital. Textile companies likewise face competitive constraints in regional and global markets. This research involved 42 managers in two textile companies and one spinning company belonging to a single group in Central Java, Indonesia. It is a quantitative study using Partial Least Squares (PLS) that examines job resources (social support and organizational climate) and personal resources (the four dimensions of psychological capital and leadership competence) as predictors of employee engagement, and employee engagement and leadership competence as predictors of leader performance. The performance of a leader is measured by achievement of objective strategies across the four financial and non-financial perspectives of a Balanced Scorecard (BSC). The study covered the 2014 business-plan year, from January to December 2014. The results show a correlation of job resources (coefficient 0.036 for social support; 0.220 for organizational climate) and personal resources (coefficient 0.513 for PsyCap; 0.249 for leadership competence) with employee engagement, and of employee engagement (coefficient 0.279) and leadership competence (coefficient 0.581) with performance.
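
As a simplified stand-in for the PLS path estimation (a sketch under stated assumptions, not the PLS-SEM procedure used in the study), the snippet below recovers path-like coefficients by least squares on standardized synthetic data seeded with the reported values.

```python
import numpy as np

# Synthetic illustration of what the reported path coefficients mean:
# standardized predictors of employee engagement, fitted by least squares.
rng = np.random.default_rng(0)
n = 42                                    # number of managers in the study
X = rng.standard_normal((n, 4))           # social support, climate, PsyCap, leadership
true_paths = np.array([0.036, 0.220, 0.513, 0.249])   # coefficients reported above
y = X @ true_paths + rng.standard_normal(n) * 0.5      # engagement + noise

# Standardize, as path models on latent scores typically do, then fit.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()
coefs, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
print("recovered path-like coefficients:", np.round(coefs, 3))
```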

Keywords: organizational climate, social support, psychological capital, leadership competence, employee engagement, performance, integrated textile companies

Procedia PDF Downloads 433
363 Effect of Low Calorie Sweeteners on Chemical, Sensory Evaluation and Antidiabetic of Pumpkin Jam Fortified with Soybean

Authors: Amnah M. A. Alsuhaibani, Amal N. Al-Kuraieef

Abstract:

Introduction: In recent decades, the production of low-calorie jams comprising low-calorie fruits and low-calorie sweeteners has been needed for diabetics. Objective: The research aimed to prepare low-calorie formulated pumpkin jams (with fructose, stevia, and aspartame) incorporating soybean, to evaluate the jams through chemical analysis and sensory evaluation after storage for six months, and to investigate the possible effect of consuming the low-calorie jams on diabetic rats. Methods: Five formulas of pumpkin jam with different sweeteners (sucrose, fructose, stevia, and aspartame) and soybean were prepared, stored at 10 °C for six months, and compared to ordinary pumpkin jam. The chemical composition and sensory attributes of the formulated jams were evaluated at zero time and after 3 and 6 months of storage. The three most acceptable pumpkin jams were taken forward for a biological study on diabetic rats. Rats were divided into five groups: group 1 served as the negative control; diabetes was induced with streptozotocin in the other four groups, of which group 2 was the positive diabetic control and groups 3, 4, and 5 were fed a standard diet with 10% sucrose soybean jam, fructose soybean jam, and stevia soybean jam, respectively. Results: The protein, fat, ash, and fiber contents were higher, and the carbohydrate content lower, in the low-calorie formulated pumpkin jams than in the ordinary jam. The aspartame soybean pumpkin jam received lower scores on all sensory attributes than the other jams, followed by the stevia soybean pumpkin jam; using non-nutritive sweeteners (stevia and aspartame) with soybean in jam processing lowered the sensory-attribute scores after storage for 3 and 6 months. The highest scores were recorded for the sucrose and fructose soybean jams, followed by the stevia soybean jam, while the aspartame soybean jam scored significantly lowest. The biological evaluation showed a significant improvement in body weight and feed efficiency ratio (FER) of the rats after six weeks of consuming the standard diet with jams (groups 3, 4, and 5) compared to group 1. Rats that consumed 10% low-calorie jam with the nutritive sweetener (fructose) or the non-nutritive sweetener (stevia) soybean jam (groups 4 and 5) showed significant decreases in glucose level, liver enzyme activities, and liver cholesterol and total lipids, together with significant increases in insulin and glycogen, compared to the levels of group 2. Conclusion: Low-calorie pumpkin jams can be prepared with low-calorie sweeteners and soybean and stored for 3 months at 10 °C without changes in sensory attributes. Consumption of stevia pumpkin jam fortified with soybean had positive health effects in streptozotocin-induced diabetic rats.

Keywords: pumpkin jam, HFCS, aspartame, stevia, storage

Procedia PDF Downloads 183
362 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data

Authors: M. Kharrat, G. Moreau, Z. Aboura

Abstract:

The latest advances in the weaving industry, combined with increasingly sophisticated materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D-reinforced composites; nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the applied loading. The acoustic emission (AE) technique is a well-established tool for discriminating between damage mechanisms: suitable sensors monitor the structural health of the material during the mechanical test, relevant AE features are extracted from the recorded signals, and the data are analyzed using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation setup was used in this work to track damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, digital image correlation, acoustic emission, and micro-tomography. In this study, a multi-variable AE data analysis approach was developed to discriminate between the signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database, which was then used to construct a supervised classifier for the automatic recognition of AE signals. Several materials with different constituents were tested under various loadings in order to feed and enrich the learning database. The methodology presented in this work helped refine the damage threshold for the new-generation materials; the damage mechanisms around this threshold were highlighted, and the obtained signal classes were assigned to the different mechanisms. Isolating a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or raising the AE detection threshold. The approach was validated on different material configurations: for the same material and the same type of loading, the identified classes are reproducible and only slightly perturbed, and the supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
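
A minimal sketch of the two-stage analysis described above, assuming generic AE features and an arbitrary cluster count: unsupervised clustering first groups the signals without labels, and a supervised classifier is then trained on the labeled database.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Placeholder AE feature matrix: e.g. amplitude, energy, duration, counts,
# peak frequency per hit (synthetic here; real data come from the sensors).
rng = np.random.default_rng(1)
X = rng.random((500, 5))

# Stage 1: unsupervised clustering without a priori knowledge. The cluster
# count is an assumption; in the study the classes are identified from the
# data and cross-checked against the multi-instrumentation observations.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Stage 2: supervised classifier built from the labeled learning database,
# usable afterwards for automatic recognition of new AE signals.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```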

Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition

Procedia PDF Downloads 155
361 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging

Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa

Abstract:

Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences and paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), yielding 35 benign and 12 malignant lesions. All MR images were acquired at 1.5 T: a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and the extraction of 150 radiomic features (30 features at each of 5 subsequent time points), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clearly malignant histological exam) were added to the dataset in order to let the ML system learn better from the data. Results: A Naive Bayes algorithm working on the 79 features selected by the TWIST system proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78%, and a global accuracy of 87% (average values over the two training-testing procedures, ab-ba). Within the subset of 47 non-specific nodules, the algorithm correctly predicted the outcome of 45 nodules that an expert radiologist could not classify. Conclusion: In this pilot study, we identified a radiomic approach that allows ML systems to perform well in the diagnosis of non-specific nodules at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist is unable to identify the kind of lesion, reducing the need for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, avoiding strenuous follow-up and painful biopsy for the patient.
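
A sketch of the final classification stage, under the assumption of a standard Gaussian Naive Bayes model and synthetic placeholder data: the model is scored with the same sensitivity and specificity measures reported in the study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic placeholders: 75 cases (47 non-specific plus the 28 added
# reference cases) by 79 TWIST-selected features. Real radiomic features
# and outcome labels would replace these.
rng = np.random.default_rng(2)
X = rng.standard_normal((75, 79))
y = rng.integers(0, 2, 75)          # 1 = malignant, 0 = benign (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
y_pred = GaussianNB().fit(X_tr, y_tr).predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
sensitivity = tp / (tp + fn)        # 96% in the study
specificity = tn / (tn + fp)        # 78% in the study
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```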

Keywords: breast, machine learning, MRI, radiomics

Procedia PDF Downloads 267
360 Chromium (VI) Removal from Aqueous Solutions by Ion Exchange Processing Using Eichrom 1-X4, Lewatit Monoplus M800 and Lewatit A8071 Resins: Batch Ion Exchange Modeling

Authors: Havva Tutar Kahraman, Erol Pehlivan

Abstract:

In recent years, environmental pollution by wastewater has risen critically, driven by effluents discharged from various industries. Different types of pollutants, such as organic compounds, oxyanions, and heavy metal ions, pose a threat to human health and all other living things, and heavy metals are considered one of the main pollutant groups in wastewater. This situation creates a great need to apply and enhance water treatment technologies. Among the adopted treatment technologies, adsorption is gaining more and more attention because of its easy operation, simplicity of design, and versatility. Ion exchange, in particular, is one of the preferred methods for removing heavy metal ions from aqueous solutions and has found widespread application in water remediation over the past several decades. The purpose of this study is therefore the removal of hexavalent chromium, Cr(VI), from aqueous solutions. Cr(VI) is a well-known, highly toxic metal that modifies the DNA transcription process and causes significant chromosomal aberrations, and its treatment and removal have received great attention in order to keep concentrations within the permitted legal standards. The present paper investigates aspects of the use of three anion exchange resins: Eichrom 1-X4, Lewatit Monoplus M800, and Lewatit A8071. Batch adsorption experiments were carried out to evaluate the capacity of these three commercial resins for the removal of Cr(VI) from aqueous solutions; the chromium solutions used in the experiments were synthetic. The parameters that affect adsorption (solution pH, adsorbent concentration, contact time, and initial Cr(VI) concentration) were studied at room temperature. High initial adsorption rates of metal ions were observed for all three resins, with plateau values gradually reached within 60 min. The optimum pH for Cr(VI) adsorption was found to be 3.0 for all three resins, with adsorption decreasing as pH increases for the three anion exchangers. The suitability of the Freundlich, Langmuir, and Scatchard models for the Cr(VI)-resin equilibrium was investigated. The results obtained in this study demonstrate excellent comparability between the three anion exchange resins and indicate that Eichrom 1-X4 is the most effective, showing the highest adsorption capacity for the removal of Cr(VI) ions. The anion exchange resins investigated in this study can be used for the efficient removal of chromium from water and wastewater.
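
A sketch of how the Langmuir and Freundlich isotherms can be fitted to batch equilibrium data is given below; the Ce/qe values are illustrative placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard isotherm forms used in the batch equilibrium analysis.
def langmuir(ce, qm, kl):
    return qm * kl * ce / (1.0 + kl * ce)    # qe = qm*KL*Ce / (1 + KL*Ce)

def freundlich(ce, kf, n):
    return kf * ce ** (1.0 / n)              # qe = KF * Ce^(1/n)

# Illustrative equilibrium data: Ce (mg/L) vs. uptake qe (mg/g).
ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([12.0, 20.0, 29.0, 37.0, 42.0])

(qm, kl), _ = curve_fit(langmuir, ce, qe, p0=[50.0, 0.05])
(kf, n), _ = curve_fit(freundlich, ce, qe, p0=[5.0, 2.0])
print(f"Langmuir:   qm={qm:.1f} mg/g, KL={kl:.3f} L/mg")
print(f"Freundlich: KF={kf:.2f}, n={n:.2f}")
```

Comparing the fitted curves (or the corresponding linearized plots and correlation coefficients) is the usual way to judge which model best describes each resin's equilibrium behavior.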

Keywords: adsorption, anion exchange resin, chromium, kinetics

Procedia PDF Downloads 260