Search results for: uranium processing
403 The Outcome of Using Machine Learning in Medical Imaging
Authors: Adel Edwar Waheeb Louka
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has shown that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. It is trained on 8577 images and validated on a validation split of 20%. The models were evaluated using an external validation dataset, and their accuracy, precision, recall, F1-score, IoU, and loss were calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
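To make the modeling pipeline concrete, the following is a minimal sketch of a DenseNet201 transfer-learning classifier of the kind the abstract describes, assuming Keras/TensorFlow; the input size, classification head, and hyperparameters are illustrative assumptions, and the abstract's autoencoder stage is omitted.

```python
# Minimal sketch of a DenseNet201 transfer-learning classifier for
# COVID-19 / pneumonia / normal chest X-rays (illustrative, not the
# authors' exact pipeline; hyperparameters are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

NUM_CLASSES = 3  # COVID-19, normal, pneumonia

# Pre-trained DenseNet201 backbone, frozen for feature extraction.
backbone = DenseNet201(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3))
backbone.trainable = False

# Deep, fully connected head that finalizes the feature extraction
# and predicts the diagnosis, as the abstract describes.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data pipelines built from the
# (pre-cropped) radiography images; shown here only as placeholders.
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```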
Procedia PDF Downloads 73
402 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute
Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane
Abstract:
Background: Breast cancer is the prominent cause of cancer and mortality among women. This study presents a statistical analysis of a cohort of over 250 patients with breast cancer diagnosed by oncologists using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2, and Ki-67 antibodies. Materials and methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and the standard protocol was followed for fixation, grossing, tissue processing, embedding, cutting, and IHC. The Ventana Benchmark XT machine was used for automated IHC of the samples. Antibodies were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows. The statistical tests performed were the chi-squared test and correlation tests with p<.01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Result: Luminal B was the most prevalent molecular subtype of breast cancer at our institute. A chi-squared test of homogeneity was performed to test for equality of distribution, and Luminal B was confirmed as the most prevalent molecular subtype. The worst prognostic indicators for breast cancer depend upon the expression of Ki-67 and HER2 protein in cancerous cells. Our study was done at p<.01, and significant dependence was observed. There exists no dependence of molecular subtype of breast cancer on age. Similarly, age is an independent variable when considering Ki-67 expression. A chi-squared test was performed on the human epidermal growth factor receptor 2 (HER2) statuses of patients, and a strong dependence was observed between the percentage of Ki-67 expression and HER2 (+/-) status, which shows that the Ki-67 value depends upon HER2 expression in cancerous cells (p<.01). Surprisingly, dependence was also observed between Ki-67 and PR at p<.01. This shows that progesterone receptor (PR) proteins are over-expressed when there is an elevation in the expression of Ki-67 protein. Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patients' age and molecular subtype. We also found that, when the diagnosis is Luminal A, no patient out of the cohort of 257 shows a Ki-67 value >14%. Statistically, extremely significant values were observed for the dependence of PR+HER2- and PR-HER2+ scores on Ki-67 expression (p<.01). HER2 is an important prognostic factor in breast cancer. The chi-squared test for HER2 and Ki-67 shows that the expression of Ki-67 depends upon HER2 status. Moreover, Ki-67 cannot be used as a standalone prognostic factor for breast cancer.
Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis
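As an illustration of the dependence tests reported above, the following sketch runs a chi-squared test of independence (here between HER2 status and a dichotomized Ki-67 level) with SciPy; the contingency counts are invented placeholders, not the study's data.

```python
# Illustrative chi-squared test of independence between HER2 status
# and a dichotomized Ki-67 level (counts below are placeholders, not
# the study's data).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: HER2-negative, HER2-positive; columns: Ki-67 <= 14%, Ki-67 > 14%.
observed = np.array([[80, 45],
                     [25, 107]])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
if p_value < 0.01:   # the study's significance threshold
    print("Dependence between HER2 status and Ki-67 level (p < .01)")
```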
Procedia PDF Downloads 123
401 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass
Authors: Ricardo Torcato, Helder Morais
Abstract:
The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained with repeatability, and the finishing operations use intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to respond to customer demands in terms of crystal feel and shine. It is intended to investigate the applicability of different computerized finishing technologies, namely milling and grinding in a CNC machining center with or without ultrasonic assistance, to crystal processing. Research in the field of grinding hard and brittle materials, despite not being extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, it can be said that the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared to conventional crystal grinding. This presentation is focused on the mechanical characterization and analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness based on the cracks that appear in the indentation. The mechanical impulse excitation test estimates the Young's modulus, shear modulus, and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process. The tests were designed based on the Taguchi method to correlate the input parameters (feed rate, tool rotation speed, and depth of cut) with the output parameters (surface roughness and cutting forces) and to optimize the process (better roughness, using cutting forces that do not compromise the material structure or the tool life) using ANOVA. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
Keywords: CNC machining, crystal glass, cutting forces, hardness
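The Taguchi/ANOVA step can be sketched as below with statsmodels, fitting surface roughness against the three input parameters and testing factor significance; the design matrix and roughness values are hypothetical stand-ins for the measured data.

```python
# Sketch of the Taguchi-style analysis: fit surface roughness against
# feed rate, spindle speed, and depth of cut, then run an ANOVA.
# The data frame below is a placeholder for the measured results.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical L9-style design with measured roughness Ra (um).
runs = pd.DataFrame({
    "feed":  [50, 50, 50, 100, 100, 100, 150, 150, 150],   # mm/min
    "speed": [3000, 4500, 6000, 3000, 4500, 6000, 3000, 4500, 6000],  # rpm
    "doc":   [0.1, 0.2, 0.3, 0.2, 0.3, 0.1, 0.3, 0.1, 0.2],  # mm
    "Ra":    [0.82, 0.64, 0.55, 0.91, 0.70, 0.77, 1.02, 0.88, 0.79],
})

model = ols("Ra ~ C(feed) + C(speed) + C(doc)", data=runs).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # factor significance
print(anova_table)
```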
Procedia PDF Downloads 153
400 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength
Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong
Abstract:
This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups ("Good Earth" and "Soft Clay") based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, such that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive. Thus, a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity, and CPT were combined into an innovative, non-destructive, and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of soil (ρ) is measured using a set of four probes arranged in Wenner's array. It was found from a previous study that the ANN model coupled with ρ can classify soils into "Good Earth" and "Soft Clay" in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – "Good Earth", "Soft Clay", and an even mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w, and CPT measurements can be collectively analyzed to classify soils into "Good Earth" or "Soft Clay". It is also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification
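A minimal sketch of the GLCM feature-extraction stage using scikit-image (function names follow scikit-image 0.19+); the image file name is a placeholder, and the resulting feature vector would feed the ANN decision stage.

```python
# Sketch of the GLCM texture-feature step: compute co-occurrence
# statistics from a greyscale soil image, which would then feed the
# ANN classifier.
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

image = img_as_ubyte(color.rgb2gray(io.imread("soil_sample.jpg")))

# Co-occurrence matrix over several offsets and directions.
glcm = graycomatrix(image,
                    distances=[1, 2, 4],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "dissimilarity", "homogeneity",
                         "energy", "correlation")}
print(features)  # feature vector for the ANN decision stage
```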
Procedia PDF Downloads 218
399 The Significance of Picture Mining in the Fashion and Design as a New Research Method
Authors: Katsue Edo, Yu Hiroi
Abstract:
Increasing attention has been paid to using pictures and photographs in social science research since the beginning of the 21st century. Meanwhile, we have been studying the usefulness of Picture Mining, one of the new approaches to such picture-based research. Picture Mining is an explorative research analysis method that extracts useful information from pictures, photographs, and static or moving images. It is often compared with text mining methods. The Picture Mining concept includes observational research in the broad sense, because it also aims to analyze moving images (Ochihara and Edo 2013). In the recent literature, studies and reports using pictures are increasing due to environmental changes, identified as technological and social changes (Edo et al. 2013). Low-priced digital cameras and iPhones, high information transmission speed, low costs for information transfer, and the high performance and resolution of mobile phone cameras have changed people's photographing behavior. Consequently, there is less resistance to taking and processing photographs for most people in developing countries. In these studies, this method of collecting data from respondents is often called 'participant-generated photography' or 'respondent-generated visual imagery', which focuses on the collection of data and its analysis (Pauwels 2011, Snyder 2012). But there are few systematic and conceptual studies that support the significance of these methods. In recent years, we have worked to conceptualize these picture-based research methods and to formalize theoretical findings (Edo et al. 2014). We have identified the most efficient fields of Picture Mining, inductively and through case studies, in the following areas: 1) research in consumer and customer lifestyles; 2) new product development; 3) research in fashion and design. Though we have found that it will be useful in these fields and areas, we must verify these assumptions. In this study, we focus on the field of fashion and design to determine whether Picture Mining methods are really reliable in this area. In order to do so, we conducted empirical research on respondents' attitudes and behavior concerning pictures and photographs. We compared attitudes and picture-taking behavior toward fashion with those toward meals, and found that taking pictures of fashion is not as easy as taking pictures of meals and food. Respondents do not often take pictures of fashion and upload them online, for example to Facebook and Instagram, compared to meals and food, because of the difficulty of taking them. We concluded that we should be more careful in analyzing pictures in the fashion area, for there still might be some kind of bias even though the environment for pictures has drastically changed in recent years.
Keywords: empirical research, fashion and design, Picture Mining, qualitative research
Procedia PDF Downloads 363
398 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images
Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi
Abstract:
Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, owing to people's busy lifestyles, the consumption of fast food is increasing; therefore, the diagnosis of this disease and its treatment are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist must know the stage of the tumor. The most common method to determine the tumor stage is the TNM staging system. In this system, M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. It is clear that in order to determine all three of these parameters, an imaging method must be used, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the risk of cancer and the absorbed dose of the patient are high, while for the PET/CT method, access to the device is limited due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first step (pre-processing), histogram equalization to improve image quality and resizing to obtain a uniform image size were performed. Two expert radiologists, who have worked on colon cancer cases for more than 21 years, segmented the images and extracted the tumor region. The next step is feature extraction from the segmented images, followed by classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers with the softmax activation function for classification. In order to validate the proposed method, 10-fold cross-validation was used in such a way that the data were randomly divided into three parts: training (70% of the data), validation (10% of the data), and the rest for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average of the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method for the testing dataset were 89.09%, 95.8%, and 96.4%. Compared to previous studies, the use of a safe imaging technique (MRI) and the absence of predefined hand-crafted imaging features for determining the stage of colon cancer patients are among the advantages of this study.
Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis
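The validation protocol can be sketched as follows: stratified 10-fold cross-validation with accuracy, sensitivity, and specificity derived from the confusion matrix. The feature matrix and the simple classifier below stand in for the CNN pipeline; everything here is illustrative.

```python
# Sketch of the validation protocol: stratified 10-fold CV with
# accuracy, sensitivity (recall) and specificity computed from the
# confusion matrix. X would hold CNN-extracted features; here it is
# random placeholder data.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))          # stand-in feature vectors
y = rng.integers(0, 3, size=300)        # classes T0N0 / T3N1 / T3N2

accs, sens, specs = [], [], []
for train_idx, test_idx in StratifiedKFold(n_splits=10).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    cm = confusion_matrix(y[test_idx], clf.predict(X[test_idx]))
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - tp - fn - fp
    accs.append(tp.sum() / cm.sum())
    sens.append(np.mean(tp / (tp + fn)))    # macro-averaged sensitivity
    specs.append(np.mean(tn / (tn + fp)))   # macro-averaged specificity

print(f"accuracy={np.mean(accs):.3f}, sensitivity={np.mean(sens):.3f}, "
      f"specificity={np.mean(specs):.3f}")
```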
Procedia PDF Downloads 59
397 Social Skills as a Significant Aspect of a Successful Start of Compulsory Education
Authors: Eva Šmelová, Alena Berčíková
Abstract:
The issue of school maturity and a child's readiness for a successful start of compulsory education is one of the long-term monitored areas, especially in the context of education and psychology. In the context of the curricular reform in the Czech Republic, the issue has recently gained importance. Analyses of research in this area suggest a lack of a broader overview of indicators informing about the current level of children's school maturity and school readiness; instead, various studies address partial issues. Between 2009 and 2013, a research study was performed at the Faculty of Education, Palacký University Olomouc (Czech Republic), focusing on children's maturity and readiness for compulsory education. In that study, social skills were of marginal interest; the main focus was on the mental area. The previous research links smoothly to the present study, the objective of which is to identify the level of school maturity and school readiness in selected characteristics of social skills as part of the adaptation process after enrolment in compulsory education. In this context, the following research question has been formulated: During the process of adaptation to the school environment, which social skills are weakened? The method applied was observation, for the purposes of which the authors developed a research tool – a record sheet with 11 items, the social skills that a child should have by the end of preschool education. The items were assessed by first-grade teachers at the beginning of the school year. The degree of achievement and the intensity of the skills were assessed for each child using an assessment scale. The authors monitored a total of three independent variables (gender, postponement of school attendance, participation in inclusive education). The effect of these independent variables was monitored using 11 dependent variables, represented by the results achieved in the selected social skills. Statistical data processing was assisted by the Computer Centre of Palacký University Olomouc. Statistical calculations were performed using SPSS v. 12.0 for Windows and STATISTICA (StatSoft STATISTICA CR, Cz, a software system for data analysis). The research sample comprised 115 children. In their paper, the authors present the results of the research and point to possible areas of further investigation. They also highlight possible risks associated with weakened social skills.
Keywords: compulsory education, curricular reform, educational diagnostics, pupil, school curriculum, school maturity, school readiness, social skills
Procedia PDF Downloads 251
396 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools
Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus
Abstract:
Additive manufacturing has emerged as a fast-growing segment of manufacturing technologies. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding. By this, machine tools can realize a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in terms of maintaining the necessary machining accuracy - especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, individually and assembled. This information will help to design a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered and thus enable an optimized design of the machine tool, and of its components, in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g., laser power, angle of the laser beam, reflection coefficients, and heat transfer coefficient. Hence, a systematic approach to obtaining this matched FEM model is essential. The method has two constituent aspects: modeling the thermal behavior of the structural components, and predicting the laser beam path in order to determine the relevant beam intensity on the structural components. To match the model, both aspects have to be combined and verified empirically. In this context, an essential machine component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process. Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative approach to the described types of experimental examination is presented. Concluding, it is shown that the method and a good understanding of its two core aspects, the thermo-elastic machine behavior and the laser beam path, as well as their combination, help designers to minimize the loss of precision in the early stages of the design phase.
Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects
Procedia PDF Downloads 264
395 Historical Development of Negative Emotive Intensifiers in Hungarian
Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges
Abstract:
In this study, an exhaustive analysis of the historical development of negative emotive intensifiers in the Hungarian language was carried out via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to. Therefore, intensifiers appear with other lexical items, such as adverbs, adjectives, verbs, and, infrequently, nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), there are many lexical items which can operate as intensifiers. Intensifiers are admittedly one of the most rapidly changing groups of elements in a language. From a linguistic point of view, particularly interesting is a special group of intensifiers, the so-called negative emotive intensifiers, which, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases may function as intensifiers (e.g., borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers are scarcely examined in the literature based on large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles, produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the 'magyarlanc' NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., how they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage in this process in the same time period. A closer look at the different domains of the analysed corpus also made it certain that, during this process, the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time
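A minimal sketch of the frequency and collocation step, assuming lemmatized token lists per document (as produced by a toolkit such as magyarlanc); the intensifier lexicon, corpus snippets, and time-slicing below are toy assumptions.

```python
# Sketch of the frequency and collocation step: count how often each
# intensifier occurs and which lemmas it modifies, per time slice.
# The token lists below are tiny invented stand-ins for corpus data.
from collections import Counter, defaultdict

INTENSIFIERS = {"borzasztóan", "rettenetesen", "szörnyen"}  # example lexicon

# (year, lemmatized tokens) pairs standing in for corpus documents.
corpus = [
    (1875, ["a", "ház", "borzasztóan", "régi"]),
    (1930, ["ez", "borzasztóan", "jó", "hír"]),
    (1995, ["szörnyen", "kedves", "ember"]),
]

freq_by_period = defaultdict(Counter)
collocates = defaultdict(Counter)
for year, tokens in corpus:
    period = (year // 50) * 50          # bucket into 50-year slices
    for i, tok in enumerate(tokens):
        if tok in INTENSIFIERS:
            freq_by_period[period][tok] += 1
            if i + 1 < len(tokens):     # right neighbour = modified word
                collocates[tok][tokens[i + 1]] += 1

print(dict(freq_by_period))
print({k: v.most_common(3) for k, v in collocates.items()})
```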
Procedia PDF Downloads 233
394 Reflective Thinking and Experiential Learning – A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities, Greater Integration of Student Profiles
Authors: Paulo Sérgio Ribeiro de Araújo Bogas
Abstract:
Although several studies have assumed (at least implicitly) that learners' approaches to learning develop into deeper approaches in higher education, there appears to be no clear theoretical basis for this assumption and no empirical evidence. As a scientific contribution to this discussion, a pedagogical intervention of a quasi-experimental nature was developed, with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing, using cases based on real brand challenges, business simulation, and customer projects. Primary and secondary experiences were incorporated in the intervention: the primary experiences are the experiential activities themselves; the secondary experiences result from the primary experiences, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, in the same context, students' responses can be described as those of students who reinforce the initial deep approach, students who maintain the initial deep approach level, and others who change from an emphasis on the deep approach to one closer to the superficial. This typology did not always confirm studies reported in the literature, namely regarding whether the initial level of deep processing would influence superficial processing, and vice versa. The result of this investigation points to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to the possible adoption of deep approaches to learning, since the intervention revealed statistically significant differences in the deep/superficial approach scores and in the experiential level. In the case of the real challenges, the categories of 'attribution of meaning and significance to what is studied' and the possibility of 'contact with an aspirational context' for the students' professional future stand out. In this category, the dimensions of autonomy that will be required of the students were also revealed when comparing the classroom context of real cases with the future professional context and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with the possibility of measuring the results of the decisions taken and an awareness of oneself, and, on the other hand, the additional effort that this practice required of some of the students.
Keywords: experiential learning, higher education, mixed methods, reflective learning, marketing
Procedia PDF Downloads 83
393 Flexural Properties of Typha Fibers Reinforced Polyester Composite
Authors: Sana Rezig, Yosr Ben Mlik, Mounir Jaouadi, Foued Khoffi, Slah Msahli, Bernard Durand
Abstract:
With increasing interest in environmental concerns, natural fibers are once again being considered as reinforcements for polymer composites. The main objective of this study is to explore another natural resource, the Typha fiber, which is renewable, without production cost, and abundantly available in nature. The aim of this study was to investigate the flexural properties of the polyester resin with and without reinforcement by Typha leaf and stem fibers. The specimens were made by the hand-lay-up process using a polyester matrix. In our work, we focused on the effect of various treatment conditions (sea water, alkali treatment, and a combination of the two treatments), as surface modifiers, on the flexural properties of the Typha fiber reinforced polyester composites. Moreover, the weight ratio of Typha leaf or stem fibers was investigated. Besides, fibers from both the leaf and the stem of the Typha plant were used to evaluate the reinforcing effect. Another parameter, the reinforcement structure, was also investigated: a first composite was made with an air-laid nonwoven structure of fibers, and a second composite with a mixture of fibers and resin for each kind of treatment. The results show that the alkali treatment and the combined process provided better mechanical properties of the composites in comparison with fibers treated by sea water. The fiber weight ratio influenced the flexural properties of the composites. Indeed, maximum flexural strengths of 69.8 and 62.32 MPa, with flexural moduli of 6.16 and 6.34 GPa, were observed for composites reinforced with leaf and stem fibers, respectively, at a 12.6% fiber weight ratio. Among the different treatments carried out, the treatment using caustic soda, whether alone or after retting in sea water, shows the best results, because it improves the adhesion between the polyester matrix and the reinforcing fibers. SEM photographs were taken to ascertain the effects of the surface treatment of the fibers. Varying the structure of the Typha fibers, the reinforcement used in bulk shows more effective results than that used in the nonwoven structure. In addition, flexural strength rises by about 65.32% in the case of a composite reinforced with a mixture of 12.6% leaf fibers and by 27.45% in the case of a composite reinforced with a nonwoven structure of 12.6% leaf fibers. Thus, to better evaluate the effect of the fiber origin, the reinforcement structure, the treatment performed, and the reinforcement factor on the performance of the composite materials, a statistical study was performed using Minitab. ANOVA was used, and the patterns of the main effects of these parameters and of the interactions between them were established. According to the statistical analysis, the fiber treatment and the reinforcement structure seem to be the most significant parameters.
Keywords: flexural properties, fiber treatment, structure and weight ratio, SEM photographs, Typha leaf and stem fibers
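For reference, flexural figures of this kind follow the standard three-point bending relations; the sketch below evaluates them, with specimen dimensions and loads as assumed placeholders rather than the paper's raw data.

```python
# Standard three-point bending relations used to derive flexural
# properties (specimen dimensions and load below are illustrative
# placeholders, not the paper's raw data).
def flexural_strength(F, L, b, d):
    """sigma_f = 3*F*L / (2*b*d^2); F in N, lengths in mm -> MPa."""
    return 3.0 * F * L / (2.0 * b * d ** 2)

def flexural_modulus(m, L, b, d):
    """E_f = L^3 * m / (4*b*d^3); m = load-deflection slope in N/mm -> MPa."""
    return L ** 3 * m / (4.0 * b * d ** 3)

F = 180.0   # peak load, N (assumed)
L = 64.0    # support span, mm (assumed)
b = 12.7    # specimen width, mm (assumed)
d = 3.2     # specimen thickness, mm (assumed)
m = 95.0    # initial load-deflection slope, N/mm (assumed)

print(f"flexural strength = {flexural_strength(F, L, b, d):.1f} MPa")
print(f"flexural modulus  = {flexural_modulus(m, L, b, d) / 1000:.2f} GPa")
```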
Procedia PDF Downloads 415
392 Mild Auditory Perception and Cognitive Impairment in mid-Trimester Pregnancy
Authors: Tahamina Begum, Wan Nor Azlen Wan Mohamad, Faruque Reza, Wan Rosilawati Wan Rosli
Abstract:
Assessing auditory perception and cognitive function during pregnancy is necessary, as pregnant women need extra effort for attention, mainly for their executive function, to maintain their quality of life. This study aimed to investigate the neural correlates of cognitive and behavioral processing during mid-trimester pregnancy. Event-related potentials (ERPs) were studied using a 128-sensor net, and the PAS or COWA (Controlled Oral Word Association) test, the WCST (Wisconsin Card Sorting Test), and the RAVLT (Rey Auditory Verbal Learning Test: immediate or interference recall (RAVLTIM), delayed recall (RAVLT DR), and total score (RAVLT TS)) were administered for neuropsychological assessment. In total, 18 subjects were recruited (n=9 in each group; control and pregnant groups). All participants in the pregnant group were within 16-27 weeks of gestation (mid-trimester). Age- and education-matched healthy control subjects were recruited to the control group. Participants were given a standardized test of auditory cognitive function in the form of an auditory oddball paradigm during the ERP study. In this paradigm, two different auditory stimuli (standard and target) were used, and subjects silently counted only the target stimuli, attending to them while ignoring the standard stimuli. Mean differences between target and standard stimuli were compared across groups. The N100 (auditory sensory ERP component) and P300 (auditory cognitive ERP component) were recorded at the T3, T4, T5, T6, Cz, and Pz electrode sites. An equal number of electrode sites showed non-significantly smaller amplitudes of the N100 component (except a significantly smaller amplitude at T3, P=0.05) and non-significantly longer latencies (except a significantly longer latency at T5, P=0.008) in the pregnant group compared to the controls. For the P300 component, most electrode sites showed non-significantly higher amplitudes, and an equal number of sites showed non-significantly shorter latencies, in the pregnant group compared to the controls. The neuropsychological results revealed a non-significantly higher PAS score and lower WCST, RAVLTIM, and RAVLTDR scores in the pregnant group compared to the controls. The results for the N100 component and the RAVLT scores suggest that auditory perception is mildly impaired, and the P300 component indicated very mild cognitive dysfunction with preserved executive functions, in the second trimester of pregnancy.
Keywords: auditory perception, pregnancy, stimuli, trimester
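A sketch of how N100 and P300 amplitudes and latencies are conventionally read off an averaged ERP trace, by locating the extremum inside each component's time window; the trace and windows below are synthetic illustrations, not the study's recordings.

```python
# Sketch: find N100 / P300 amplitude and latency as the extremum
# inside a conventional time window of an averaged ERP trace.
# The signal below is synthetic.
import numpy as np

fs = 250                                  # sampling rate, Hz (assumed)
t = np.arange(-0.1, 0.8, 1 / fs)          # epoch: -100 ms .. 800 ms
erp = (-4e-6 * np.exp(-((t - 0.10) / 0.02) ** 2)      # N100-like dip
       + 8e-6 * np.exp(-((t - 0.30) / 0.05) ** 2))    # P300-like peak

def component(erp, t, window, polarity):
    """Return (amplitude, latency) of the min/max inside the window."""
    mask = (t >= window[0]) & (t <= window[1])
    idx = np.argmin(erp[mask]) if polarity < 0 else np.argmax(erp[mask])
    return erp[mask][idx], t[mask][idx]

n100_amp, n100_lat = component(erp, t, (0.08, 0.15), polarity=-1)
p300_amp, p300_lat = component(erp, t, (0.25, 0.50), polarity=+1)
print(f"N100: {n100_amp * 1e6:.1f} uV at {n100_lat * 1e3:.0f} ms")
print(f"P300: {p300_amp * 1e6:.1f} uV at {p300_lat * 1e3:.0f} ms")
```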
Procedia PDF Downloads 384
391 Detection of Acrylamide Using Liquid Chromatography-Tandem Mass Spectrometry and Quantitative Risk Assessment in Selected Food from Saudi Market
Authors: Sarah A. Alotaibi, Mohammed A. Almutairi, Abdullah A. Alsayari, Adibah M. Almutairi, Somaiah K. Almubayedh
Abstract:
Concerns over the presence of acrylamide in food date back to 2002, when Swedish scientists reported that acrylamide forms in carbohydrate-rich foods cooked at high temperatures. Similar findings were reported by other researchers, which consequently prompted major international efforts to investigate dietary exposure and the subsequent health complications in order to properly manage this issue. In this work, we therefore aim to determine the acrylamide level in different foods (coffee, potato chips, biscuits, and baby food) commonly consumed by the Saudi population. In a total of forty-three samples, acrylamide was detected in twenty-three samples at levels of 12.3 to 2850 µg/kg. With reference to the food groups, the highest concentration of acrylamide was found in the coffee samples (<12.3-2850 µg/kg), followed by potato chips (655-1310 µg/kg) and then biscuits (23.5-449 µg/kg), whereas the lowest acrylamide level was observed in baby food (<14.75-126 µg/kg). Most coffee, biscuit, and potato chip products contain a high amount of acrylamide and are also among the most commonly consumed products. Saudi adults had mean acrylamide exposures from coffee, potato, biscuit, and cereal of 0.07439, 0.04794, 0.01125, and 0.003371 µg/kg-b.w/day, respectively. Saudi infants and children, on the other hand, had exposures from the same types of food of 0.1701, 0.1096, 0.02572, and 0.00771 µg/kg-b.w/day, respectively. Most groups have a percentile that exceeds the tolerable daily intake (TDI) cancer value (2.6 µg/kg-b.w/day). Overall, the MOE results show that the Saudi population is at high risk of acrylamide-related disease for all food types, and there is a chance of cancer risk in all age groups (all values <10,000). Furthermore, it was found that for non-cancer risks, the acrylamide in all tested foods was within the safe limit (>125), except for potato chips, for which there is a risk of disease in the population. With potato and coffee as raw materials, additional studies were conducted to assess different factors affecting acrylamide formation in fried potato and roasted coffee, including temperature, cooking time, and additives; by systematically varying processing temperatures and times, a mitigation of the acrylamide content was achieved by lowering the temperature and decreasing the cooking time. Furthermore, it was shown that the combined addition of chitosan and NaCl had a large impact on the formation.
Keywords: risk assessment, dietary exposure, MOA, acrylamide, hazard
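The exposure and margin-of-exposure (MOE) arithmetic behind conclusions of this kind can be sketched as follows; the BMDL10 of 170 µg/kg-b.w/day for neoplastic effects follows EFSA's commonly cited reference point (an assumption here, not stated in the abstract), and the intake and body-weight figures are illustrative.

```python
# Sketch of the quantitative risk calculation: dietary exposure from
# concentration, intake and body weight, then the margin of exposure
# (MOE) against a benchmark dose.
def exposure(conc_ug_per_kg, intake_kg_per_day, body_weight_kg):
    """Dietary exposure in ug per kg body weight per day."""
    return conc_ug_per_kg * intake_kg_per_day / body_weight_kg

BMDL10 = 170.0        # ug/kg-b.w/day, neoplastic endpoint (assumed reference)
TDI_CANCER = 2.6      # ug/kg-b.w/day, threshold used in the study

# Example: adult (70 kg, assumed) eating 30 g/day of chips at 1310 ug/kg.
exp_adult = exposure(1310.0, 0.030, 70.0)
moe = BMDL10 / exp_adult
print(f"exposure = {exp_adult:.3f} ug/kg-b.w/day")
print(f"MOE = {moe:.0f}  ({'concern' if moe < 10000 else 'low concern'})")
print(f"exceeds TDI? {exp_adult > TDI_CANCER}")
```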
Procedia PDF Downloads 58
390 Development of a Process Method to Manufacture Spreads from Powder Hardstock
Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien
Abstract:
It has been over 150 years since margarine was first discovered and manufactured using liquid oil, liquefied hardstock oils, and other oil-phase and aqueous-phase ingredients. Henry W. Bradley first used vegetable oils in the liquid state around 1871; since then, spreads have traditionally been manufactured using liquefied oils. The main objective of this study was to develop a process method to produce spreads using spray-dried hardstock fat powders as structuring fats in place of the current liquid structuring fats. A high-shear mixing system was used to condition the fat phase, and the aqueous phase was prepared separately. Using a single scraped surface heat exchanger and a pin stirrer, margarine was produced. The process method was developed to produce spreads with 40%, 50%, and 60% fat. The developed method was divided into three steps. In the first step, fat powders were conditioned by melting and dissolving them into liquid oils. The liquefied portion of the oils was at 65 °C, whilst the spray-dried fat powder was at 25 °C. The two were mixed in a mixing vessel at 900 rpm for 4 minutes. The rest of the ingredients, i.e., lecithin, colorant, vitamins, and flavours, were added at ambient conditions to complete the fat/oil phase. The water phase was prepared separately by mixing salt, water, preservative, and acidifier in a mixing tank. Milk was also prepared separately, by pasteurizing it at 79 °C prior to feeding it into the aqueous phase. All the water-phase contents were chilled to 8 °C. The oil phase and water phase were mixed in a tank, then fed into a single scraped surface heat exchanger. After the scraped surface heat exchanger, the emulsion was fed into a pin stirrer to work the formed crystals and produce margarine. The margarine produced using the developed process had fat levels of 40%, 50%, and 60%, and passed all the qualitative, stability, and taste assessments. The scores were 6/10, 7/10, and 7.5/10 for the 40%, 50%, and 60% fat spreads, respectively. The success of the trials brought about differentiated knowledge of how to manufacture spreads using non-micronized spray-dried fat powders as hardstock. Manufacturers no longer need to store structuring fats at 80-90 °C, even in winter; instead, they can adapt their processes to use fat powders, which need to be stored at only 25 °C. The developed process used one scraped surface heat exchanger instead of the four to five currently used in votator-based plants. The use of a single scraped surface heat exchanger translated to about 61% energy savings, i.e., 23 kW per ton of product. Furthermore, the energy saved by implementing separate pasteurization was calculated to be 6.5 kW per ton of product produced.
Keywords: margarine emulsion, votator technology, margarine processing, scraped surface heat exchanger, fat powders
Procedia PDF Downloads 90
389 A Comparative Analysis of an All-Optical Switch Using Chalcogenide Glass and Gallium Arsenide Based on Nonlinear Photonic Crystal
Authors: Priyanka Kumari Gupta, Punya Prasanna Paltani, Shrivishal Tripathi
Abstract:
This paper proposes a nonlinear photonic crystal ring resonator-based all-optical 2 × 2 switch. The nonlinear Kerr effect is used to evaluate the essential 2 × 2 states of the photonic crystal-based optical switch, namely the bar and cross states. The photonic crystal comprises a two-dimensional square lattice of dielectric rods in an air background. Two different dielectric materials are used separately in this comparison study: initially chalcogenide glass rods, then GaAs rods. For both materials, the operating wavelength, bandgap diagram, operating power intensities, and the performance parameters of an optical switch, namely the extinction ratio, insertion loss, and crosstalk, have been estimated using the plane wave expansion and finite-difference time-domain methods. The chalcogenide glass material (Ag20As32Se48) has a high refractive index of 3.1, which makes it highly suitable for switching operations. This dielectric material is immersed in an air background, with a nonlinear Kerr coefficient of 9.1 x 10^-17 m²/W. The resonance wavelength is at 1552 nm, with operating power intensities at the cross state and bar state of around 60 W/μm² and 690 W/μm², respectively. The extinction ratio, insertion loss, and crosstalk values for the chalcogenide glass at the cross state are 17.19 dB, 0.051 dB, and -17.14 dB, and for the bar state, the values are 11.32 dB, 0.025 dB, and -11.35 dB, respectively. The gallium arsenide (GaAs) dielectric material has a high refractive index of 3.4 and is a direct-bandgap semiconductor material highly preferred nowadays for switching operations. This dielectric material is immersed in an air background, with a nonlinear Kerr coefficient of 3.1 x 10^-16 m²/W. The resonance wavelength is at 1558 nm, with operating power intensities at the cross state and bar state of around 110 W/μm² and 200 W/μm², respectively. The extinction ratio, insertion loss, and crosstalk values for GaAs at the cross state are found to be 3.36 dB, 2.436 dB, and -5.8 dB, and for the bar state, the values are 15.60 dB, 0.985 dB, and -16.59 dB, respectively. In summary, this paper proposes an all-optical 2 × 2 switch based on a nonlinear photonic crystal using a ring resonator; the two-dimensional photonic crystal comprises a square lattice of dielectric rods in an air background, and the resonance wavelength lies within the photonic bandgap. The widely used material GaAs is also considered, and its performance is compared with that of the chalcogenide glass. The presented structure can potentially be applied in optical integrated circuits and information processing.
Keywords: photonic crystal, FDTD, ring resonator, optical switch
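For reference, the sketch below applies one common set of definitions for these figures of merit to illustrative port powers (the paper may normalize differently); with the chosen powers it roughly reproduces the chalcogenide cross-state values above.

```python
# Common figure-of-merit definitions for a 2x2 switch (one convention
# among several; power values are illustrative).
import math

def db(ratio):
    return 10.0 * math.log10(ratio)

def switch_metrics(p_in, p_desired, p_undesired):
    """Extinction ratio, insertion loss and crosstalk for one state."""
    er = db(p_desired / p_undesired)         # desired vs. leaked port
    il = -db(p_desired / p_in)               # loss into the desired port
    ct = db(p_undesired / p_in)              # leaked power vs. input
    return er, il, ct

# Cross state example: 98% of input exits the cross port, 1.9% the bar port.
er, il, ct = switch_metrics(p_in=1.0, p_desired=0.98, p_undesired=0.019)
print(f"ER = {er:.2f} dB, IL = {il:.3f} dB, CT = {ct:.2f} dB")
```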
Procedia PDF Downloads 77
388 Virtual Team Performance: A Transactive Memory System Perspective
Authors: Belbaly Nassim
Abstract:
Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the exposure companies experience when their employees encounter globalization and decentralization pressures, and the consequent need to monitor VT performance. Organizations are regularly limited by misalignment between the behavioral capabilities of the team's dispersed competences and knowledge capabilities, by how trust issues interplay with and influence these VT dimensions, and by the effects of such exchanges. In fact, the future success of business depends on the extent to which VTs manage their dispersed expertise, skills, and knowledge efficiently to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility, and knowledge coordination. A TMS can be understood as a composition of a structural component, residing in individual knowledge, and a set of communication processes among individuals. The individual knowledge is shared while being retrieved and applied, and the learning is coordinated. TMS is driven by the central concept that the system is built on the distinction between internal and external memory encoding. A VT learns something new and catalogs it in memory for future retrieval and use. TMS draws on the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity. In fact, VTs consist of dispersed expertise, skills, and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, it may also build trust among VT members, developing the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between the TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity. This study also gives researchers insight into the use of TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.
Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination
Procedia PDF Downloads 172
387 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
The industrial application of classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using a convolutional neural network and a recursive neural network has shown a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction methods, followed by classification. The features extracted from the spectrum profiles seek patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in the feature space improves the possibility of enhancing the classification accuracy. The nonlinear way in which neural networks extract features involves a variety of transformations and mathematical optimizations, whereas principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier-transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach using deep machine learning has shown high accuracy. The paper's findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization of the neural network models enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
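A minimal sketch of the spectrogram preprocessing stage: a windowed short-time Fourier transform of a detector waveform whose log-scaled time-frequency map would serve as the training input; the sampling rate, window choice, and synthetic signal are assumptions.

```python
# Sketch of the spectrogram pre-processing stage: a windowed
# short-time Fourier transform of a detector pulse train, whose
# time-frequency map would be fed to the CNN/RNN classifier.
# The waveform here is synthetic noise plus sparse pulses.
import numpy as np
from scipy.signal import spectrogram

fs = 1_000_000                      # sampling rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
rng = np.random.default_rng(1)
signal = rng.normal(0, 0.05, t.size)            # electronic noise
signal[::2500] += 1.0                           # sparse pulse events

# Hann window: a typical choice when optimizing the windowing function.
f, tt, Sxx = spectrogram(signal, fs=fs, window="hann",
                         nperseg=256, noverlap=128)

# Log-scaled spectrogram as the training input representation.
features = np.log10(Sxx + 1e-12)
print(features.shape)   # (freq bins, time frames)
```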
Procedia PDF Downloads 150
386 Study of Biofouling Wastewater Treatment Technology
Authors: Sangho Park, Mansoo Kim, Kyujung Chae, Junhyuk Yang
Abstract:
The International Maritime Organization (IMO) recognized the problem of invasive species and adopted the "International Convention for the Control and Management of Ships' Ballast Water and Sediments" in 2004, which came into force on September 8, 2017. In 2011, the IMO approved the "Guidelines for the Control and Management of Ships' Biofouling to Minimize the Transfer of Invasive Aquatic Species" to minimize the movement of invasive species by hull-attached organisms, and required ships to manage the organisms attached to their hulls. Invasive species enter new environments through ships' ballast water and hull attachment. However, several obstacles to implementing these guidelines have been identified, including a lack of underwater cleaning equipment, regulations on underwater cleaning activities in ports, and difficulty accessing crevices in underwater areas. The shipping industry, which is the party responsible for implementing these guidelines, wants to apply them for the fuel cost savings resulting from the removal of organisms attached to the hull, but it anticipates significant difficulties due to the obstacles mentioned above. Robots or people remove the organisms attached to the hull underwater, and the resulting wastewater includes various species of organisms as well as particles of paint and other pollutants. Currently, there is no technology available to sterilize the organisms in the wastewater or to stabilize the heavy metals in the paint particles. In this study, we aim to analyze the characteristics of the wastewater generated by the removal of hull-attached organisms and to select the optimal treatment technology. The organisms in the wastewater generated during removal are brought to the biological treatment standard (D-2) using the sterilization technology applied in ships' ballast water treatment systems. The heavy metals and other pollutants in the paint particles generated during removal are treated using stabilization technologies such as thermal decomposition. The wastewater generated is treated in a two-step process: 1) development of sterilization technology through pretreatment filtration equipment and electrolytic sterilization treatment, and 2) development of technology for removing particulate pollutants such as heavy metals and dissolved inorganic substances. Through this study, we will develop a biological removal technology and an environmentally friendly processing system for the waste generated after removal that meets the requirements of the government and the shipping industry and lays the groundwork for future treatment standards.
Keywords: biofouling, ballast water treatment system, filtration, sterilization, wastewater
Procedia PDF Downloads 109
385 Development of a Table-Top Composite Wire Fabrication System for Additive Manufacturing
Authors: Krishna Nand, Mohammad Taufik
Abstract:
Fused Filament Fabrication (FFF) is one of the most popular additive manufacturing (AM) technologies. In FFF technology, a wire-form material (filament) is fed into a heated chamber, where it is converted into a semi-solid form and extruded out of a nozzle to be deposited on the build platform to fabricate the part. FFF technology is expanding and covering the market at a very rapid rate, so the need for raw materials for 3D printing is also increasing. The cost of 3D printing is directly affected by filament cost. To make 3D printing more economical, a compact and portable filament/wire extrusion system is needed. Wire extrusion systems that extrude ordinary wire/filament made of a single material are available in the market; however, extrusion systems to make composite wire/filament are not. Hence, in this study, initial efforts have been made to develop a table-top composite wire extruder. The developed system consists of mechanical parts, electronic parts, and a control system. A multiple-channel hopper, extrusion screw, melting chamber and nozzle, cooling zone, and spool winder are some of the mechanical parts, while motors, a heater, a temperature sensor, and cooling fans are some of the electronic parts used to develop this system. A control board is used to control the various process parameters, such as temperature and motor speed. For the production of composite wire/filament, two different materials can be fed through the two channels of the hopper, where they are mixed and carried to the heated zone by the extrusion screw. The extrusion screw is rotated by a motor whose speed is controlled by the controller according to the required material extrusion rate. In the heated zone, the material melts with the help of a heating element and is extruded out of the nozzle in the form of wire. The developed system occupies little floor space due to the vertical orientation of its heating chamber. It is capable of extruding ordinary filament as well as composite filament compatible with the 3D printers available in the market. Further, the developed system could be employed in the research and development of materials, processing, and characterization for 3D printers. The developed system presented in this study could be a good choice for hobbyists and researchers dealing with the fused filament fabrication process, reducing 3D printing costs significantly by recycling waste material into 3D printer feed material. It could also be explored as an alternative for filament production at the commercial level.
Keywords: additive manufacturing, 3D printing, filament extrusion, pellet extrusion
Procedia PDF Downloads 167
384 Job Resource, Personal Resource, Engagement and Performance with Balanced Score Card in the Integrated Textile Companies in Indonesia
Authors: Nurlaila Effendy
Abstract:
Companies in Asia face a number of constraints under the tight competition of the ASEAN Economic Community 2015 and globalization. An economic capitalism system, as an integral part of globalization, brings broad impacts, and companies need to improve their business performance accordingly. Organizational development has quite clearly demonstrated that aligning individuals' personal goals with the goals of the organization translates into measurable and sustained performance improvement. Human capital is a key to achieving company performance. Employee engagement (EE) means that employees create and express themselves physically, cognitively, and emotionally to achieve company goals and individual goals; one experiences total involvement when undertaking one's job and feels integrated with the job and the organization. A leader plays a key role in attaining the goals and objectives of a company or organization, and any manager in a company needs leadership competence and a global mindset. As one of the developments in positive organizational behavior, psychological capital (PsyCap) is assumed to be one of the most important forms of capital in the global mindset, in addition to intellectual capital and social capital. Textile companies likewise face a number of constraints in tight regional and global competition. This research involved 42 managers in two textile companies and a spinning company belonging to one group in Central Java, Indonesia. It is a quantitative study using Partial Least Squares (PLS), examining job resources (social support and organizational climate) and personal resources (the four dimensions of psychological capital and leadership competence) as predictors of employee engagement, and employee engagement and leadership competence as predictors of a leader's performance. The performance of a leader is measured by the achievement of objective strategies in terms of four perspectives (financial and non-financial) in a Balanced Scorecard (BSC). The study covered one year, the business plan year 2014, from January to December 2014. The results show a correlation between job resources (social support coefficient 0.036; organizational climate coefficient 0.220) and personal resources (PsyCap coefficient 0.513; leadership competence coefficient 0.249) and employee engagement, and a correlation between employee engagement (coefficient 0.279) and leadership competence (coefficient 0.581) and performance.
Keywords: organizational climate, social support, psychological capital, leadership competence, employee engagement, performance, integrated textile companies
Procedia PDF Downloads 433383 Effect of Low Calorie Sweeteners on Chemical, Sensory Evaluation and Antidiabetic of Pumpkin Jam Fortified with Soybean
Authors: Amnah M. A. Alsuhaibani, Amal N. Al-Kuraieef
Abstract:
Introduction: In recent decades, the production of low-calorie jams, comprising low-calorie fruits and low-calorie sweeteners, has become necessary for diabetics. Objective: The research aimed to prepare low-calorie formulated pumpkin jams (with fructose, stevia, and aspartame) incorporating soybean, and to evaluate the jams through chemical analysis and sensory evaluation after storage for six months. Moreover, the possible effect of the consumption of low-calorie jams on diabetic rats was investigated. Methods: Five formulas of pumpkin jam with different sweeteners (sucrose, fructose, stevia, and aspartame) and soybean were prepared, stored at 10 °C for six months, and compared to ordinary pumpkin jam. The chemical composition and sensory attributes of the formulated jams were evaluated at zero time and after 3 and 6 months of storage. The three most acceptable pumpkin jams were taken forward for a biological study on diabetic rats. The rats were divided into five groups: group 1 served as the negative control, while diabetes was induced by streptozotocin in the four remaining groups, namely the positive diabetic control (group 2) and rats fed a standard diet with 10% sucrose soybean jam, fructose soybean jam, or stevia soybean jam (groups 3, 4, and 5, respectively). Results: The contents of protein, fat, ash, and fiber were increased, while carbohydrate was decreased, in the low-calorie formulated pumpkin jams compared to ordinary jam. The aspartame soybean pumpkin jam scored lower on all sensory attributes than the other jams, followed by the stevia soybean pumpkin jam. Using non-nutritive sweeteners (stevia and aspartame) with soybean in jam processing lowered the sensory-attribute scores after storage for 3 and 6 months. The highest scores were recorded for the sucrose and fructose soybean jams, followed by the stevia soybean jam, while the aspartame soybean jam recorded a significantly lower score. The biological evaluation showed a significant improvement in the body weight and feed efficiency ratio (FER) of rats after six weeks of consumption of the standard diet with jams (groups 3, 4, and 5) compared to group 1. Rats consuming the 10% low-calorie jams with the nutritive sweetener (fructose) or the non-nutritive sweetener (stevia) and soybean (groups 4 and 5) showed a significant decrease in glucose level, liver enzyme activities, and liver cholesterol and total lipids, in addition to a significant increase in insulin and glycogen, compared to the levels of group 2. Conclusion: Low-calorie pumpkin jams can be prepared with low-calorie sweeteners and soybean and stored for 3 months at 10 °C without changes in sensory attributes. Consumption of stevia pumpkin jam fortified with soybean had positive health effects on streptozotocin-induced diabetes in rats.Keywords: pumpkin jam, HFCS, aspartame, stevia, storage
Procedia PDF Downloads 183382 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data
Authors: M. Kharrat, G. Moreau, Z. Aboura
Abstract:
The latest advances in the weaving industry, combined with increasingly sophisticated materials-processing methods, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with a 3D architecture offer better mechanical properties than 2D-reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the applied loading. The acoustic emission (AE) technique is a well-established tool for discriminating between damage mechanisms: suitable sensors are used during the mechanical test to monitor the structural health of the material, relevant AE features are extracted from the recorded signals, and the data are then analyzed using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation setup was established in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, Digital Image Correlation, Acoustic Emission (AE), and micro-tomography. In this study, a multi-variable AE data analysis approach was developed to discriminate between the signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database, which was then used to construct a supervised classifier for automatic recognition of AE signals. Several materials with different constituents were tested under various loadings in order to feed and enrich the learning database. The methodology presented in this work was useful for refining the damage threshold for the new generation of materials, and the damage mechanisms around this threshold were highlighted. The obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or raising the AE detection threshold. The approach was validated on different material configurations: for the same material and the same type of loading, the identified classes are reproducible and little disturbed. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition
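A minimal sketch of the two-stage workflow described above, with generic stand-ins for the unnamed algorithms: k-means for the unsupervised clustering step and a k-NN classifier for the supervised recognition step. The AE feature set and all data below are synthetic assumptions.

```python
# Stage 1: cluster AE features without a priori labels; Stage 2: train a
# supervised classifier on the labeled clusters. Algorithms are assumed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic AE features: amplitude (dB), duration (us), counts, peak frequency (kHz)
features = rng.normal(loc=[60, 200, 30, 150], scale=[10, 80, 10, 60], size=(500, 4))

X = StandardScaler().fit_transform(features)

# Stage 1: unsupervised clustering of the AE signals.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
# (In the study, multi-instrumentation is what labels each cluster with a
#  physical mechanism, e.g. matrix cracking, debonding, fiber breakage, noise.)

# Stage 2: the labeled database trains a supervised classifier.
clf = KNeighborsClassifier(n_neighbors=5).fit(X, clusters)
print(clf.predict(X[:5]))  # automatic recognition of incoming AE signals
```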
Procedia PDF Downloads 155381 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging
Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa
Abstract:
Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists and recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), yielding 35 benign and 12 malignant nodules. All MR images were acquired at 1.5 T: a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and the extraction of 150 radiomic features (30 features at each of the 5 acquisition times), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: A Naive Bayes algorithm working on 79 features selected by the TWIST system proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78%, and a global accuracy of 87% (average values of the two training-testing procedures, ab-ba). In the subset of 47 non-specific nodules, the algorithm predicted the outcome of 45 nodules that an expert radiologist could not identify. Conclusion: In this pilot study, we identified a radiomic approach allowing ML systems to perform well in the diagnosis of non-specific nodules at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist is not able to identify the kind of lesion, and it reduces the need for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.Keywords: breast, machine learning, MRI, radiomics
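The sketch below illustrates the shape of the final predictive model. The authors used the proprietary TWIST evolutionary system for the split and feature selection; here a plain train/test split and univariate selection stand in for it, feeding a Naive Bayes classifier as in the paper. All feature values are synthetic.

```python
# Stand-in pipeline: select 79 of 150 radiomic features (as reported),
# then classify with Gaussian Naive Bayes. TWIST is replaced by SelectKBest.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(75, 150))    # 150 radiomic features per nodule (synthetic)
y = rng.integers(0, 2, size=75)   # 0 = benign, 1 = malignant (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)

model = make_pipeline(SelectKBest(f_classif, k=79), GaussianNB())
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```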
Procedia PDF Downloads 267380 Chromium (VI) Removal from Aqueous Solutions by Ion Exchange Processing Using Eichrom 1-X4, Lewatit Monoplus M800 and Lewatit A8071 Resins: Batch Ion Exchange Modeling
Authors: Havva Tutar Kahraman, Erol Pehlivan
Abstract:
In recent years, environmental pollution by wastewater has risen critically. Effluents discharged from various industries cause this challenge. Different types of pollutants, such as organic compounds, oxyanions, and heavy metal ions, pose a threat to humans and all other living things, and heavy metals are considered one of the main pollutant groups in wastewater. There is therefore a great need to apply and enhance water treatment technologies. Among the adopted treatment technologies, adsorption is one method that is gaining more and more attention because of its easy operation, simplicity of design, and versatility. Ion exchange is one of the preferred methods for the removal of heavy metal ions from aqueous solutions and has found widespread application in water remediation technologies during the past several decades. The purpose of this study is the removal of hexavalent chromium, Cr(VI), from aqueous solutions. Cr(VI) is a well-known, highly toxic metal that modifies the DNA transcription process and causes serious chromosomal aberrations; its treatment and removal have received great attention in order to keep it within the allowed legal standards. The present paper investigates some aspects of the use of three anion exchange resins: Eichrom 1-X4, Lewatit Monoplus M800, and Lewatit A8071. Batch adsorption experiments were carried out to evaluate the adsorption capacity of these three commercial resins in the removal of Cr(VI) from aqueous solutions. The chromium solutions used in the experiments were synthetic solutions. The parameters that affect adsorption (solution pH, adsorbent concentration, contact time, and initial Cr(VI) concentration) were investigated at room temperature. High adsorption rates of metal ions were observed for the three resins at the onset, and plateau values were then gradually reached within 60 min. The optimum pH for Cr(VI) adsorption was found to be 3.0 for all three resins, and adsorption decreases with increasing pH for the three anion exchangers. The suitability of the Freundlich, Langmuir, and Scatchard models for the Cr(VI)-resin equilibrium was investigated. The results obtained in this study demonstrate excellent comparability between the three anion exchange resins, indicating that Eichrom 1-X4 is the most effective, showing the highest adsorption capacity for the removal of Cr(VI) ions. The anion exchange resins investigated in this study can be used for the efficient removal of chromium from water and wastewater.Keywords: adsorption, anion exchange resin, chromium, kinetics
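A small sketch of fitting the Langmuir and Freundlich isotherms named above to batch equilibrium data by least squares. The data points are invented for illustration; the abstract reports no raw isotherm table.

```python
# Fit Langmuir (qe = qmax*KL*Ce / (1 + KL*Ce)) and Freundlich
# (qe = KF * Ce**(1/n)) isotherms to synthetic batch equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0])   # equilibrium Cr(VI) conc. (mg/L), synthetic
qe = np.array([12.0, 20.0, 33.0, 41.0, 46.0])   # resin uptake (mg/g), synthetic

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[50.0, 0.05])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[5.0, 2.0])
print(f"Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg")
print(f"Freundlich: KF={KF:.2f}, n={n:.2f}")
```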
Procedia PDF Downloads 260379 Cognitive Control Moderates the Concurrent Effect of Autistic and Schizotypal Traits on Divergent Thinking
Authors: Julie Ramain, Christine Mohr, Ahmad Abu-Akel
Abstract:
Divergent thinking, a cognitive component of creativity, and particularly the ability to generate unique and novel ideas, has been linked to both autistic and schizotypal traits. However, to our knowledge, the concurrent effect of these trait dimensions on divergent thinking has not been investigated. Moreover, it has been suggested that creativity is associated with different types of attention and cognitive control, and consequently with how information is processed in a given context. Intriguingly, and consistent with the diametric model, autistic and schizotypal traits have been associated with contrasting attentional and cognitive control styles: positive schizotypal traits have been associated with reactive cognitive control and attentional flexibility, while autistic traits have been associated with proactive cognitive control and an increased focus of attention. The current study investigated the relationship between divergent thinking, autistic and schizotypal traits, and cognitive control in a non-clinical sample of 83 individuals (males = 42%; mean age = 22.37, SD = 2.93), sufficient to detect a medium effect size. Divergent thinking was evaluated with an adapted version of the Figural Torrance Test of Creative Thinking. Crucially, since we were interested in testing divergent thinking productivity across contexts, participants were asked to generate items from basic shapes in four different contexts. The variance of the proportion of unique to total responses across contexts represented a measure of context adaptability, with lower variance indicating increased context adaptability. Cognitive control was estimated with the Behavioral Proactive Index of the AX-CPT task, with higher scores representing the ability to actively maintain goal-relevant information in a sustained/anticipatory manner. Autistic and schizotypal traits were assessed with the Autism Quotient (AQ) and the Community Assessment of Psychic Experiences (CAPE-42). Generalized linear models revealed a three-way interaction of autistic traits, positive schizotypal traits, and proactive cognitive control associated with increased context adaptability. Specifically, the concurrent effect of autistic and positive schizotypal traits on increased context adaptability was moderated by the level of proactive control and was only significant when proactive cognitive control was high. Our study reveals that autistic and positive schizotypal traits interactively facilitate the capacity to generate unique ideas across various contexts. However, this effect depends on cognitive control mechanisms indicative of the ability to proactively maintain attention when needed. The current results point to a unique profile of divergent thinkers who are able to tap both systematic and flexible processing modes within and across contexts. This is particularly intriguing, as such a combination of phenotypes has been proposed to explain the genius of Beethoven, Nash, and Newton.Keywords: autism, schizotypy, creativity, cognitive control
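A sketch of the reported analysis, assuming a Gaussian generalized linear model with a three-way interaction, fitted via statsmodels' formula API. The variable names and simulated data are placeholders, not the study's dataset.

```python
# GLM with a three-way interaction: adaptability ~ AQ * schizotypy * proactive index.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 83  # sample size reported in the study
df = pd.DataFrame({
    "aq": rng.normal(size=n),          # autistic traits (AQ)
    "schizo_pos": rng.normal(size=n),  # positive schizotypy (CAPE-42)
    "proactive": rng.normal(size=n),   # Behavioral Proactive Index (AX-CPT)
})
# Synthetic outcome with a built-in three-way interaction effect.
df["adaptability"] = (0.3 * df.aq * df.schizo_pos * df.proactive
                      + rng.normal(scale=1.0, size=n))

model = smf.glm("adaptability ~ aq * schizo_pos * proactive", data=df).fit()
print(model.summary())  # the aq:schizo_pos:proactive term is the 3-way interaction
```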
Procedia PDF Downloads 137378 Case Study of Mechanised Shea Butter Production in South-Western Nigeria Using the LCA Approach from Gate-to-Gate
Authors: Temitayo Abayomi Ewemoje, Oluwamayowa Oluwafemi Oluwaniyi
Abstract:
Agriculture and the food processing industry are among the industrial sectors that use the largest amounts of energy; thus, a large amount of gases from their fuel combustion technologies is released into the environment. The choice of input energy supply not only directly affects the environment but also poses a threat to human health. The study was therefore designed to assess each unit production process in order to identify hotspots, using the life cycle assessment (LCA) approach, in South-western Nigeria. Data obtained on site, such as machine power ratings, operation durations, and the material inputs and outputs of each shea butter unit process, were used to model the life cycle impact analysis in the GaBi6 (Holistic Balancing) software. Four scenarios were drawn up for the impact assessment: material sourcing from Kaiama (Scenarios 1 and 3) or Minna (Scenarios 2 and 4), each with one of two heat supply sources (liquefied petroleum gas 'LPG' in Scenarios 1 and 2; a 10.8 kW diesel heater in Scenarios 3 and 4). Shea butter production was modelled in GaBi6 for a functional unit of 1 kg of shea butter, and the Tool for the Reduction and Assessment of Chemical and other Environmental Impacts (TRACI) midpoint assessment was the tool used to analyse the life cycle inventories of the four scenarios. Eight impact categories were observed in all four scenarios, of which three had the greatest impacts on the environment in Scenarios 1-4, respectively: Global Warming Potential (GWP) (0.613, 0.751, 0.661, 0.799) kg CO2-Equiv., Acidification Potential (AP) (0.112, 0.132, 0.129, 0.149) kg H+ moles-Equiv., and Smog (0.044, 0.059, 0.049, 0.063) kg O3-Equiv. Impacts from transportation activities contributed most to these impact categories, due to the large volume of petrol combusted during the transportation of the purchased raw shea kernels, leading to releases of gases such as CO2, CH4, N2O, SO2, and NOx into the environment. The ratio of the transportation distances from Minna and from Kaiama to the production site was approximately 3.5. The shea butter unit processes with the greatest impacts in all categories were, in ascending order of magnitude, packaging, milling, and churning; these were identified as hotspots that may require attention. From the 1 kg shea butter functional unit, it was inferred that locating the production site at the shortest travelling distance from the raw material source and combusting LPG for heating would reduce all the assessed impact categories.Keywords: GaBi6, Life cycle assessment, shea butter production, TRACI
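For readers unfamiliar with midpoint characterization, the step GaBi6/TRACI performs can be summarized as multiplying each inventory emission by a characterization factor (CF) and summing per impact category. The CFs and emission amounts in the sketch below are illustrative placeholders, not TRACI's actual values or the study's inventory.

```python
# Midpoint score = sum over substances of (emitted amount * characterization factor).
inventory = {  # kg emitted per 1 kg shea butter (functional unit), synthetic
    "CO2": 0.55, "CH4": 0.002, "N2O": 0.0001, "SO2": 0.003, "NOx": 0.004,
}

gwp_cf = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}  # kg CO2-Equiv. per kg (assumed, IPCC-style)
acid_cf = {"SO2": 50.8, "NOx": 40.0}              # H+ moles-Equiv. per kg (assumed)

def impact(inv, cf):
    return sum(inv.get(substance, 0.0) * factor for substance, factor in cf.items())

print("GWP:", impact(inventory, gwp_cf), "kg CO2-Equiv.")
print("AP:", impact(inventory, acid_cf), "H+ moles-Equiv.")
```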
Procedia PDF Downloads 323377 Investigation of Polypropylene Composite Films With Carbon Nanotubes and the Role of β Nucleating Agents for the Improvement of Their Water Vapor Permeability
Authors: Glykeria A. Visvini, George N. Mathioudakis, Amaia Soto Beobide, Aris E. Giannakas, George A. Voyiatzis
Abstract:
Polymeric nanocomposites have generated considerable interest in both academic research and industry because their properties can be tailored by adjusting the type and concentration of the nano-inclusions, resulting in complementary and adaptable characteristics. The exceptional and/or unique properties of the nanocomposites, including high mechanical strength and stiffness, ease of processing, and a lightweight nature, are attributed to the high surface area and the electrical and/or thermal conductivity of the nano-fillers, which make them appealing materials for a wide range of engineering applications. Polymeric 'breathable' membranes enabling water vapor permeability (WVP) can be designed either by using micro/nano-fillers able to interrupt the continuity of the polymer phase, generating micro/nano-porous structures, and/or by creating micro/nano-pores in the composite material by uniaxial/biaxial stretching. Among the nanofillers, carbon nanotubes (CNTs) exhibit particularly high WVP, and for this reason they have already been proposed for gas separation membranes. In a similar context, they could prove to be a promising alternative or complementary filler nanomaterial for the development of 'breathable' products. Polypropylene (PP) is a commonly utilized thermoplastic polymer matrix in the development of composite films due to its easy processability and low price, combined with its good chemical and physical properties. PP is known to present several crystalline phases (α, β, and γ), depending on the applied treatment process, which have a significant impact on its final properties, particularly in terms of WVP. Specifically, the development of the β-phase in PP, in combination with stretching, is anticipated to modify the crystalline behavior and extend the microporosity of the polymer matrix, yielding enhanced WVP. The primary objective of this study is to develop breathable nanocarbon-based (functionalized MWCNTs) PP composite membranes, potentially avoiding the stretching process altogether. This proposed alternative is expected to have a better performance/cost ratio than the current stretched PP/CaCO3 composite benchmark membranes. The focus is to investigate the impact of both β-nucleator(s) and nano-carbon fillers on the water vapor transmission rate of the relevant PP nanocomposites.Keywords: carbon nanotubes, nanocomposites, nucleating agents, polypropylene, water vapor permeability
Procedia PDF Downloads 73376 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength
Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong
Abstract:
This paper presents the evaluation of various soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme based on computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups ('Good Earth' and 'Soft Clay') based on particle size distribution (PSD) and water content (w), obtained from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive; thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as additions to the computer vision system, to further develop this innovative, non-destructive, and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). A previous study found that the ANN model coupled with ρ can classify soils into 'Good Earth' and 'Soft Clay' in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three measurements were targeted for addition to the computer vision scheme: the apparent electrical resistivity of the soil (ρ), measured using a set of four probes arranged in a Wenner array; the soil strength, measured using a modified mini cone penetrometer; and w, measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof of concept was conducted through a series of seven tests with three types of soils: 'Good Earth', 'Soft Clay', and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w, and CPT measurements can be collectively analyzed to classify soils into 'Good Earth' or 'Soft Clay' and are feasible complementary methods for the computer vision system.Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification
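A minimal sketch of the GLCM texture step described above, assuming scikit-image. The distances, angles, and texture properties chosen here are common defaults, not necessarily those used by the authors, and the soil image is a random stand-in for a camera frame.

```python
# Compute GLCM texture features that would feed the ANN classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(4)
soil_image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in frame

glcm = graycomatrix(soil_image, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]
print(features)
# For the resistivity channel, a Wenner array with probe spacing a gives
# rho = 2 * pi * a * V / I, which would be appended to this feature vector.
```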
Procedia PDF Downloads 239375 Robust Processing of Antenna Array Signals under Local Scattering Environments
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the beamformer is equipped with a massive antenna array. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, conventional GSC-based adaptive beamformers have been shown to be very sensitive to the mismatch problems arising in local scattering situations. In this paper, we present an effective GSC-based beamformer that is robust against the mismatch problems mentioned above. The proposed beamformer adaptively estimates the actual direction of the desired signal using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation of an appropriate steering vector: a matrix associated with the direction vectors of the signal sources is first created, and projection matrices related to this matrix are then generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for adaptive beamforming can be easily found. Using the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch
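To make the GSC structure referenced above concrete, the sketch below implements a basic (non-robust) GSC in NumPy: quiescent branch, blocking matrix, and adaptive weights. A uniform linear array with half-wavelength spacing is assumed; the robust steering-vector estimation proposed in the paper is not reproduced here.

```python
# Basic GSC: w = w_q - B w_a, with B spanning the null space of the steering vector.
import numpy as np

def steering(theta_deg, M):
    n = np.arange(M)
    return np.exp(-1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

M, snapshots = 8, 2000
rng = np.random.default_rng(5)
a = steering(0.0, M)                   # presumed SOI direction: broadside
a_i = steering(40.0, M)                # interference direction (assumed)

s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
i = 3 * (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots))
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = np.outer(a, s) + np.outer(a_i, i) + noise  # received array snapshots

w_q = a / (a.conj() @ a)               # quiescent weight vector
# Blocking matrix: orthonormal basis of the null space of a^H.
_, _, Vh = np.linalg.svd(a[None, :].conj())
B = Vh[1:].conj().T                    # M x (M-1)

R = X @ X.conj().T / snapshots         # sample covariance matrix
w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)
w = w_q - B @ w_a                      # overall GSC weights
y = w.conj() @ X                       # beamformer output
print("output power:", np.mean(np.abs(y) ** 2))
```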
Procedia PDF Downloads 112374 Extraction of Urban Building Damage Using Spectral, Height and Corner Information
Authors: X. Wang
Abstract:
Timely and accurate information on urban building damage caused by earthquakes is an important basis for disaster assessment and emergency relief. Very high resolution (VHR) remotely sensed imagery, containing abundant fine-scale information, offers a large quantity of data for detecting and assessing urban building damage in the aftermath of earthquake disasters. However, the accuracy obtained using spectral features alone is comparatively low, since building damage, intact buildings, and pavements are spectrally similar. Therefore, it is of great significance to detect urban building damage effectively using multi-source data. Considering that the height or geometric structure of buildings generally changes dramatically in devastated areas, a novel multi-stage urban building damage detection method using bi-temporal spectral, height, and corner information was proposed in this study. The pre-event height information was generated using stereo VHR images acquired from two different satellites, while the post-event height information was produced from airborne LiDAR data. The corner information was extracted from pre- and post-event panchromatic images. The proposed method can be summarized as follows. To reduce the classification errors caused by spectral similarity and by errors in extracting height information, the ground surface, shadows, and vegetation were first extracted using the post-event VHR image and height data, and were masked out. Two different types of building damage were then extracted from the remaining areas: the height difference between pre- and post-event data was used for detecting building damage showing significant height change, while the difference in corner density between pre- and post-event images was used for extracting building damage showing drastic change in geometric structure. An initial building damage result was generated by combining the above two results, and a post-processing procedure was finally adopted to refine it. The proposed method was quantitatively evaluated and compared to two existing methods in Port-au-Prince, Haiti, which was heavily hit by an earthquake in January 2010, using a pre-event GeoEye-1 image, a pre-event WorldView-2 image, a post-event QuickBird image, and post-event LiDAR data. The results showed that the method proposed in this study significantly outperformed the two comparative methods in terms of urban building damage extraction accuracy. The proposed method provides a fast and reliable way to detect urban building collapse and is also applicable to related applications.Keywords: building damage, corner, earthquake, height, very high resolution (VHR)
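A hedged sketch of the two damage cues described above: a height-difference test on pre/post digital surface models and a grid-based corner density difference on pre/post panchromatic images. The thresholds, cell size, and choice of the Harris corner detector are assumptions; the paper does not state its exact parameters.

```python
# Damage cue 1: significant height loss; damage cue 2: per-cell corner density drop.
import numpy as np
import cv2

def corner_density(pan_img, cell=32):
    """Count Harris corners per grid cell of a panchromatic image."""
    resp = cv2.cornerHarris(pan_img.astype(np.float32), blockSize=2, ksize=3, k=0.04)
    corners = (resp > 0.01 * resp.max()).astype(np.float32)
    h, w = corners.shape
    return corners[: h // cell * cell, : w // cell * cell] \
        .reshape(h // cell, cell, w // cell, cell).sum(axis=(1, 3))

rng = np.random.default_rng(6)
pre_dsm = rng.uniform(0, 20, size=(256, 256))             # pre-event heights (m), synthetic
post_dsm = pre_dsm - rng.uniform(0, 8, size=(256, 256))   # post-event heights, synthetic
pre_pan = rng.integers(0, 256, (256, 256), dtype=np.uint8)
post_pan = rng.integers(0, 256, (256, 256), dtype=np.uint8)

height_damage = (pre_dsm - post_dsm) > 3.0  # assumed 3 m threshold for height change
corner_damage = (corner_density(pre_pan) - corner_density(post_pan)) > 5  # assumed drop

print("height-damage pixels:", int(height_damage.sum()))
print("corner-damage cells:", int(corner_damage.sum()))
# The initial damage map would combine the two cues, then be refined by post-processing.
```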
Procedia PDF Downloads 213