Search results for: text processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4798


4048 A Generative Pretrained Transformer-Based Question-Answer Chatbot and Phantom-Less Quantitative Computed Tomography Bone Mineral Density Measurement System for Osteoporosis

Authors: Mian Huang, Chi Ma, Junyu Lin, William Lu

Abstract:

Introduction: Bone health has attracted increasing attention in recent years, and an intelligent question-and-answer (QA) chatbot for osteoporosis is helpful for science popularization. With the development of Generative Pretrained Transformer (GPT) technology, we build an osteoporosis corpus dataset and then fine-tune LLaMA, a well-known open-source GPT foundation large language model (LLM), on our self-constructed osteoporosis corpus. Evaluated by clinical orthopedic experts, our fine-tuned model outperforms vanilla LLaMA on the osteoporosis QA task in Chinese. Three-dimensional quantitative computed tomography (QCT)-measured bone mineral density (BMD) has come to be considered more accurate than DXA for BMD measurement in recent years. We develop an automatic phantom-less QCT (PL-QCT) system that is more efficient for BMD measurement since it needs no external phantom for calibration. Combined with the osteoporosis LLM, our PL-QCT provides efficient and accurate BMD measurement for our chatbot users. Material and Methods: We build an osteoporosis corpus containing about 30,000 Chinese publications whose titles are related to osteoporosis. The whole process is done automatically, including crawling publications in PDF format, localizing text/figure/table regions with a layout segmentation algorithm, and recognizing text with an OCR algorithm. We train our model by continuous pre-training with Low-Rank Adaptation (LoRA, rank = 10) to adapt the LLaMA-7B model to the osteoporosis domain; the basic principle is to mask the next word in the text and make the model predict that word. The loss function is defined as the cross-entropy between the predicted and ground-truth words. The experiment is run on a single NVIDIA A800 GPU for 15 days. Our automatic PL-QCT BMD measurement adopts an AI-assisted region-of-interest (ROI) generation algorithm for localizing a vertebra-parallel cylinder in cancellous bone. Since no phantom is available for BMD calibration, we calculate ROI BMD from the CT-BMD of the patient's own muscle and fat.
Results & Discussion: Clinical orthopaedic experts were invited to design 5 osteoporosis questions in Chinese to evaluate the performance of vanilla LLaMA and our fine-tuned model. Our model outperforms LLaMA on over 80% of these questions, understanding 'Expert Consensus on Osteoporosis', 'QCT for osteoporosis diagnosis', and 'Effect of age on osteoporosis'. Detailed results are shown in the appendix. Future work may include training a larger LLM on the whole of orthopaedics with more high-quality domain data, or a multi-modal GPT that combines and understands X-ray images and medical text for orthopaedic computer-aided diagnosis. However, the GPT model sometimes gives unexpected outputs, such as repetitive text or seemingly normal but wrong answers (called 'hallucination'). Even when GPT gives correct answers, they cannot be considered valid clinical diagnoses in place of those of clinical doctors. The PL-QCT BMD system provided by Bone's QCT (Bone's Technology (Shenzhen) Limited) achieves a mean absolute error (MAE) of 0.1448 mg/cm2 (spine) and 0.0002 mg/cm2 (hip) and linear correlation coefficients R2 = 0.9970 (spine) and R2 = 0.9991 (hip) (compared to QCT-Pro (Mindways)) on 155 patients in a three-center clinical trial in Guangzhou, China. Conclusion: This study builds a Chinese osteoporosis corpus and develops a fine-tuned, domain-adapted LLM as well as a PL-QCT BMD measurement system. Our fine-tuned GPT model shows better capability than the LLaMA model on most testing questions on osteoporosis. Combined with our PL-QCT BMD system, we look forward to providing science popularization and early screening for potential osteoporosis patients.
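
The continuous pre-training objective described in the abstract (mask the next word and score the model's prediction with cross-entropy) can be sketched minimally; the toy vocabulary and probability values below are illustrative, not taken from the paper.

```python
import math

def cross_entropy_next_word(predicted_probs, target_index):
    # Cross-entropy loss for one next-word prediction:
    # the negative log of the probability assigned to the true word.
    return -math.log(predicted_probs[target_index])

# Toy vocabulary and a hypothetical softmax output for the next word
vocab = ["bone", "density", "fracture", "calcium"]
probs = [0.1, 0.7, 0.1, 0.1]  # must sum to 1

loss = cross_entropy_next_word(probs, vocab.index("density"))
```

A model that puts more probability on the ground-truth word gets a lower loss, which is what the continuous pre-training step minimizes over the osteoporosis corpus.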

Keywords: GPT, phantom-less QCT, large language model, osteoporosis

Procedia PDF Downloads 70
4047 Food Waste and Sustainable Management

Authors: Farhana Nosheen, Moeez Ahmad

Abstract:

Throughout the food chain, food waste from initial agricultural production to final household consumption has become a serious concern for global sustainability because of its adverse impacts on food security, natural resources, the environment, and human health. About a third of the tomatoes (Lycopersicon esculentum L.) delivered to processing plants end up as processing waste, and the amount of such waste material is estimated to have increased with the emergence of mechanical harvesting. Experiments were conducted to determine the nutritional profile and antioxidant activity of tomato processing waste and to explore its main bioactive compound, lycopene. The tomato variety 'SAHARA F1' was used to prepare the tomato waste. The tomatoes were properly cleaned, and unwanted impurities were removed. The tomatoes were blanched at 90 °C for 5 minutes, after which the skin was removed and the remaining part was passed through an electric pulper. The pulp and seeds were collected separately, and the seeds and skin were then mixed and stored in a sterilized jar. The samples of tomato waste were found to contain 89.11±0.006 g/100g moisture, 10.13±0.115 g/100g protein, 2.066±0.57 g/100g fat, 4.81±0.10 g/100g crude fiber, 4.06±0.057 g/100g ash, and 78.92±0.066 g/100g NFE. The results confirmed that tomato waste contains a considerable amount of lycopene (51.0667±0.00577 mg/100g) and exhibits good antioxidant properties. Total phenolics averaged 122.9600±0.01000 mg GAE/100g, of which flavonoids accounted for 41.5367±0.00577 mg QE/100g. The antioxidant activity of tomato processing waste was found to be 0.6833±0.00577 mmol Trolox/100g. Unsaturated fatty acids represent the major portion of total fatty acids, linoleic acid being the major one. The mineral analysis of tomato waste showed good amounts of potassium (3030.1767 mg/100g) and calcium (131.80 mg/100g).
These findings suggest that tomato processing waste is rich in nutrients, antioxidants, fatty acids, and minerals. We recommend that this waste be sun-dried for use in animal feed formulations. It can also be used to make other products, such as lycopene tea or several other health-beneficial products.

Keywords: food waste, tomato, bioactive compound, sustainable management

Procedia PDF Downloads 107
4046 Low-Cost Image Processing System for Evaluating Pavement Surface Distress

Authors: Keerti Kembhavi, M. R. Archana, V. Anjaneyappa

Abstract:

Most asphalt pavement condition evaluations use rating frameworks in which pavement distress is estimated by type, extent, and severity. Rating is carried out via the pavement condition rating (PCR), which is tedious and expensive. This paper presents the development of a low-cost technique for pavement distress image analysis that permits the identification of potholes and cracks. The paper explores the application of image processing tools for the detection of potholes and cracks. Longitudinal cracking and potholes are detected using Fuzzy C-Means (FCM) clustering followed by a spectral clustering algorithm. The framework comprises three phases: image acquisition, processing, and feature extraction. A digital camera (GoPro) with a holder is used to capture pavement distress images from a moving vehicle. The FCM classifier and spectral clustering algorithms are used to compute features and classify the longitudinal cracking and potholes. The MATLAB R2016a Image Processing Toolbox is used for performance analysis to identify pavement distress on selected urban stretches of Bengaluru city, India. The image evaluation with the semi-automated image processing framework identified the features of longitudinal cracks and potholes with an accuracy of about 80%. Further, the detected images are validated against the actual dimensions, and the dimension variability is about 0.46. The linear regression model y = 1.171x - 0.155 is obtained from the existing and experimental (image processing) areas. The R2 value obtained from the best-fit line is 0.807, which is considered in the linear regression model to be a 'large positive linear association'.
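
As a rough illustration of the Fuzzy C-Means step described above, the membership update below assigns each pixel intensity a degree of belonging to each cluster center; the intensities and centers are made-up values, not the paper's data.

```python
def fcm_memberships(points, centers, m=2.0):
    # One membership-update step of Fuzzy C-Means: each point gets a
    # degree of belonging to every cluster center (each row sums to 1).
    memberships = []
    for x in points:
        dists = [abs(x - c) + 1e-12 for c in centers]  # avoid divide-by-zero
        row = []
        for d_i in dists:
            denom = sum((d_i / d_j) ** (2.0 / (m - 1.0)) for d_j in dists)
            row.append(1.0 / denom)
        memberships.append(row)
    return memberships

# Toy grayscale intensities: dark pixels (crack candidates) vs. bright pavement
pixels = [10, 12, 200, 210]
centers = [11.0, 205.0]
U = fcm_memberships(pixels, centers)
```

Dark pixels receive near-unity membership in the dark cluster, which is the property the classifier exploits to separate crack pixels from pavement background.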

Keywords: crack detection, pothole detection, spectral clustering, fuzzy-c-means

Procedia PDF Downloads 181
4045 The Role of Structured Input in PI in the Acquisition of English Relative Clauses by L1 Saudi Arabic Speakers

Authors: Faraj Alhamami

Abstract:

Research on the effects of classroom input through structured input activities has addressed two main lines of inquiry: (1) measuring the effects of structured input activities as a possible causative factor of processing instruction (PI) and (2) comparing structured input practice against other types of instruction or no-training controls. Within this line of research, the main purpose of this classroom-based study was to establish which type of activity is most effective in PI: the explicit information component with referential activities only, the explicit information component with affective activities only, or a combination of the two. The instruments were a) a grammaticality judgment task, b) a picture-cued task, and c) a translation task, administered as pre-tests, post-tests, and delayed post-tests seven weeks after the intervention. While testing is ongoing, preliminary results show that at pre-test all five groups (the processing instruction group with both activity types (RA), the traditional instruction group (TI), the referential group (R), the affective group (A), and the control group) performed at a comparable chance or baseline level across the three outcome measures. However, at the post-test stage, the RA, TI, R, and A groups demonstrated significant improvement compared to the control group on all tasks. Furthermore, significant differences were observed between the PI groups (RA, R, and A) and the traditional group at post-test and delayed post-test on some of the tasks. The findings therefore suggest that the sole application and/or the combination of structured input activities succeeded in helping Saudi learners of English make initial form-meaning connections and acquire RRCs in both the short and the long term.

Keywords: input processing, processing instruction, MOGUL, structured input activities

Procedia PDF Downloads 77
4044 Comparing the Durability of Saudi Silica Sands for Use in Foundry Processing

Authors: Mahdi Alsagour, Sam Ramrattan

Abstract:

This paper investigates two types of sands from the Kingdom of Saudi Arabia (KSA) for potential use in the global metal casting industry. Four sand systems were selected for study; two of them are natural sands from the KSA. The third sand sample is a heat-processed synthetic sand, and the last is a commercially available US silica sand that serves as a control in the study. The purpose of this study is to define the durability of the four sand systems selected for foundry usage. Additionally, chemical analyses of the sand systems are presented before and after elevated-temperature exposure. Results show that the Saudi silica sands are durable and can be used in foundry processing.

Keywords: alternative molding media, foundry sand, reclamation, silica sand, specialty sand

Procedia PDF Downloads 135
4043 Prompt Design for Code Generation in Data Analysis Using Large Language Models

Authors: Lu Song Ma Li Zhi

Abstract:

With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
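
One minimal sketch of such a prompt design, under the assumption that the prompt states the data schema, the analysis request in natural language, and the output constraints; the template and field names are illustrative, not the authors' actual design.

```python
def build_analysis_prompt(task, columns, constraints):
    # Assemble a structured prompt for an LLM: role framing, data schema,
    # the natural-language analysis request, and output constraints.
    return "\n".join([
        "You are a data analysis assistant. Generate executable Python code only.",
        f"Data columns: {', '.join(columns)}",
        f"Task: {task}",
        "Constraints: " + "; ".join(constraints),
        "Return only a single code block, with no explanations.",
    ])

prompt = build_analysis_prompt(
    task="compute the monthly average of 'sales' grouped by 'region'",
    columns=["date", "region", "sales"],
    constraints=["use pandas", "handle missing values"],
)
```

The fixed structure is what makes the strategy reusable: only the task description changes per request, while the schema and constraint sections keep the generated code within the expected form, supporting the immediate feedback-and-adjust loop described above.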

Keywords: large language models, prompt design, data analysis, code generation

Procedia PDF Downloads 37
4042 Improved Qualitative Modeling of the Magnetization Curve B(H) of Ferromagnetic Materials for a Transformer Used in the Power Supply of a Magnetron

Authors: M. Bassoui, M. Ferfra, M. Chrayagne

Abstract:

This paper presents a qualitative model of the nonlinear B-H curve of saturable magnetic materials for a transformer with magnetic shunts used in the power supply of a magnetron. This power supply consists of a single-phase leakage-flux transformer feeding a cell composed of a capacitor and a diode, which doubles the voltage and stabilizes the current, with a single magnetron at the output of the cell. A procedure consisting of a fuzzy clustering method and a rule-processing algorithm is then employed to process the constructed fuzzy modeling rules and extract the qualitative properties of the curve.

Keywords: B(H) curve, fuzzy clustering, magnetron, power supply

Procedia PDF Downloads 235
4041 Developing an Exhaustive and Objective Definition of Social Enterprise through Computer Aided Text Analysis

Authors: Deepika Verma, Runa Sarkar

Abstract:

One of the prominent debates in the social entrepreneurship literature has been whether entrepreneurial work for social well-being by for-profit organizations can be classified as social entrepreneurship. Of late, scholarship has reached a consensus that there is little sense in confining social entrepreneurship to non-profit organizations. Encouraged by this research, a growing number of businesses engaged in filling social infrastructure gaps in developing countries call themselves social enterprises. These organizations are diverse in their ownership, size, objectives, operations, and business models. The lack of a comprehensive definition of social enterprise leads to three issues. Firstly, researchers may face difficulty in creating a database of social enterprises because the choice of an entity as a social enterprise becomes subjective, or rests on pre-defined parameters chosen by the researcher that are not replicable. Secondly, practitioners who use 'social enterprise' in their vision or mission statement(s) may find it difficult to adjust their business models accordingly, especially when they face the dilemma of choosing social well-being over business viability. Thirdly, social enterprise and social entrepreneurship attract a lot of donor funding and venture capital. In the absence of a comprehensive definitional guide, donors and investors may find it difficult to assign grants and investments. It therefore becomes necessary to develop an exhaustive and objective definition of social enterprise and to examine whether the understandings of academics and practitioners about social enterprise match. This paper develops a dictionary of words often associated with social enterprise and/or social entrepreneurship.
It further compares two lexicographic definitions of social enterprise imputed from the abstracts of academic journal papers and trade publications extracted from the EBSCO database using the ‘tm’ package in R software.
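
The dictionary-building step can be approximated by a term-frequency profile over a corpus of abstracts; the paper itself uses the 'tm' package in R, so the Python sketch below, with made-up snippets, is only an analogous illustration.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "that", "for"}

def top_terms(documents, n=5):
    # Build a term-frequency profile for a corpus and return the n most
    # frequent content words: a rough stand-in for imputing a
    # lexicographic definition from abstracts.
    counts = Counter()
    for doc in documents:
        for token in re.findall(r"[a-z]+", doc.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return [term for term, _ in counts.most_common(n)]

academic = ["social enterprise blends social mission and revenue",
            "a social enterprise pursues a social mission"]
terms = top_terms(academic, n=3)
```

Running the same profiling separately over academic-journal abstracts and trade publications, as the paper does with the EBSCO data, yields the two lexicographic definitions being compared.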

Keywords: EBSCO database, lexicographic definition, social enterprise, text mining

Procedia PDF Downloads 393
4040 Evaluation of Simple, Effective and Affordable Processing Methods to Reduce Phytates in the Legume Seeds Used for Feed Formulations

Authors: N. A. Masevhe, M. Nemukula, S. S. Gololo, K. G. Kgosana

Abstract:

Background and Study Significance: Legume seeds are important in agriculture as they are used for feed formulations due to their nutrient density, low cost, and easy accessibility. Although they are important sources of energy, proteins, carbohydrates, vitamins, and minerals, they contain abundant quantities of anti-nutritive factors that reduce the bioavailability of nutrients, the digestibility of proteins, and mineral absorption in livestock. However, the removal of these factors is costly, as it requires expensive state-of-the-art techniques such as high-pressure and thermal processing. Basic Methodologies: The aim of the study was to investigate cost-effective methods that can be used to reduce the inherent phytates, as putative antinutrients, in legume seeds. The seeds of Arachis hypogaea, Pisum sativum, and Vigna radiata L. were subjected to the single processing methods, viz. raw seeds plus dehulling (R+D), soaking plus dehulling (S+D), ordinary cooking plus dehulling (C+D), infusion plus dehulling (I+D), autoclaving plus dehulling (A+D), and microwaving plus dehulling (M+D), and to five combined methods (S+I+D; S+A+D; I+M+D; S+C+D; S+M+D). All the processed seeds were dried, ground into powder, extracted, and analyzed on a microplate reader to determine the percentage of phytates per dry mass of the legume seeds. Phytic acid was used as a positive control, and one-way ANOVA was used to determine significant differences between the means of the processing methods at a threshold of 0.05. Major Findings: The results of the processing methods showed percentage yield ranges of 39.1-96%, 67.4-88.8%, and 70.2-93.8% for V. radiata, A. hypogaea, and P. sativum, respectively. Though the raw seeds contained the highest contents of phytates, ranging between 0.508 and 0.527% as expected, the R+D method resulted in a slightly lower phytate percentage range of 0.469-0.485%, while the other processing methods resulted in phytate contents below 0.35%.
The M+D and S+M+D methods showed low phytate percentage ranges of 0.276-0.296% and 0.272-0.294%, respectively, with the lowest percentage determined for S+M+D of P. sativum. Furthermore, these results were found to be significantly different (p<0.05). Though phytates cause micronutrient deficits, as they chelate important minerals such as calcium, zinc, iron, and magnesium, their reduction may enhance nutrient bioavailability since they cannot be digested by the ruminants. Concluding Statement: While the nutritive evaluation of the processed legume seeds is still in progress, the M+D and S+M+D methods, which significantly reduced the phytates in the investigated legume seeds, may be recommended to local farmers and feed-producing industries to enhance animal health and production at an affordable cost.
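
The one-way ANOVA comparison of processing methods mentioned above reduces to an F-statistic (between-group variance over within-group variance); the three phytate-percentage groups below are illustrative values, not the study's measurements.

```python
def one_way_anova_F(groups):
    # F statistic for one-way ANOVA: mean square between groups
    # divided by mean square within groups.
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical phytate percentages: raw seeds, S+M+D, and R+D
F = one_way_anova_F([[0.51, 0.52, 0.53],
                     [0.28, 0.29, 0.27],
                     [0.47, 0.48, 0.49]])
```

A large F relative to the critical value at the 0.05 threshold is what lets the study declare the processing methods significantly different.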

Keywords: anti-nutritive factors, extraction, legume seeds, phytate

Procedia PDF Downloads 26
4039 A Self Beheld in the Eyes of the Other: Reflections on Montesquieu's Persian Letters

Authors: Seyed Majid Alavi Shooshtari

Abstract:

A multi-layered prose piece of artistry and craftsmanship, Charles de Secondat, baron de Montesquieu's Persian Letters (1721) is a satirical work that records the experiences of two Persian noblemen, Usbek and Rica, traveling through France in the early eighteenth century. Montesquieu conceived Persian Letters as a critique of French society, a critical explanation of what was considered 'the Orient' in the period, and an invaluable historical document illustrating the ways Europe and the East understood each other in the first half of the eighteenth century. However, Persian Letters is considered today, in part, an Orientalist text because it presents the culture of the East through stereotypical images. Although, when Montesquieu published Persian Letters, the term Orientalist was a harmless word for people who studied or took an interest in the Orient, the way in which this Western intellectual exerts his critique of French social and political life through the eyes of Persian protagonists, placing the example of the Orient (the Other) at the service of an ongoing eighteenth-century discourse, does raise some Eastern eyebrows. Although the Persian side of the novel is considered by some critics a fanciful decor, and the letters sent home are seen as literary props, these Eastern men intelligently question the rationality of religious, state, military, and cultural practices and uncover much of the absurdity, irrationality, or frivolity of European life. Drawing on the insight that Montesquieu's text problematizes the assumption that Orientalism monolithically constructs the Orient as the Other, the present paper examines how the innocent gaze of two Eastern travelers mirrors the ways Europe's identity defines its Self.

Keywords: montesquieu, persian letters, 'the orient', identity politics, self, the other

Procedia PDF Downloads 108
4038 Interaction between Cognitive Control and Language Processing in Non-Fluent Aphasia

Authors: Izabella Szollosi, Klara Marton

Abstract:

Aphasia can be defined as a weakness in accessing linguistic information. Accessing linguistic information is strongly related to information processing, which in turn is associated with the cognitive control system. According to the literature, a deficit in the cognitive control system interferes with language processing and contributes to non-fluent speech performance. The aim of our study was to explore this hypothesis by investigating how cognitive control interacts with language performance in participants with non-fluent aphasia. Cognitive control is a complex construct that includes working memory (WM) and the ability to resist proactive interference (PI). Based on previous research, we hypothesized that impairments in domain-general (DG) cognitive control abilities have negative effects on language processing. In contrast, better DG cognitive control functioning supports goal-directed behavior in language-related processes as well. Since stroke itself might slow down information processing, it is important to examine its negative effects on both cognitive control and language processing. Participants (N=52) in our study were individuals with non-fluent Broca’s aphasia (N = 13), with transcortical motor aphasia (N=13), individuals with stroke damage without aphasia (N=13), and unimpaired speakers (N = 13). All participants performed various computer-based tasks targeting cognitive control functions such as WM and resistance to PI in both linguistic and non-linguistic domains. Non-linguistic tasks targeted primarily DG functions, while linguistic tasks targeted more domain specific (DS) processes. The results showed that participants with Broca’s aphasia differed from the other three groups in the non-linguistic tasks. They performed significantly worse even in the baseline conditions. In contrast, we found a different performance profile in the linguistic domain, where the control group differed from all three stroke-related groups. 
The three impaired groups performed more poorly than the controls but similarly to each other in the verbal baseline condition. In the more complex verbal PI condition, however, participants with Broca's aphasia performed significantly worse than all the other groups. Participants with Broca's aphasia demonstrated the most severe language impairment and the highest vulnerability in tasks measuring DG cognitive control functions. The results support the notion that the more severe the cognitive control impairment, the more severe the aphasia. Thus, our findings suggest a strong interaction between cognitive control and language. Individuals with the most severe and most general cognitive control deficit (participants with Broca's aphasia) showed the most severe language impairment. Individuals with better DG cognitive control functions demonstrated better language performance. While all participants with stroke damage showed impaired cognitive control functions in the linguistic domain, participants with better language skills also performed better in tasks that measured non-linguistic cognitive control functions. The overall results indicate that the level of cognitive control deficit interacts with language functions in individuals along the language spectrum (from severe to no impairment). However, future research is needed to determine any directionality.

Keywords: cognitive control, information processing, language performance, non-fluent aphasia

Procedia PDF Downloads 120
4037 Differences in Assessing Hand-Written and Typed Student Exams: A Corpus-Linguistic Study

Authors: Jutta Ransmayr

Abstract:

The digital age has long since arrived at Austrian schools, and both society and educationalists demand that digital means be integrated accordingly into day-to-day school routines. Therefore, the Austrian school-leaving exam (A-levels) can now be written either by hand or on a computer. However, the choice of writing medium (pen and paper or computer) for written examination papers, which are considered 'high-stakes' exams, raises a number of questions that have not yet been adequately investigated, such as: What effects do the different conditions of text production in the written German A-levels have on normative linguistic accuracy? How do the spelling skills in German A-level papers written with a pen differ from those written on the computer? How is the teachers' assessment related to this? And which practical desiderata for German didactics can be derived from this? In a trilateral pilot project of the Austrian Center for Digital Humanities (ACDH) of the Austrian Academy of Sciences and the University of Vienna, in cooperation with the Austrian Ministry of Education and the Council for German Orthography, these questions were investigated. A representative Austrian learner corpus, consisting of around 530 German A-level papers from all over Austria (pen- and computer-written), was compiled and subjected to a quantitative (corpus-linguistic and statistical) and qualitative investigation of the spelling and punctuation performance of the school leavers and of the differences between pen- and computer-written papers and their assessments. Relevant studies are currently available mainly from the Anglophone world. These have shown that writing on the computer increases the motivation to write and has positive effects on text length and, in some cases, on text quality.
Depending on the writing situation and other technical aids, better results in terms of spelling and punctuation were also found in the computer-written texts compared to the handwritten ones. Studies also point towards a tendency among teachers to rate handwritten texts better than computer-written ones. In this paper, the first comparable results from the German-speaking area are presented. The research has shown, on the one hand, that there are significant differences between handwritten and computer-written papers with regard to performance in orthography and punctuation. On the other hand, the corpus-linguistic investigation and the subsequent statistical analysis made clear that not only do the teachers' assessments of the students' spelling performance vary enormously, but so do the overall assessments of the exam papers; the production medium (pen and paper or computer) also seems to play a decisive role.

Keywords: exam paper assessment, pen and paper or computer, learner corpora, linguistics

Procedia PDF Downloads 168
4036 Hydrometallurgical Production of Nickel from Ores of the Bugetkol Field

Authors: A. T. Zhakiyenova, E. E. Zhatkanbaev, Zh. K. Zhatkanbaeva

Abstract:

Nickel plays an important role in mechanical engineering and in the creation of military equipment; practically all steels are alloyed with nickel and other metals to obtain more durable, heat-resistant, corrosion-resistant steel and cast iron. There are many ways of processing nickel in the world, generally igneous metallurgy methods. In this article, a review of the majority of existing processing technologies for silicate nickel-cobalt ores is presented. Leaching of ore from the Bugetkol field with sulfuric acid solution is investigated. We determined the specific consumption of sulfuric acid relative to the mass of ore and to the mass of metal.
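
The reported metric, the specific consumption of sulfuric acid relative to the masses of ore and of metal, is a simple ratio; the sketch below uses hypothetical masses for illustration, not the study's measured values.

```python
def specific_acid_consumption(acid_mass_kg, ore_mass_kg, metal_mass_kg):
    # Specific consumption of sulfuric acid per unit mass of ore
    # and per unit mass of extracted metal.
    return acid_mass_kg / ore_mass_kg, acid_mass_kg / metal_mass_kg

# Hypothetical leaching run: 50 kg acid, 1000 kg ore, 10 kg nickel recovered
per_ore, per_metal = specific_acid_consumption(50.0, 1000.0, 10.0)
```

Expressing consumption both ways is useful because a low per-ore figure can still hide a high per-metal figure when the extraction degree is poor.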

Keywords: cobalt, degree of extraction, hydrometallurgy, igneous metallurgy, leaching, matte, nickel

Procedia PDF Downloads 382
4035 Allostatic Load as a Predictor of Adolescents’ Executive Function: A Longitudinal Network Analysis

Authors: Sipu Guo, Silin Huang

Abstract:

Background: Most studies investigate the link between executive function and allostatic load (AL) among adults aged 18 years and older. These studies differ in the specific biological indicators studied and the executive functions accounted for, and specific executive functions may be differentially related to allostatic load. We investigated the comorbidities of executive functions and allostatic load via network analysis. Methods: We included 603 adolescents (49.84% girls; mean age = 12.38 years, SD = 1.79) from junior high schools in rural China. Eight biological markers at T1 and four executive function tasks at T2 were used to estimate the networks. Network analysis was used to determine the network structure, core symptoms, and bridge symptoms in the AL-executive function network among rural adolescents. Results: The executive functions were related to six AL biological markers, but not to cortisol or epinephrine. The most influential nodes were inhibitory control, cognitive flexibility, processing speed, and systolic blood pressure (SBP). SBP, dehydroepiandrosterone, and processing speed were the bridges through which AL was related to executive functions. Dehydroepiandrosterone strongly predicted processing speed, and SBP was the biggest influencer in the entire network. Conclusions: We found evidence for differential relations between markers and executive functions. SBP was a driver in the network; dehydroepiandrosterone showed strong relations with executive function.

Keywords: allostatic load, executive function, network analysis, rural adolescent

Procedia PDF Downloads 50
4034 Robust Barcode Detection with Synthetic-to-Real Data Augmentation

Authors: Xiaoyan Dai, Hsieh Yisan

Abstract:

Barcode processing of captured images is a huge challenge, as different shooting conditions can result in different barcode appearances. This paper proposes a deep learning-based barcode detection using synthetic-to-real data augmentation. We first augment barcodes themselves; we then augment images containing the barcodes to generate a large variety of data that is close to the actual shooting environments. Comparisons with previous works and evaluations with our original data show that this approach achieves state-of-the-art performance in various real images. In addition, the system uses hybrid resolution for barcode “scan” and is applicable to real-time applications.
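
A minimal sketch of one synthetic-to-real augmentation step: randomly jittering the brightness and contrast of a grayscale image to mimic varied shooting conditions. The parameter ranges and the toy image are assumptions for illustration, not the authors' pipeline.

```python
import random

def augment_brightness_contrast(image, rng):
    # Apply a random contrast scale and brightness offset to a grayscale
    # image (list of pixel rows), clamping results to the 0-255 range.
    contrast = rng.uniform(0.7, 1.3)    # assumed jitter range
    brightness = rng.uniform(-30, 30)   # assumed jitter range
    return [[max(0, min(255, int(p * contrast + brightness))) for p in row]
            for row in image]

rng = random.Random(0)                  # seeded for reproducibility
barcode_strip = [[0, 255, 0, 255, 0]]   # toy one-row "barcode" pattern
augmented = augment_brightness_contrast(barcode_strip, rng)
```

Applying many such randomized steps to both the synthetic barcodes and the images containing them is what produces training data close to the actual shooting environments.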

Keywords: barcode detection, data augmentation, deep learning, image-based processing

Procedia PDF Downloads 167
4033 Increased Energy Efficiency and Improved Product Quality in Processing of Lithium Bearing Ores by Applying Fluidized-Bed Calcination Systems

Authors: Edgar Gasafi, Robert Pardemann, Linus Perander

Abstract:

For the production of lithium carbonate or hydroxide from lithium-bearing ores, a thermal activation (calcination/decrepitation) is required for the phase transition in the mineral, to enable acid or soda leaching, respectively, in the downstream hydrometallurgical section. In this paper, traditional processing in the lithium industry is reviewed, and opportunities to reduce energy consumption and improve product quality and recovery rate are discussed. The conventional process approach is still based on rotary kiln calcination, a technology in use since the early days of lithium ore processing, albeit not significantly developed further since. A newer technology, at least for the lithium industry, is fluidized bed calcination. Decrepitation of lithium ore was investigated at Outotec’s Frankfurt Research Centre. Focusing on fluidized bed technology, a study of the major process parameters (temperature and residence time) was performed at laboratory and larger bench scale, aiming for optimal product quality for subsequent processing. The technical feasibility was confirmed under optimal process conditions at pilot scale (400 kg/h feed input), providing the basis for industrial process design. Based on the experimental results, a comprehensive Aspen Plus flowsheet simulation was developed to quantify mass and energy flows for the rotary kiln and fluidized bed systems. Results show a significant reduction in energy consumption and improved process performance in terms of temperature profile, product quality and plant footprint. The major conclusion is that a substantial reduction of energy consumption can be achieved in processing lithium-bearing ores by using fluidized-bed-based systems. At the same time, and unlike the rotary kiln process, accurate temperature and residence time control is ensured in fluidized-bed systems, leading to a homogeneous temperature profile in the reactor, which prevents overheating and sintering of the solids and results in uniform product quality.

Keywords: calcination, decrepitation, fluidized bed, lithium, spodumene

Procedia PDF Downloads 228
4032 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis

Authors: Maher Ali Rusho, Sudipta Halder

Abstract:

The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The result of the parallel sorting code is the output of a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and comparison of these methods by characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work performed showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load analysis assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in models, which improved the architecture and implementation of parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.
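The study's programmes are written in Java; purely for illustration, the task-division method it describes (splitting work into subtasks processed by a pool of workers) can be sketched in an analogous way in Python with `concurrent.futures` — the data and chunk size below are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def square_chunk(chunk):
    """Subtask: square every number in one chunk of the data."""
    return [x * x for x in chunk]

data = list(range(1, 9))
# Divide the task into subtasks, one chunk per worker.
chunks = [data[i:i + 4] for i in range(0, len(data), 4)]
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(square_chunk, chunks))
squared = [y for part in results for y in part]
print(squared[:3])  # first elements of the list of squared numbers: [1, 4, 9]
```

A non-blocking or work-stealing executor, careful memory management, and load balancing across chunks would play the same roles as the optimisation methods compared in the study.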

Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming

Procedia PDF Downloads 10
4031 The Effect of Object Presentation on Action Memory in School-Aged Children

Authors: Farzaneh Badinlou, Reza Kormi-Nouri, Monika Knopf

Abstract:

Enacted tasks are typically remembered better than when the same task materials are only verbally encoded, a robust finding referred to as the enactment effect. It has been assumed that the enactment effect is independent of object presence, but that its size can be increased in adults by providing objects at the study phase. To clarify these issues in children, free recall and cued recall performance for action phrases with or without real objects were compared in 410 school-aged children from four age groups (8, 10, 12 and 14 years old). In this study, subjects were instructed to learn a series of action phrases under three encoding conditions: participants listened to verbal action phrases (VTs), performed the phrases (SPTs: subject-performed tasks), or observed the experimenter perform the phrases (EPTs: experimenter-performed tasks). Then, free recall and cued recall memory tests were administered. The results revealed that real objects, compared with imaginary objects, improved recall performance in SPTs and EPTs, but more so in VTs. It was also found that object presence was not necessary for the occurrence of the enactment effect, but it changed the size of the enactment effect in all age groups. The size of the enactment effect was more pronounced for imaginary objects than for real objects in both free recall and cued recall memory tests in children. It is discussed that SPTs and EPTs differentially facilitate item-specific and relational information processing, and that providing objects can moderate the processing underlying the encoding conditions.

Keywords: action memory, enactment effect, item-specific processing, object, relational processing, school-aged children

Procedia PDF Downloads 238
4030 Another Beautiful Sounds: Building the Memory of Sound of Peddling in Beijing with Digital Technology

Authors: Dan Wang, Qing Ma, Xiaodan Wang, Tianjiao Qi

Abstract:

The sound of peddling in Beijing, also called “yo-heave-ho” or “cry of one's wares”, is a unique folk culture usually found in Beijing's hutongs. For the civilians of Beijing, the sound of peddling is part of their childhood, and for those who love the traditional culture of Beijing, it is an old song singing of the local conditions and customs of the ancient city. For example, out of great appreciation, the British poet Osbert Stewart once described the sound of peddling he had heard in Beijing as a street orchestra performance in the article "Beijing's sound and color". This research aims to collect and integrate the voice/photo resources and historical materials concerning the sound of peddling in Beijing using digital technology, in order to protect this intangible cultural heritage and pass on the city's memory. With this goal in mind, the next stage is to collect and record all the materials and resources based on historical document study and interviews with civilians and performers, and then to set up a metadata scheme (referring to domestic and international standards such as the "Audio Data Processing Standards in the National Library", DC, VRA, and CDWA) to describe, process and organize the sound of peddling into a database. To fully present the traditional culture of the sound of peddling in Beijing, web design and GIS technology are utilized to establish a website, and offline exhibitions and events are planned in which people can simulate and learn the sound of peddling using VR/AR technology. All resources are open to the public, and civilians can share the digital memory through both offline experiential activities and online interaction. With all these attempts, a multi-media narrative platform has been established to multi-dimensionally record the sound of peddling in old Beijing with text, images, audio, video and so on.
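The article does not reproduce the metadata scheme itself; a minimal sketch of what a Dublin Core (DC)-style record for one peddling-sound recording could look like — all field values below are invented for illustration, and the required-element set is an assumption — is:

```python
# A hypothetical Dublin-Core-style record for one recording; field names follow
# DC element names, values are illustrative only.
record = {
    "title": "Sugar-coated haws peddling cry",
    "creator": "anonymous street vendor",
    "date": "2023-05-01",
    "format": "audio/wav",
    "coverage": "Beijing hutong",           # GIS coordinates could be stored here
    "subject": ["sound of peddling", "folk culture"],
}
# A scheme typically marks some elements as required for every record.
required = {"title", "creator", "date", "format"}
assert required.issubset(record), "record is missing required DC elements"
print(sorted(required))
```

Records of this shape can then be indexed in a database and linked to the GIS layer of the website.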

Keywords: sound of peddling, GIS, metadata scheme, VR/AR technology

Procedia PDF Downloads 304
4029 A Comparative Study on the Use of Learning Resources in Learning Biochemistry by MBBS Students at Ras Al Khaimah Medical and Health Sciences University, UAE

Authors: B. K. Manjunatha Goud, Aruna Chanu Oinam

Abstract:

The undergraduate medical curriculum is oriented towards training students to undertake the responsibilities of a physician. During the training period, adequate emphasis is placed on inculcating logical and scientific habits of thought; clarity of expression and independence of judgment; and the ability to collect and analyze information and to correlate it. At Ras Al Khaimah Medical and Health Sciences University (RAKMHSU), Biochemistry, a basic medical science subject, is taught in the 1st year of the 5-year medical course with vertical interdisciplinary interaction with all subjects. It needs to be taught and learned adequately so that students can relate it to clinical cases, clinical problems in medicine and future diagnostics, and can practice confidently and skillfully in the community. Based on these facts, a study was done to determine the extent of students' use of library resources and the impact of study materials on their preparation for examinations. It was a comparative cross-sectional study that included 100 first-year and 80 second-year students who had successfully completed the Biochemistry course. The purpose of the study was explained to all students (participants). Information was collected with a pre-designed, pre-tested and self-administered questionnaire. The questionnaire was validated by senior faculty and pre-tested on students who were not involved in the study. The study results showed that 80.30% and 93.15% of 1st and 2nd year students, respectively, had a clear idea of the course outline given in the course handout or study guide. We also found that a statistically significant number of students agreed that they benefited from the practical sessions and from writing notes during class hours. A high percentage of students (50% and 62.02%) disagreed that reading only the handouts is enough for their examinations.
The study also showed that only 35% and 41% of students visited the library on a daily basis for learning, around 65% of students used lecture notes and textbooks as tools for learning and understanding the subject, and 45% and 53% of students used library resources (recommended textbooks) rather than online sources before examinations. The results presented here show that students perceived that e-learning resources such as PowerPoint presentations, along with textbook reading using the SQ4R technique, had a positive impact on various aspects of their learning in Biochemistry. Students' use of the library has an overall positive impact on the learning process; especially in the medical field it enhances outcomes, and medical students become better equipped to treat patients. It is also true, however, that library use has been in decline, which will affect knowledge and outcomes. In conclusion, students have to be taught how to use the library as a learning tool apart from lecture handouts.

Keywords: medical education, learning resources, study guide, biochemistry

Procedia PDF Downloads 177
4028 Affective Transparency in Compound Word Processing

Authors: Jordan Gallant

Abstract:

In the compound word processing literature, much attention has been paid to the relationship between a compound’s denotational meaning and that of its whole-word morphological constituents, which is referred to as ‘semantic transparency’. However, the parallel relationship between a compound’s connotation and that of its constituents has not been addressed at all. For instance, while a compound like ‘painkiller’ might be semantically transparent, it is not ‘affectively transparent’. That is, both constituents have primarily negative connotations, while the whole compound has a positive one. This paper investigates the role of affective transparency in compound processing using two methodologies commonly employed in this field: a lexical decision task and a typing task. The critical stimuli used were 112 English bi-constituent compounds that differed in terms of the affective transparency of their constituents. Of these, 36 stimuli contained constituents with similar connotations to the compound (e.g., ‘dreamland’), 36 contained constituents with more positive connotations (e.g., ‘bedpan’), and 36 contained constituents with more negative connotations (e.g., ‘painkiller’). The connotations of whole-word constituents and compounds were operationalized via valence ratings taken from an off-line ratings database. In Experiment 1, compound stimuli and matched non-word controls were presented visually to participants, who were then asked to indicate whether each was a real word in English. Response times and accuracy were recorded. In Experiment 2, participants typed compound stimuli presented to them visually. Individual keystroke response times and typing accuracy were recorded. The results of both experiments provided positive evidence that compound processing is influenced by affective transparency.
In Experiment 1, compounds in which both constituents had more negative connotations than the compound itself were responded to significantly more slowly than compounds in which the constituents had similar or more positive connotations. Typed responses from Experiment 2 showed that inter-keystroke intervals at the morphological constituent boundary were significantly longer when the connotation of the head constituent was either more positive or more negative than that of the compound. The interpretation of this finding is discussed in the context of previous compound typing research. Taken together, these findings suggest that affective transparency plays a role in the recognition, storage, and production of English compound words. This study provides a promising first step in a new direction for research on compound words.
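The paper does not give a formula for affective transparency; one plausible operationalization, consistent with the description above (the compound's valence versus that of its constituents), is the signed gap between the two — the rating scale and example values below are hypothetical, not taken from the ratings database:

```python
def affective_transparency(compound_valence, constituent_valences):
    """Signed gap between a compound's valence and its constituents' mean valence.
    Near zero = affectively transparent; positive = compound more positive than
    its parts ('painkiller'-like); negative = compound more negative ('bedpan'-like)."""
    mean_part = sum(constituent_valences) / len(constituent_valences)
    return compound_valence - mean_part

# Illustrative valences on a 1-9 scale (invented for this sketch).
print(round(affective_transparency(7.0, [3.0, 2.0]), 2))  # 'painkiller'-like: 4.5
print(round(affective_transparency(5.0, [5.0, 5.0]), 2))  # transparent: 0.0
```

Such a score could then be entered as a continuous predictor of lexical decision times or inter-keystroke intervals.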

Keywords: compound processing, semantic transparency, typed production, valence

Procedia PDF Downloads 125
4027 Reliability of Intra-Logistics Systems – Simulating Performance Availability

Authors: Steffen Schieweck, Johannes Dregger, Sascha Kaczmarek, Michael ten Hompel

Abstract:

Logistics distributors face the issue of having to provide increasing service levels while being forced to reduce costs at the same time. Same-day delivery, quick order processing and rapidly growing ranges of articles are only some of the prevailing challenges. One key aspect of the performance of an intra-logistics system is how often, and with which amplitude, congestions and dysfunctions affect the processing operations. By gaining knowledge of the so-called ‘performance availability’ of such a system during the planning stage, oversizing and waste can be reduced while planning transparency is increased. The state of the art for determining this KPI is simulation studies. However, their structure, and therefore their results, may vary unforeseeably. This article proposes a concept for the establishment of ‘certified’, and hence reliable and comparable, simulation models.
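As a toy illustration of the KPI (not the authors' certified simulation model), performance availability can be estimated from a minimal discrete-time congestion simulation — the arrival probability, service rate, and congestion threshold below are all invented:

```python
import random

def performance_availability(arrival_p, service_rate, steps, threshold, seed=1):
    """Toy discrete-time simulation: a buffer fills with random arrivals and is
    drained at a fixed rate; availability = share of time steps in which the
    backlog stays below a congestion threshold."""
    rng = random.Random(seed)
    backlog, ok_steps = 0, 0
    for _ in range(steps):
        backlog += 1 if rng.random() < arrival_p else 0
        backlog = max(0, backlog - service_rate)
        if backlog < threshold:
            ok_steps += 1
    return ok_steps / steps

pa = performance_availability(arrival_p=0.6, service_rate=1, steps=10_000, threshold=5)
print(0.0 <= pa <= 1.0)
```

A certified model would fix the structure of such a simulation (arrival processes, resources, congestion criteria) so that results from different studies become comparable.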

Keywords: intra-logistics, performance availability, simulation, warehousing

Procedia PDF Downloads 453
4026 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continued existence of human life, as humans directly depend on it for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize output. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest, by the use of computer vision and image processing to determine soil nutrient composition and the right amount, right time and right place for the application of farm inputs such as fertilizers, herbicides and water, as well as weed detection and early detection of pests and diseases. This is precision agriculture, which is thought to be the solution required to achieve these goals. There has been significant improvement in the areas of image processing and data processing, which has been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of vegetation images need to be extracted, classified, segmented and finally fed into the model. Different techniques have been applied to these processes, from neural networks, support vector machines and fuzzy logic approaches to, most recently, the most effective approach, generating excellent results: the deep learning approach of convolutional neural networks for image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results on the developed model yielded an average accuracy of 99.58%.
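As a hedged sketch of the core building block of the convolutional network the abstract refers to (not the authors' actual architecture), a single 'valid' 2-D convolution with ReLU over a toy image patch can be written in pure Python — the kernel and patch values are invented:

```python
def conv2d_valid(img, kernel):
    """Single-channel 'valid' 2-D convolution followed by ReLU, the basic
    feature-extraction step a CNN stacks many times before classification."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            s = sum(img[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(0.0, s))          # ReLU non-linearity
        out.append(row)
    return out

edge_kernel = [[-1.0, 1.0], [-1.0, 1.0]]     # responds to dark-to-bright transitions
patch = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
print(conv2d_valid(patch, edge_kernel))      # → [[0.0, 18.0, 0.0], [0.0, 18.0, 0.0]]
```

A full classifier would stack many such layers with learned kernels, add pooling and a softmax output, and be trained on the labelled remote-sensing image database.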

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 313
4025 Enhancement of Mechanical and Dissolution Properties of a Cast Magnesium Alloy via Equal Angular Channel Processing

Authors: Tim Dunne, Jiaxiang Ren, Lei Zhao, Peng Cheng, Yi Song, Yu Liu, Wenhan Yue, Xiongwen Yang

Abstract:

Two decades of the Shale Revolution have transformed the global energy market, in part through the adoption of multi-stage dissolvable frac plugs. Magnesium has been favored for the bulk of these plugs, requiring the development of materials to suit specific field requirements. Herein, the mechanical and dissolution results from equal channel angular pressing (ECAP) of two cast dissolvable magnesium alloys are described. ECAP was selected as a route to increase the mechanical properties of two formulations of dissolvable magnesium, as solutionizing had failed. In this study, 1” square cross-section samples of cast Mg alloy formulations containing rare earth elements were processed at temperatures ranging from 200 to 350 °C, at a rate of 0.005”/s, with a backpressure from 0 to 70 MPa, in a brass, or brass plus graphite, sheet. Generally, the yield and ultimate tensile strength (UTS) doubled for all formulations. For formulation DM-2, the yield strength increased from 100 MPa to 250 MPa and the UTS from 175 MPa to 325 MPa, but the strain fell from 2 to 1%. For formulation DM-3, the yield strength increased from 75 MPa to 200 MPa and the UTS from 150 MPa to 275 MPa, with strain increasing from 1 to 3%. Meanwhile, ECAP was also found to reduce the dissolution rate significantly. A microstructural analysis showed grain refinement of the alloy and the movement of secondary phases away from the grain boundaries. It is believed that this reconfiguration of the grain boundary phases increased the mechanical properties and decreased the dissolution rate. ECAP processing of dissolvable high-rare-earth-content magnesium is possible despite the brittleness of the material. ECAP is a possible processing route to increase the mechanical properties of dissolvable aluminum alloys that cannot be extruded.

Keywords: equal channel angular processing, dissolvable magnesium, frac plug, mechanical properties

Procedia PDF Downloads 115
4024 Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision

Authors: M. Shaker, S. Minaei, M. H. Khoshtaghaza, A. Banakar, A. Jafari

Abstract:

In this study, a machine vision and singulation system was developed to separate paddy from rice and determine paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image processing software. The singulation device consists of a kernel holding surface, a motor with a vacuum fan, and a dimmer. For separation of paddy from rice in the image, it was necessary to set a threshold. Therefore, some images of paddy and rice were sampled and the RGB values of the images were extracted using MATLAB software. Then the mean and standard deviation of the data were determined. An image processing algorithm was developed in MATLAB to determine paddy/rice separation and rice breakage and paddy husking percentages, using the blue-to-red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Results from the evaluation of the image processing algorithm showed accuracies of 98.36% and 91.81% for paddy husking and rice breakage percentage, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operation of the kernel singulation system.
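The study's algorithm is implemented in MATLAB; a minimal Python sketch of the reported blue-to-red-ratio rule follows (the 0.75 threshold is from the study, but the direction of the comparison and the example RGB values are assumptions made for illustration):

```python
def classify_kernel(mean_rgb, threshold=0.75):
    """Classify a segmented kernel as 'paddy' or 'rice' from its mean RGB colour
    using the blue-to-red ratio; 0.75 is the threshold reported in the study.
    Assumption: husked (whiter) rice has a higher B/R ratio than yellowish paddy."""
    r, _, b = mean_rgb
    return "rice" if b / r > threshold else "paddy"

print(classify_kernel((180, 170, 160)))  # B/R ≈ 0.89 → rice
print(classify_kernel((190, 150, 110)))  # B/R ≈ 0.58 → paddy
```

In the full pipeline, each kernel would first be segmented from the imaging-chamber background before its mean colour is computed.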

Keywords: breakage, computer vision, husking, rice kernel

Procedia PDF Downloads 379
4023 Linguistic Analysis of Borderline Personality Disorder: Using Language to Predict Maladaptive Thoughts and Behaviours

Authors: Charlotte Entwistle, Ryan Boyd

Abstract:

Recent developments in information retrieval techniques and natural language processing have allowed for greater exploration of psychological and social processes. Linguistic analysis methods for understanding behaviour have provided useful insights within the field of mental health. One area within mental health that has received little attention, though, is borderline personality disorder (BPD). BPD is a common mental health disorder characterised by instability of interpersonal relationships, self-image and affect. It also manifests through maladaptive behaviours, such as impulsivity and self-harm. Examination of language patterns associated with BPD could allow for a greater understanding of the disorder and its links to maladaptive thoughts and behaviours. Language analysis methods could also be used in a predictive way, such as by identifying indicators of BPD or predicting maladaptive thoughts, emotions and behaviours. Additionally, associations uncovered between language and maladaptive thoughts and behaviours could then be applied at a more general level. This study explores the linguistic characteristics of BPD, and their links to maladaptive thoughts and behaviours, through the analysis of social media data. Data were collected from a large corpus of posts from the publicly available social media platform Reddit, namely from the ‘r/BPD’ subreddit, where people identify as having BPD. Data were collected using the Python Reddit API Wrapper and included all users who had posted within the BPD subreddit. All posts were manually inspected to ensure that they were not posted by someone who clearly did not have BPD, such as people posting about a loved one with BPD. These users were then tracked across all other subreddits in which they had posted, and data from these subreddits were also collected. Additionally, data were collected from a random control group of Reddit users.
Disorder-relevant behaviours, such as self-harming or aggression-related behaviours, outlined within Reddit posts were coded by expert raters. All posts and comments were aggregated by user and split by subreddit. Language data were then analysed using the Linguistic Inquiry and Word Count (LIWC) 2015 software. LIWC is a text analysis program that identifies and categorises words based on linguistic and paralinguistic dimensions, psychological constructs and personal concern categories. Statistical analyses of the linguistic features were then conducted. Findings revealed distinct linguistic features associated with BPD, based on Reddit posts, which differentiated these users from the control group. Language patterns were also found to be associated with the occurrence of maladaptive thoughts and behaviours. Thus, this study demonstrates that there are indeed linguistic markers of BPD present on social media. It also implies that language could be predictive of maladaptive thoughts and behaviours associated with BPD. These findings are important, as they suggest the potential for clinical interventions to be provided based on the language of people with BPD to try to reduce the likelihood of maladaptive thoughts and behaviours occurring, for example by social media tracking or by engaging people with BPD in expressive writing therapy. Overall, this study has provided a greater understanding of the disorder and how it manifests through language and behaviour.
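The LIWC lexicon itself is proprietary, but its word-counting principle can be sketched with a toy dictionary — the category names echo LIWC conventions, while the word lists and sample text below are invented for illustration:

```python
def category_rates(text, categories):
    """LIWC-style scoring: per cent of words in a text that fall into each
    category dictionary (the real LIWC lexicon is far larger and proprietary)."""
    words = text.lower().split()
    return {name: 100.0 * sum(w in lex for w in words) / len(words)
            for name, lex in categories.items()}

toy_lexicon = {
    "negemo": {"hurt", "angry", "alone"},   # negative-emotion words
    "i": {"i", "me", "my"},                 # first-person singular pronouns
}
rates = category_rates("i feel alone and angry", toy_lexicon)
print(rates)
```

Per-user category rates of this kind are the features that the statistical analyses would compare between the BPD and control groups.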

Keywords: behaviour analysis, borderline personality disorder, natural language processing, social media data

Procedia PDF Downloads 349
4022 Detection of Intentional Attacks in Images Based on Watermarking

Authors: Hazem Munawer Al-Otum

Abstract:

In this work, an efficient watermarking technique is proposed and can be used for detecting intentional attacks in RGB color images. The proposed technique can be implemented for image authentication and exhibits high robustness against unintentional common image processing attacks. It deploys two measures to discern between intentional and unintentional attacks based on using a quantization-based technique in a modified 2D multi-pyramidal DWT transform. Simulations have shown high accuracy in detecting intentionally attacked regions while exhibiting high robustness under moderate to severe common image processing attacks.
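The paper's quantization-based embedding operates on modified 2D multi-pyramidal DWT coefficients; as a simplified stand-in for that step, quantization index modulation (QIM) on a single scalar value illustrates the principle — the step size and values below are arbitrary, and this is not the authors' exact scheme:

```python
def qim_embed(value, bit, step=8):
    """Quantization index modulation: snap a coefficient to a quantizer grid
    whose index parity encodes the watermark bit (even index = 0, odd = 1)."""
    q = round(value / step) * step
    return q if (q // step) % 2 == bit else q + step

def qim_extract(value, step=8):
    """Recover the bit from the parity of the quantizer index; a tampered
    region shifts coefficients off-grid and flips extracted bits."""
    return round(value / step) % 2

print(qim_extract(qim_embed(37.0, 1)))
print(qim_extract(qim_embed(37.0, 0)))
```

In a semi-fragile scheme, mild common processing leaves the quantized parity intact while intentional local tampering breaks it, which is what allows the two kinds of attack to be discerned.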

Keywords: image authentication, copyright protection, semi-fragile watermarking, tamper detection

Procedia PDF Downloads 255
4021 Control the Flow of Big Data

Authors: Shizra Waris, Saleem Akhtar

Abstract:

Big data is a research area receiving attention from academia and the IT community. In the digital world, the amounts of data produced and stored have grown enormously within a short period of time. Consequently, this fast-increasing rate of data has created many challenges. In this paper, we use functionalism and structuralism paradigms to analyze the genesis of big data applications and their current trends. The paper presents a complete discussion of state-of-the-art big data technologies based on batch and stream data processing, and the strengths and weaknesses of these technologies are analyzed. The study also covers big data analytics techniques, processing methods, reported case studies from different vendors, several open research challenges, and the opportunities brought about by big data. The similarities and differences of these techniques and technologies, based on important limitations, are also investigated. Emerging technologies are suggested as a solution to big data problems.

Keywords: computer, IT community, industry, big data

Procedia PDF Downloads 192
4020 Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing

Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor

Abstract:

This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), such as the Raspberry Pi 2, so that it can run in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step performs the detection of moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), together with a shadow removal algorithm using physics-based features, followed by morphological operations. In the third step, vehicle detection is performed using edge detection algorithms, with vehicle tracking through Kalman filters. The last step of the proposed algorithm registers each passing vehicle and classifies it according to its area. A self-sustaining system is proposed, powered by batteries and photovoltaic solar panels, with data transmission done through GPRS (General Packet Radio Service), eliminating the need for external cabling, which will facilitate its deployment and relocation to any location where it can operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
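As a hedged, much-simplified stand-in for the GMM-based background subtraction in the second step, a running-average background model over a flattened grey frame can be sketched as follows — the pixel values, learning rate, and threshold are all invented:

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model (a simplification of the per-pixel
    Gaussian mixture the paper uses): bg <- (1 - alpha) * bg + alpha * frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Pixels that deviate strongly from the background model are flagged
    as moving objects (vehicle candidates)."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

bg = [100.0] * 6                        # flattened grey background frame
frame = [100, 100, 200, 210, 100, 100]  # a bright vehicle enters pixels 2-3
print(foreground_mask(bg, frame))       # → [False, False, True, True, False, False]
bg = update_background(bg, frame)       # slowly absorb scene changes
```

The resulting mask would then pass through shadow removal and morphological operations before the edge-detection and Kalman-tracking steps.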

Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing

Procedia PDF Downloads 319
4019 Tool Wear Monitoring of High Speed Milling Based on Vibratory Signal Processing

Authors: Hadjadj Abdechafik, Kious Mecheri, Ameur Aissa

Abstract:

The objective of this study is to develop a method for processing the vibratory signals generated during horizontal high-speed milling without any coolant, in order to establish a monitoring system able to improve machining performance. Many tests were carried out on a horizontal high-speed machining centre (PCI Météor 10) under given cutting conditions, using a milling cutter with a single insert and measuring its frontal wear from its new state, taken as the reference state, to a worn state considered unsuitable for further use. The results obtained show that the first harmonic follows the evolution of frontal wear well. In addition, a wavelet transform is used for signal processing and is found to be useful for observing the evolution of the wavelet approximations over the cutting tool's life. The power and Root Mean Square (RMS) values of the wavelet-transformed signal gave the best results and can be used for tool wear estimation. All these features can constitute suitable indicators for effective detection of tool wear and can then be used as input parameters of an online monitoring system. We also noted the remarkable influence of the machining cycle on the quality of the measurements through the introduction of a bias into the signal; this phenomenon appears particularly in horizontal milling and is ignored in the majority of studies.
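The RMS indicator mentioned above is straightforward to compute on any signal window (here on plain samples rather than the study's wavelet approximations); a minimal sketch with invented values:

```python
import math

def rms(signal):
    """Root Mean Square of a signal window, one of the wear indicators the
    study computes on the wavelet-transformed vibration signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

print(rms([3.0, -3.0, 3.0, -3.0]))  # constant magnitude 3 → RMS 3.0
```

Tracking how this value rises across successive machining cycles is what would feed the online tool-wear monitoring system.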

Keywords: flank wear, vibration, milling, signal processing, monitoring

Procedia PDF Downloads 596