Search results for: computer processing of large databases
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12481

9511 A New Approach towards the Development of Next Generation CNC

Authors: Yusri Yusof, Kamran Latif

Abstract:

Computer Numeric Control (CNC) machines have been widely used in industry since their inception. CNC technology is currently applied to operations such as milling, drilling, packing and welding, and with the rapid growth of the manufacturing world, the demand for flexibility in CNC machines has increased sharply. Commercial CNCs have traditionally failed to provide this flexibility because their closed structure gives no access to the controller's inner features, and the ISO data interface model on which they operate is limited. To overcome these problems, Open Architecture Control (OAC) technology and the STEP-NC data interface model were introduced. At present, the Personal Computer (PC) is the best platform for the development of open CNC systems. This paper highlights the interpretation, verification and execution of the ISO data interface model with the introduction of new techniques. The proposed system is composed of ISO data interpretation, 3D simulation and machine motion control modules. The system was tested on an old 3-axis CNC milling machine, and its performance was found to be satisfactory. The implementation has successfully enabled a sustainable manufacturing environment.
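
As a rough illustration of the ISO data interpretation stage (the paper implements this module in LabVIEW; the block format and parsing rules below are generic assumptions, not the authors' code), a minimal sketch of splitting an ISO 6983 (G-code) block into addressed words might look like this:

```python
import re

# Minimal ISO 6983 (G-code) block parser: splits a line such as
# "N10 G01 X12.5 Y-4.0 F200" into (address, value) words.
# For brevity, repeated addresses in one block are not handled.
WORD = re.compile(r"([A-Z])\s*(-?\d+\.?\d*)")

def parse_block(line: str) -> dict:
    """Return a mapping like {'N': 10.0, 'G': 1.0, 'X': 12.5, ...}."""
    line = line.split(";")[0].upper()  # drop trailing comments
    return {addr: float(val) for addr, val in WORD.findall(line)}

print(parse_block("N10 G01 X12.5 Y-4.0 F200"))
```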

Keywords: CNC, ISO 6983, ISO 14649, LabVIEW, open architecture control, reconfigurable manufacturing systems, sustainable manufacturing, Soft-CNC

Procedia PDF Downloads 499
9510 Structural Correlates of Reduced Malicious Pleasure in Huntington's Disease

Authors: Sandra Baez, Mariana Pino, Mildred Berrio, Hernando Santamaria-Garcia, Lucas Sedeno, Adolfo Garcia, Sol Fittipaldi, Agustin Ibanez

Abstract:

Schadenfreude refers to the perceiver’s experience of pleasure at another’s misfortune. This is a multidetermined emotion which can be evoked by hostile feelings and envy. The experience of Schadenfreude engages mechanisms implicated in diverse social cognitive processes. For instance, Schadenfreude involves heightened reward processing, accompanied by increased striatal engagement and it interacts with mentalizing and perspective-taking abilities. Patients with Huntington's disease (HD) exhibit reductions of Schadenfreude experience, suggesting a role of striatal degeneration in such an impairment. However, no study has directly assessed the relationship between regional brain atrophy in HD and reduced Schadenfreude. This study investigated whether gray matter (GM) atrophy in HD patients correlates with ratings of Schadenfreude. First, we compared the performance of 20 HD patients and 23 controls on an experimental task designed to trigger Schadenfreude and envy (another social emotion acting as a control condition). Second, we compared GM volume between groups. Third, we examined brain regions where atrophy might be associated with specific impairments in the patients. Results showed that while both groups showed similar ratings of envy, HD patients reported lower Schadenfreude. The latter pattern was related to atrophy in regions of the reward system (ventral striatum) and the mentalizing network (precuneus and superior parietal lobule). Our results shed light on the intertwining of reward and socioemotional processes in Schadenfreude, while offering novel evidence about their neural correlates. In addition, our results open the door to future studies investigating social emotion processing in other clinical populations characterized by striatal or mentalizing network impairments (e.g., Parkinson’s disease, schizophrenia, autism spectrum disorders).

Keywords: envy, gray matter atrophy, Huntington's disease, Schadenfreude, social emotions

Procedia PDF Downloads 320
9509 Evaluation of the Potential of Olive Pomace Compost for Using as a Soil Amendment

Authors: M. Černe, I. Palčić, D. Anđelini, D. Cvitan, N. Major, M. Lukić, S. Goreta Ban, D. Ban, T. Rijavec, A. Lapanje

Abstract:

Context: In the Mediterranean basin, large quantities of lignocellulosic by-products, such as olive pomace (OP), are generated during olive processing on an annual basis. Due to the phytotoxic nature of OP, composting is recommended for its stabilisation in order to produce an end-product safe for agricultural use. Research Aim: This study aims to evaluate the applicability of olive pomace compost (OPC) as a soil amendment by considering its physical and chemical characteristics and microbiological parameters. Methodology: The OPC samples were collected from the surface and deep layers of the compost pile after 8 months. The samples were analyzed for their C/N ratio, pH, EC, total phenolic content, residual oils, and elemental content, as well as colloidal properties and microbial community structure. The specific analytical approaches used are detailed in the poster. Findings: The results showed that the pH of OPC ranged from 7.8 to 8.6, while the electrical conductivity ranged from 770 to 1608 mS/cm. The levels of nitrogen (N), phosphorus (P), and potassium (K) varied within the ranges of 1.5 to 27.2 g/kg d.w., 1.6 to 1.8 g/kg d.w., and 6.5 to 7.5 g/kg d.w., respectively. The contents of potentially toxic metals such as chromium (Cr), copper (Cu), nickel (Ni), lead (Pb), and zinc (Zn) were below the EU limits for soil improvers. The microbial community structure follows the gradient from the outer to the innermost layer, with relatively low amounts of DNA; this gradient indicates the need to develop composting strategies that surpass the conventional approach. However, the low amounts of total phenols and oil residues indicated efficient biodegradation during composting. The carbon-to-nitrogen ratio (C/N) within the range of 13 to 16 suggested that OPC can be used as a soil amendment. Overall, the study suggests that composting can be a promising strategy for environmentally friendly OP recycling. Theoretical Importance: This study contributes to the understanding of the use of OPC as a soil amendment and its potential benefits for resource recycling and reducing environmental burdens. It also highlights the need for improved composting strategies to optimize the process. Data Collection and Analysis Procedures: The OPC samples were taken from the compost pile and characterised for selected chemical, physical and microbial parameters; the specific analytical procedures are described in detail in the poster. Question Addressed: This study addresses the question of whether composting can be optimized to improve the biodegradation of OP. Conclusion: The study concludes that OPC has the potential to be used as a soil amendment due to its favorable physical and chemical characteristics, low levels of potentially toxic metals, and efficient biodegradation during composting. However, the results also suggest the need for refined composting strategies to improve the quality of OPC.

Keywords: olive pomace compost, waste valorisation, agricultural use, soil amendment

Procedia PDF Downloads 57
9508 Emotiv EPOC BCI Matrix Speller Based on Single Emokey

Authors: S. M. Abdullah Al Mamun

Abstract:

Human Computer Interaction (HCI) is an excellent area for researchers seeking to make daily life simpler and faster. The hardware required for any BCI is generally expensive and unaffordable for most people. Emotiv is one solution to this problem: it provides an electroencephalograph (EEG) signal and interprets brain activity. A BCI virtual speller is an important application for people who have lost their hands or their ability to speak because of disease or unexpected accident. In this paper, a matrix speller has been designed for the first time for Bengali-speaking people around the world. Bengali is one of the most commonly spoken languages, and many disabled persons will now be able to express their wishes in their mother tongue. The application is also usable for social networks and daily-life communication. For this virtual keyboard, the well-known matrix speller method with column flashing is applied, controlled by a single Emokey only. Emokey is a feature that translates emotional states into application inputs. The results presented in this paper show that the ITR (Information Transfer Rate) was 29.4 bits/min and that a typing speed of up to 7.43 characters per minute was achieved.
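
For reference, speller ITRs of this kind are typically computed with the standard Wolpaw formula; a short sketch follows (the target count, accuracy and selection time in the example are illustrative values, not figures taken from the study):

```python
import math

def wolpaw_itr(n: int, p: float, t_sec: float) -> float:
    """Information transfer rate in bits/min (standard Wolpaw formula).

    n: number of selectable targets, p: selection accuracy,
    t_sec: time per selection in seconds.
    """
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / t_sec)

# e.g. a 6x6 matrix speller at 90% accuracy and 8 s per selection:
print(f"{wolpaw_itr(36, 0.90, 8.0):.1f} bits/min")  # ~31 bits/min
```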

Keywords: brain computer interface, Emotiv EPOC, EEG, virtual keyboard, matrix speller

Procedia PDF Downloads 288
9507 Study of White Salted Noodles Air Dehydration Assisted by Microwave as Compared to Conventional Air Dried Process

Authors: Chiun-C. R. Wang, I-Yu Chiu

Abstract:

Drying is the most difficult and critical step to control in dried salted noodle production. Microwave drying has the specific advantage of rapid and uniform heating due to the penetration of microwaves into the body of the product. A microwave-assisted facility offers a quick and energy-saving method of food dehydration compared to the conventional air-drying method of noodle preparation. Recently, numerous studies of the rheological characteristics of pasta or spaghetti have been carried out with microwave-assisted and conventional air driers, and many agricultural products have been dried successfully. However, very little research has evaluated the physicochemical characteristics and cooking quality of microwave-assisted air-dried salted noodles. The purpose of this study was to compare the effects of the conventional air and microwave-assisted air drying methods on the physicochemical properties and eating quality of rice bran noodles. Three microwave power levels (0.5 kW, 0.75 kW and 1.0 kW) combined with 50 °C hot air were applied for the dehydration of the noodles, and three proportions of rice bran (0-20%) were incorporated into the salted noodle formulation. The appearance, optimum cooking time, cooking yield and losses, textural profile analysis, and sensory evaluation of the rice bran noodles were measured. The results indicated that the high-power (1.0 kW) microwave caused partial burning and porosity on the surface of the rice bran noodles; however, no significant difference appeared between the surface of the low-power (0.5 kW) microwave-assisted noodles and the control set. The optimum cooking time of the noodles decreased as higher microwave power was applied or a higher proportion of rice bran was incorporated. Noodles with the higher proportion of rice bran (20%) or dried at higher microwave power showed higher color intensity and higher cooking losses than conventionally air-dried noodles. Meanwhile, the higher-power microwave-assisted air-dried noodles showed larger air cells inside the noodles and slight burnt stripes on the surface. The firmness of cooked rice bran noodles decreased slightly when the noodles were dried by the high-power microwave-assisted method. The shearing force, tensile strength, elasticity and texture profiles of cooked rice noodles decreased as the proportion of rice bran increased. Sensory evaluation indicated that conventionally dried noodles obtained higher springiness, cohesiveness and overall acceptability than high-power (1.0 kW) microwave-assisted dried noodles; however, low-power (0.5 kW) microwave-assisted dried noodles showed sensory attributes and acceptability comparable to conventionally dried noodles. Moreover, the sensory attributes of firmness, springiness and cohesiveness decreased, while stickiness increased, as the rice bran proportion in the salted noodles increased. These results suggest that incorporating a lower proportion of rice bran and using low-power microwave-assisted drying can produce noodles with a faster cooking time and a more acceptable cooked quality than conventional drying.

Keywords: white salted noodles, microwave-assisted air drying processing, cooking yield, appearance, texture profiles, scanning electrical microscopy, sensory evaluation

Procedia PDF Downloads 475
9506 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail

Abstract:

In recent decades, medical imaging was dominated by the use of costly film media for the review and archival of medical investigations; however, developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard produced another approach based on the World Wide Web. Web technologies have been used successfully in telemedicine applications, and here the combination of web technologies with DICOM is used to design a web-based, open-source DICOM viewer. The web server allows the query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic site pages for medical image visualization and processing were created using JavaScript and HTML5. The XAMPP 'Apache server' is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, it is platform independent, it allows images to be displayed and manipulated efficiently, and it is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which the 2-D discrete wavelet transform decomposes the image and the wavelet coefficients are thresholded and then transmitted with entropy encoding, decreasing transmission time and storage cost. The compression performance was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR); a compression ratio of 83.86% was achieved when the 'coif3' wavelet filter was used.
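
A minimal sketch of the threshold-and-score pipeline described above, using PyWavelets (the hard-threshold strategy, keep fraction and decomposition level are illustrative assumptions; the paper reports results for the 'coif3' filter):

```python
import numpy as np
import pywt

def compress_and_score(img, wavelet="coif3", level=2, keep=0.05):
    """Threshold a 2-D DWT of `img`, reconstruct, and report MSE/PSNR/CR."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Hard threshold: keep only the largest `keep` fraction of coefficients.
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                        wavelet)[: img.shape[0], : img.shape[1]]
    mse = np.mean((img - rec) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse)          # assumes 8-bit pixel range
    cr = 100.0 * (1.0 - np.count_nonzero(arr) / arr.size)  # % coefficients zeroed
    return mse, psnr, cr
```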

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN

Procedia PDF Downloads 145
9505 Study of Syntactic Errors for Deep Parsing at Machine Translation

Authors: Yukiko Sasaki Alam, Shahid Alam

Abstract:

Syntactic parsing is vital for the semantic treatment performed by many applications related to natural language processing (NLP), because form and content coincide in many cases. However, it has not yet reached a level of reliable performance. By manually examining and analyzing individual machine translation output errors that involve syntax as well as semantics, this study attempts to discover what is required to improve syntactic and semantic parsing.

Keywords: syntactic parsing, error analysis, machine translation, deep parsing

Procedia PDF Downloads 534
9504 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model

Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis

Abstract:

Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with an SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. The new method has been tested on all known transit signals with 100% confirmation. In addition, it has been successfully applied to the Kepler Objects of Interest (KOI) data and has identified a few new Earth-sized, ultra-short-period (USP) exoplanet candidates and habitable-planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm in finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
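
For intuition, period folding amounts to phase-wrapping the light curve at each trial period and scoring how sharply the flux dips line up. A CPU sketch of that scoring step is below (a GPU implementation would evaluate the trial-period grid in parallel, for example with CuPy in place of NumPy); the binning and SNR proxy here are illustrative, not the authors' code:

```python
import numpy as np

def fold_snr(time, flux, period, n_bins=200):
    """Fold a light curve at a trial period and score the transit dip."""
    phase = (time % period) / period
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    prof = (np.bincount(bins, weights=flux, minlength=n_bins)
            / np.maximum(np.bincount(bins, minlength=n_bins), 1))
    # Depth of the deepest phase bin relative to the overall scatter.
    return (np.median(prof) - prof.min()) / (prof.std() + 1e-12)

# Dense grid of trial periods; the best-scoring fold flags a candidate.
# periods = np.linspace(0.2, 100.0, 2_000_000)   # days (illustrative)
# scores = [fold_snr(time, flux, p) for p in periods]
```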

Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry

Procedia PDF Downloads 205
9503 Crossing Multi-Source Climate Data to Estimate the Effects of Climate Change on Evapotranspiration Data: Application to the French Central Region

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

Climatic factors are the subject of considerable research, both methodologically and instrumentally. Under the effect of climate change, estimating climate parameters with precision remains one of the main objectives of the scientific community, from the perspective of assessing climate change and its repercussions on humans and the environment. However, many regions of the world suffer from a severe lack of reliable instruments that could make up for this deficit. Alternatively, the use of empirical methods becomes the only way to assess certain parameters that can act as climate indicators. Several scientific methods are used for the evaluation of evapotranspiration, either directly at climate stations or by empirical methods. All of these methods take a point-based approach and in no case capture the spatial variation of this parameter. We therefore propose in this paper the use of three sources of information (the network of Meteo France weather stations, world databases, and MODIS satellite images) to evaluate spatial evapotranspiration (ETP) using the Turc method. This first step will reflect the degree of relevance of the indirect (satellite) methods and their generalization to sites without stations. Representing the spatial variation of this parameter in a geographical information system (GIS) accounts for the heterogeneity of its behaviour. This heterogeneity is due to the influence of site morphological factors and makes it possible to appreciate the role of certain topographic and hydrological parameters. A phase of predicting the medium- and long-term evolution of evapotranspiration under the effect of climate change, by applying the Intergovernmental Panel on Climate Change (IPCC) scenarios, gives a realistic overview of the contribution of aquatic systems at the scale of the region.
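
As a pointer to the estimator named above, a commonly cited monthly form of the Turc method computes ETP from mean air temperature and global radiation; a minimal sketch follows (coefficient conventions vary between sources, so treat the constants as an assumption rather than the study's exact formulation):

```python
def turc_pet_monthly(t_mean_c: float, rg_cal_cm2_day: float,
                     february: bool = False) -> float:
    """Monthly potential evapotranspiration (mm) via the Turc formula.

    t_mean_c:        mean monthly air temperature (deg C)
    rg_cal_cm2_day:  mean global radiation (cal/cm^2/day)
    Uses the humid-climate form; 0.37 replaces 0.40 for February's
    shorter month, and an extra aridity correction (not shown) is
    usually applied when mean relative humidity falls below 50%.
    """
    k = 0.37 if february else 0.40
    return k * (t_mean_c / (t_mean_c + 15.0)) * (rg_cal_cm2_day + 50.0)

print(turc_pet_monthly(20.0, 450.0))  # ~114 mm for a warm, sunny month
```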

Keywords: climate change, ETP, MODIS, IPCC scenarios

Procedia PDF Downloads 85
9502 Improvement of Water Quality of Al Asfar Lake Using Constructed Wetland System

Authors: Jamal Radaideh

Abstract:

Al-Asfar Lake is located about 14 km east of Al-Ahsa and is one of the most important wetland lakes in the Al-Ahsa/Eastern Province of Saudi Arabia. Al-Ahsa is perhaps the largest oasis in the world, with an area of 20,000 hectares; in addition, it is one of the largest and oldest agricultural centers in the region. The surplus farm irrigation water, along with additional water supplied by treated wastewater from the Al-Hofuf sewage station, is collected by a drainage network and discharged into Al-Asfar Lake. The lake has good wetlands, sand dunes, and large expanses of open and shallow water. Salt-tolerant vegetation is present in some of the shallow areas around the lake, and huge stands of Phragmites reeds occur around it. The lake presents an important habitat for wildlife and birds, something not expected in a large desert. Although high evaporation rates in the range of 3250 mm are common, the water that remains in the evaporation lakes throughout all seasons of the year is used to supply cattle with drinking water and for aquifer recharge. Investigations showed high concentrations of nitrogen (N), phosphorus (P), biological oxygen demand (BOD), chemical oxygen demand (COD), and salinity in the discharge to Al-Asfar Lake from the D2 drain. It is expected that the majority of the BOD, COD, and N originates from wastewater discharge and leachate from surplus irrigation water, which also contributes the majority of the P and salinity. The significant content of nutrients and biological oxygen demand reduces the available oxygen in the water. The present project aims to improve the water quality of the lake using constructed wetland trains that will be built around the lake. Phragmites reeds, which already occur around the lake, will be used.

Keywords: Al Asfar lake, constructed wetland, water quality, water treatment

Procedia PDF Downloads 424
9501 Digitalization and High Audit Fees: An Empirical Study Applied to US Firms

Authors: Arpine Maghakyan

Abstract:

The purpose of this paper is to study the relationship between the level of industry digitalization and audit fees and, especially, the relationship between Big 4 auditor fees and the industry digitalization level. On the one hand, automation of business processes decreases internal control weaknesses and manual mistakes and increases work effectiveness and integration. On the other hand, it may cause serious misstatements, high business risks or even bankruptcy, typically in the early stages of automation. Incomplete automation can bring high audit risk, especially if the auditor does not fully understand the client's business automation model. Higher audit risk will consequently cause higher audit fees, and higher audit fees for clients with a high automation level are most pronounced in Big 4 auditors' behavior. Using data on US firms from 2005-2015, we found that industry-level digitalization interacts with auditor quality in determining audit fees. Moreover, the choice of a Big 4 or non-Big 4 auditor is correlated with the client's industry digitalization level. A Big 4 client with a higher digitalization level pays more than one with a low digitalization level. In addition, a highly digitalized firm with a Big 4 auditor pays a higher audit fee than a non-Big 4 client. We use audit fees and firm-specific variables from the Audit Analytics and Compustat databases. We analyze the collected data using fixed effects regression methods, with Wald tests for sensitivity checks. We use firm fixed effects regression models to determine the connections between technology use in business and audit fees, controlling for firm size, complexity, inherent risk, profitability and auditor quality. We chose the fixed effects model because it makes it possible to control for variables that have not been, or cannot be, measured.
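
A minimal sketch of the kind of fixed-effects specification described above, using statsmodels (the file, variable names and set of controls are hypothetical; the study's exact model and interaction coding are not reproduced here):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel: firm, year, log_fee, big4, digital, size, roa.
df = pd.read_csv("audit_panel.csv")

# Firm and year fixed effects via dummies; Big4 x digitalization interaction;
# standard errors clustered by firm.
model = smf.ols(
    "log_fee ~ big4 * digital + size + roa + C(firm) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(model.summary())
```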

Keywords: audit fees, auditor quality, digitalization, Big4

Procedia PDF Downloads 286
9500 Image Classification with Localization Using Convolutional Neural Networks

Authors: Bhuyain Mobarok Hossain

Abstract:

Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNN) have greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in the field of computer vision, especially in video surveillance systems. To solve this problem, we apply a convolutional neural network at multiple scales and multiple locations in the image using a sliding window. Most detection networks predict a bounding box around the area of interest; in contrast to this architecture, we treat the problem as a classification problem in which each region of the image is a separate section. Image classification is the method of predicting an individual category for an image from a set of data points, and it includes assigning any labels present throughout the image: an image can be classified as a day or a night shot, or, likewise, images of cars and motorbikes can be automatically placed in their respective collections. Deep learning for image classification generally relies on convolutional layers, and a network built from them is referred to as a convolutional neural network (CNN).
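
A toy sketch of the single-scale sliding-window step in PyTorch (the patch classifier `model`, window size and stride are placeholders; the paper's actual network and multi-scale loop are not specified in the abstract):

```python
import torch
import torch.nn.functional as F

def sliding_window_scores(model, image, win=64, stride=32):
    """Run a patch classifier at every window location (single scale).

    image: (C, H, W) tensor; model maps (N, C, win, win) -> (N, num_classes).
    Returns a coarse class-score map of shape (num_classes, H', W').
    """
    patches = image.unfold(1, win, stride).unfold(2, win, stride)  # C,H',W',win,win
    c, hp, wp = patches.shape[:3]
    batch = patches.permute(1, 2, 0, 3, 4).reshape(hp * wp, c, win, win)
    with torch.no_grad():
        scores = F.softmax(model(batch), dim=1)   # (H'*W', num_classes)
    return scores.T.reshape(-1, hp, wp)
```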

Keywords: image classification, object detection, localization, particle filter

Procedia PDF Downloads 285
9499 H.263 Based Video Transceiver for Wireless Camera System

Authors: Won-Ho Kim

Abstract:

In this paper, the design of an H.263-based wireless video transceiver is presented for a wireless camera system. It uses a standard Wi-Fi transceiver, and the coverage area is up to 100 m. Furthermore, the standard H.263 video encoding technique is used for video compression, since a wireless video transmitter is unable to transmit high-capacity raw data in real time; the implemented system is capable of streaming NTSC 720x480 video at a rate of less than 1 Mbps.
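
To see why compression is unavoidable here, consider the raw bitrate of the source (assuming 4:2:0 chroma subsampling, i.e. 12 bits/pixel, at roughly 30 frames/s for NTSC):

```
R_raw = 720 x 480 pixels x 12 bits x 30 fps ≈ 124.4 Mbit/s
124.4 Mbit/s / 1 Mbit/s  →  a compression ratio on the order of 125:1
```

This is the gap that H.263 inter-frame coding closes to reach the sub-1 Mbps stream reported above.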

Keywords: wireless video transceiver, video surveillance camera, H.263 video encoding, digital signal processing

Procedia PDF Downloads 352
9498 Factors Affecting the Caregiving Experience of Children with Parental Mental Illnesses: A Systematic Review

Authors: N. Anjana

Abstract:

Worldwide, the prevalence of mental illnesses is increasing. The issues of persons with mental illness and their caregivers have been well documented in the literature. However, data regarding the factors affecting the caregiving experience of children with parental mental illnesses are sparse. This systematic review aimed to examine the existing literature on the factors affecting the caregiving experience of children of parents with mental illnesses. A comprehensive search of databases such as PubMed, EBSCO, JSTOR, ProQuest Central, Taylor and Francis Online, and Google Scholar was performed to identify peer-reviewed papers examining the factors associated with the caregiving experiences of children with parental mental illnesses such as schizophrenia and major depression, for the 10-year period ending November 2019. Two researchers screened studies for eligibility. One researcher extracted data from eligible studies while a second verified the results for accuracy and completeness. Quality appraisal was conducted by both reviewers. Data describing the major factors associated with the caregiving experiences of children with parental mental illnesses were synthesized and reported in narrative form. Five studies were considered eligible and included in this review. Findings are organized under major themes such as the impact of parental mental illness on children's daily life, how children provide care to their mentally ill parents as primary carers, social and relationship factors associated with their caregiving, positive and negative experiences in caregiving, and how children cope with their experiences of parental mental illness. Literature relating to the caregiving experiences of children with parental mental illnesses is sparse. More research is required to better understand children's caregiving experiences related to parental mental illness so as to better inform management for enhancing their mental health, wellbeing, and caregiving practice.

Keywords: caregiving experience, children, parental mental illnesses, wellbeing

Procedia PDF Downloads 124
9497 Barriers and Enablers to Climate and Health Adaptation Planning in Small Urban Areas in the Great Lakes Region

Authors: Elena Cangelosi, Wayne Beyea

Abstract:

This research expands the resilience planning literature by exploring the barriers and enablers to climate and health adaptation planning for small-urban, coastal Great Lakes communities. With funding from the United States Centers for Disease Control and Prevention (CDC) Climate-Ready States and Cities Initiative, this research took place during a 3-year pilot intervention project which integrates urban planning and public health. The project used the CDC's Building Resilience Against Climate Effects (BRACE) framework to prevent or reduce the human health impacts of climate change in Marquette County, Michigan. Using a deliberation-with-analysis planning process, interviews, focus groups, and community meetings with over 25 stakeholder groups and over 100 participants identified the area's climate-related health concerns and adaptation interventions to address those concerns. Marquette County, on the shores of Lake Superior, the largest of the Great Lakes, was selected for the project based on its existing adaptive capacity and proactive approach to climate adaptation planning. With Marquette County as the context, this study fills a gap in the adaptation literature, which currently emphasizes large-urban or agriculturally based rural areas and largely neglects small urban areas. This research builds on the qualitative case-study, survey, and interview approach established by previous researchers on contextual barriers and enablers for adaptation planning. It uses a case study approach, including surveys and interviews of public officials, to identify the barriers and enablers for climate and health adaptation planning for small-urban areas within a large, non-agricultural Great Lakes county. The researchers hypothesize that the barriers and enablers will, in some cases, overlap those found in other contexts but, in many cases, will be unique to a rural setting. The study reveals that funding, staff capacity, and communication across a large, rural geography act as the main barriers, while strong networks and collaboration, interested leaders, and community interest rooted in a strong human-land connection act as the primary enablers. Challenges unique to rural areas are revealed, including weak opportunities for grant funding, large geographical distances, communication challenges with an aging and remote population, and the out-migration of educated residents. Enablers that may be unique to rural contexts include strong collaborative relationships across jurisdictions for regional work and strong connections between residents and the land. As the factors that enable and prevent climate change planning are highly contextual, understanding and appropriately addressing the unique factors at play for small-urban communities is key for effective planning in those areas. By identifying and addressing the barriers and enablers to climate and health adaptation planning for small-urban, coastal areas, this study can help Great Lakes communities appropriately build resilience to the adverse impacts of climate change. In addition, this research expands the breadth of research on, and understanding of, the challenges and opportunities planners confront in the face of climate change.

Keywords: climate adaptation and resilience, climate change adaptation, climate change and urban resilience, governance and urban resilience

Procedia PDF Downloads 106
9496 Neurofeedback for Anorexia-RelaxNeuron-Aimed in Dissolving the Root Neuronal Cause

Authors: Kana Matsuyanagi

Abstract:

Anorexia Nervosa (AN) is a psychiatric disorder characterized by a relentless pursuit of thinness and strict restriction of food. Current therapeutic approaches for AN predominantly revolve around outpatient psychotherapies, which create significant financial barriers for the majority of affected patients, hindering their access to treatment. Nonetheless, AN exhibits one of the highest mortality and relapse rates among psychological disorders, underscoring the urgent need to provide patients with an affordable self-treatment tool, enabling those unable to access conventional medical intervention to address their condition autonomously. To this end, a neurofeedback software application, termed RelaxNeuron, was developed with the objective of providing an economical and portable means to aid individuals in self-managing AN. Electroencephalography (EEG) was chosen as the preferred modality for RelaxNeuron, as it aligns with the study's goal of supplying a cost-effective and convenient solution for addressing AN. The primary aim of the software is to ameliorate the negative emotional responses towards food stimuli and the accompanying aberrant eye-tracking patterns observed in AN patients, ultimately alleviating the profound fear of food, an elemental symptom and, conceivably, the fundamental etiology of AN. The core functionality of RelaxNeuron hinges on the acquisition and analysis of EEG signals, alongside an electrocardiogram (ECG) signal, to infer the user's emotional state while viewing dynamic food-related imagery on the screen. Moreover, the software quantifies the user's performance in accurately tracking the moving food image. These two parameters are then processed by the feedback algorithm, which delivers either negative or positive feedback to the user. Preliminary test results have shown promising outcomes, suggesting the potential advantages of employing RelaxNeuron in the treatment of AN, as evidenced by its capacity to enhance emotional regulation and attentional processing through repetitive and persistent therapeutic interventions.
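
The abstract does not specify RelaxNeuron's signal-processing internals; purely as an illustration of how an EEG-derived feedback score of this general kind can be computed, here is a relative band-power sketch (the 128 Hz sampling rate, the alpha band, and the calm-state interpretation are all assumptions, not the authors' algorithm):

```python
import numpy as np
from scipy.signal import welch

def alpha_feedback(eeg_window: np.ndarray, fs: int = 128) -> float:
    """Toy feedback score from one EEG window (illustrative only).

    Computes relative alpha-band (8-12 Hz) power; higher relative alpha
    is treated here as a calmer state and mapped to positive feedback.
    """
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs * 2)
    alpha = psd[(freqs >= 8) & (freqs <= 12)].sum()
    total = psd[(freqs >= 1) & (freqs <= 40)].sum()
    return float(alpha / total)  # rendered to the user as a feedback bar
```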

Keywords: Anorexia Nervosa, fear conditioning, neurofeedback, BCI

Procedia PDF Downloads 20
9495 A Conceptual Framework of Impact of Lean on the Performance of Construction Industry

Authors: Jaber Shurrab, Matloub Hussain

Abstract:

The rapid pace of change in the construction industry, technological advancements, and rising costs present tremendous challenges for project managers. Project managers are under severe pressure to minimize waste and improve the efficiency of entire operations, and the philosophy of 'lean thinking', so that 'more can be achieved with less', is becoming very popular. Though lean management has strong roots in the manufacturing industry, over the last decade the lean philosophy has started gaining attention in the service industry as well. However, little is known in the context of waste minimization and lean implementation in the construction industry, and this paper deals with this important issue. The primary objective of this paper is to propose a conceptual framework for the exploration of appropriate lean techniques applicable to medium and large construction companies and to measure their impact on the competitiveness and economic performance of construction companies in the United Arab Emirates (UAE). To this end, a comprehensive literature review and interviews with eight project managers of medium and large construction companies in the UAE have been conducted. It has been found that competitiveness and the reduction of waste and costs are critical to the construction industry. This is ongoing research in lean management, giving project managers a practical framework for improving the efficiency of their projects through various lean techniques. Originality/value: The research's significance lies in increasing the effectiveness of the construction industry and influencing the development of a lean construction framework that improves lean construction practices. This contributes to the effort of applying lean techniques in the construction industry. Few publications address lean in the construction industry, particularly in the United Arab Emirates (UAE), compared to lean manufacturing. This research recommends a systematic approach to implementing the anticipated framework within a cyclical look-ahead period and emphasizes the practical implications of the proposed approach.

Keywords: construction, lean, lean manufacturing, waste

Procedia PDF Downloads 271
9494 DHL CSI Solution Design Project

Authors: Mohammed Al-Yamani, Yaser Miaji

Abstract:

The DHL Customer Solutions and Innovation Department (CSI) has been experiencing difficulties when comparing quotes for different customers across different years. Currently, employees process the data by opening several loaded Excel files containing the quotes and manually copying values into another Excel workbook where the comparison is made. This project consists of developing a new and effective database for the DHL CSI department so that information is stored together in the same catalog. We have been assigned to find an efficient algorithm that can deal with the different formats of the Excel workbooks in order to copy and store the express customer rates for the core products (DOX, WPX, IMP) for comparison purposes.
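
A minimal sketch of such a consolidation step using pandas and SQLAlchemy (the folder layout, sheet name, table name and Oracle connection string are hypothetical placeholders; the Oracle target simply mirrors the keyword below):

```python
import glob
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical Oracle target; credentials and DSN are placeholders.
engine = create_engine("oracle+cx_oracle://user:password@host:1521/?service_name=csi")

frames = []
for path in glob.glob("quotes/*.xlsx"):
    # Sheet name and column layout are assumptions about the workbooks.
    df = pd.read_excel(path, sheet_name="Rates")
    df["source_file"] = path          # keep provenance for later audits
    frames.append(df)

catalog = pd.concat(frames, ignore_index=True)
catalog.to_sql("express_rates", con=engine, if_exists="append", index=False)
```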

Keywords: DHL, solution design, ORACLE, EXCEL

Procedia PDF Downloads 394
9493 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy

Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş

Abstract:

Table olive is a valuable product, especially in Mediterranean countries, and is usually consumed after some fermentation process. Defects that occur naturally or as a result of an impact while olives are still fresh may become more distinct after the processing period. Defective olives are not desired in either the table olive or olive oil industries, as they affect final product quality and reduce market prices considerably. It is therefore critical to sort table olives before, or even after, processing according to their quality and surface defects. However, manual sorting has many drawbacks, such as high expense, subjectivity, tediousness and inconsistency. The quality criteria for green olives were color and freedom from mechanical defects, wrinkling, surface blemishes and rotting. This study aimed to classify fresh table olives using different classifiers and NIR spectroscopy readings and to compare the classifiers. For this purpose, green (Ayvalik variety) olives were classified based on their surface features as defect-free, with bruise defect, or with fly defect, using FT-NIR spectroscopy and classification algorithms such as artificial neural networks, ident and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for the spectral measurements. The spectrometer was equipped with InGaAs detectors (internal TE-InGaAs for reflectance and external RT-InGaAs for transmittance) and a 20-watt high-intensity tungsten-halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261) covering the wavelengths between 780-2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. The resolution was 8 cm⁻¹ for both spectral measurement modes. Instrument control was performed using OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification was performed using three classifiers: backpropagation neural networks, and the ident and cluster classification algorithms. For these classification applications, the Neural Network toolbox in Matlab and the ident and cluster modules in the OPUS software were used. Classifications were performed considering different scenarios: two quality conditions at once (good vs. bruised, good vs. fly defect) and three quality conditions at once (good, bruised and fly defect). Two spectrometer reading modes were used in the classification applications: reflectance and transmittance. Classification results obtained using the artificial neural network algorithm in discriminating good olives from bruised olives, from olives with fly defect, and from the olive group including both bruised and fly-defected olives had success rates respectively ranging between 97 and 99%, 61 and 94%, and 58.67 and 92%. On the other hand, classification results obtained for discriminating good olives from bruised ones and good olives from fly-defected olives using the ident method ranged between 75-97.5% and 32.5-57.5%, respectively; results obtained for the same classification applications using the cluster method ranged between 52.5-97.5% and 22.5-57.5%.
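
As a rough modern analogue of the backpropagation-network classifier used here (the study used Matlab's Neural Network toolbox; the hidden-layer size, scaling and split below are arbitrary assumptions), a scikit-learn sketch:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: NIR spectra, shape (n_olives, n_wavelengths); y: labels such as
# "good", "bruised", "fly defect". Data loading is omitted.
def train_olive_classifier(X: np.ndarray, y: np.ndarray):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)   # classifier and test accuracy
```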

Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance

Procedia PDF Downloads 232
9492 Development of Programmed Cell Death Protein 1 Pathway-Associated Prognostic Biomarkers for Bladder Cancer Using Transcriptomic Databases

Authors: Shu-Pin Huang, Pai-Chi Teng, Hao-Han Chang, Chia-Hsin Liu, Yung-Lun Lin, Shu-Chi Wang, Hsin-Chih Yeh, Chih-Pin Chuu, Jiun-Hung Geng, Li-Hsin Chang, Wei-Chung Cheng, Chia-Yang Li

Abstract:

The emergence of immune checkpoint inhibitors (ICIs) targeting proteins like PD-1 and PD-L1 has changed the treatment paradigm of bladder cancer. However, not all patients benefit from ICIs, with some experiencing early death. There is a significant need for biomarkers associated with the PD-1 pathway in bladder cancer. Current biomarkers focus on tumor PD-L1 expression, but a more comprehensive understanding of PD-1-related biology is needed. Our study, employing a comprehensive bioinformatics strategy, has developed a seven-gene risk score panel that could serve as a potential prognostic and predictive biomarker for bladder cancer. This panel incorporates the FYN, GRAP2, TRIB3, MAP3K8, AKT3, CD274, and CD80 genes. Additionally, we examined the relationship between this panel and immune cell function, utilizing validated tools such as ESTIMATE, TIDE, and CIBERSORT. Our seven-gene panel was found to be significantly associated with bladder cancer survival in two independent cohorts. The panel was also significantly correlated with tumor-infiltrating lymphocytes, immune scores, and tumor purity, factors previously reported to have clinical implications for ICIs. The findings suggest the potential of a PD-1 pathway-based transcriptomic panel as a prognostic and predictive biomarker in bladder cancer, which could help optimize treatment strategies and improve patient outcomes.
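
Risk-score panels of this kind are typically a weighted sum of per-gene expression, with weights taken from a multivariable Cox model; a sketch over the seven genes named above (the beta values here are placeholders, NOT the study's fitted coefficients):

```python
import numpy as np
import pandas as pd

genes = ["FYN", "GRAP2", "TRIB3", "MAP3K8", "AKT3", "CD274", "CD80"]
beta = pd.Series(np.zeros(len(genes)), index=genes)  # substitute fitted betas

def risk_score(expr: pd.DataFrame) -> pd.Series:
    """expr: samples x genes matrix of (log-normalized) expression."""
    return expr[genes].mul(beta, axis=1).sum(axis=1)

# Patients are then commonly dichotomized at the median score into
# high-risk vs. low-risk groups for Kaplan-Meier survival comparison.
```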

Keywords: bladder cancer, programmed cell death protein 1, prognostic biomarker, immune checkpoint inhibitors, predictive biomarker

Procedia PDF Downloads 61
9491 Numerical Tools for Designing Multilayer Viscoelastic Damping Devices

Authors: Mohammed Saleh Rezk, Reza Kashani

Abstract:

Auxiliary damping has gained popularity in recent years, especially in structures such as mid- and high-rise buildings. Distributed damping systems (typically viscous and viscoelastic) and reactive damping systems (such as tuned mass dampers) are the two types of damping choices for such structures. Distributed VE dampers are normally configured as braces or damping panels, which are engaged through the relatively small movements between structural members when the structure sways under wind or earthquake loading. In addition to being used as stand-alone dampers in distributed damping applications, VE dampers can also be incorporated into the suspension element of tuned mass dampers (TMDs). In this study, analytical and numerical tools are developed for the modeling and design of multilayer viscoelastic damping devices to be used in damping the vibration of large structures. Considering the limitations of analytical models for the synthesis and analysis of realistic, large, multilayer VE dampers, the emphasis of the study has been on numerical modeling using the finite element method. To verify the finite element models, a two-layer VE damper using a ½-inch synthetic viscoelastic urethane polymer was built and tested, and the measured parameters were compared with the numerically predicted ones. The numerically predicted and experimentally evaluated damping and stiffness of the test VE damper were in very good agreement. The effectiveness of VE dampers in adding auxiliary damping to larger structures is demonstrated numerically by incorporating one such damper, as a chevron brace, into the model of a massive frame subject to an abrupt lateral load. A comparison of the responses of the frame to the aforementioned load, without and with the VE damper, clearly shows the efficacy of the damper in lowering the extent of frame vibration.
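
For context, a VE damper is often idealized as a Kelvin-Voigt element, a storage stiffness in parallel with a dashpot; under harmonic motion at frequency ω this gives (a textbook idealization, not necessarily the formulation used in the paper's finite element model):

```
F(t) = k' x(t) + c x'(t),      c = η k' / ω
```

where k' is the storage stiffness and η = G''/G' is the material loss factor relating the loss and storage moduli of the viscoelastic layer.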

Keywords: viscoelastic, damper, distributed damping, tuned mass damper

Procedia PDF Downloads 91
9490 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications, which are potential causes of claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impact of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Project activities are often executed in fast-tracking mode in an attempt to respect the completion date, but the acceleration of project execution and the resulting rework can entail significant costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, even though several research projects have addressed this subject. The proposed models, however, have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity present when a project change occurs. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, the presence of a large number of activities leads to a much lower productivity loss than a small number of activities: the speed of productivity reduction for 30-job projects is about 25 percent faster than for 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity; the sooner the change occurs, the lower the productivity of the labor force. The availability of resources likewise impacts the productivity of a project when a change is implemented, with a higher loss of productivity when the amount of resources is restricted.

Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation

Procedia PDF Downloads 228
9489 Characteristics of Inclusive Circular Business Models in Social Entrepreneurship

Authors: Svitlana Yermak, Olubukola Aluko

Abstract:

The purpose of this study was a literature review on the topic of social entrepreneurship, a review of new trends and best practices, and a study of existing inclusive business models and their interaction with the principles of the circular economy, for possible implementation in Ukrainian practice in wartime and post-war times under conditions of scarce resources. Three research questions were identified and substantiated: to determine the characteristics of social entrepreneurship and consider its features in Ukraine and the UK; to highlight the criteria for inclusion in social entrepreneurship and its legal support; and to explore examples of existing inclusive circular business models to illustrate how the two concepts may be combined. A detailed review of the literature selected from the Scopus and Web of Science databases was carried out. The study revealed the defining features of social entrepreneurship, the main ones being doing business and making a profit, combined with a social orientation that is prescribed in the constituent documents of the enterprise immediately upon its creation. The characteristics of social entrepreneurship in the UK and Ukraine are considered. It has been established that in the UK, social entrepreneurship is clearly regulated by the state, with special legislative norms and support programs, in contrast to Ukraine, where these processes are only partially regulated. The study identified the main criteria for inclusion in inclusive circular business models: economic (sustainability and efficiency, job creation and economic growth, promotion of local development), social (accessibility, equity and fairness, inclusion and participation), and resources, in their interconnection. It is substantiated that the resource criterion is especially important for this type of business model, as it provides for the efficient, sustainable, and cyclical use of resources. It was concluded that the principles of the circular economy do not contradict but, on the contrary, complement and expand the inclusive business models on which social entrepreneurship is based.

Keywords: social entrepreneurship, inclusive business models, circular economy, inclusion criteria

Procedia PDF Downloads 81
9488 Developing a Cultural Policy Framework for Small Towns and Cities

Authors: Raymond Ndhlovu, Jen Snowball

Abstract:

It has long been known that the Cultural and Creative Industries (CCIs) have the potential to aid in the physical, social and economic renewal and regeneration of towns and cities, hence their importance for regional development. The CCIs can act as a catalyst for activity and investment in an area because the 'consumption' of cultural activities leads to the use of other, non-cultural services, for example, hospitality businesses such as restaurants and bars, as well as public transport. 'Consumption' of cultural activities also leads to employment creation and diversification. However, CCIs tend to be clustered, especially around large cities. There is, moreover, a case for the development of CCIs around smaller towns and cities, because they do not rely on high-technology inputs and long supply chains, and their direct link to rural and isolated places makes them vital to regional development. However, there is currently little research on how to craft cultural policy for regions with smaller towns and cities. Using the Sarah Baartman District (SBDM) in South Africa as an example, this paper describes the process of developing cultural policy for a region that has potential, and existing, cultural clusters but currently no single coherent policy relating to CCI development. The SBDM was chosen as a case study because it has no large cities but has some CCI clusters and has identified them as potential drivers of local economic development. The process of developing cultural policy is discussed in stages: identification of the resources present, including human resources and soft and hard infrastructure; identification of clusters; analysis of CCI labour markets and ownership patterns; opportunities and challenges from the point of view of CCIs and other key stakeholders; alignment of regional policy aims with provincial and national policy objectives; and, finally, the design and implementation of a regional cultural policy.

Keywords: cultural and creative industries, economic impact, intrinsic value, regional development

Procedia PDF Downloads 216
9487 Synthesis of Electrospun Polydimethylsiloxane (PDMS)/Polyvinylidene Fluoriure (PVDF) Nanofibrous Membranes for CO₂ Capture

Authors: Wen-Wen Wang, Qian Ye, Yi-Feng Lin

Abstract:

Carbon dioxide emissions are expected to increase continuously, resulting in climate change and global warming. As a result, CO₂ capture has attracted a large amount of research attention. Among the various CO₂ capture methods, membrane technology has proven to be highly efficient in capturing CO₂ because it can be scaled up and offers low energy consumption and small area requirements for gas separation. Various nanofibrous membranes were successfully prepared by a simple electrospinning process. This study uses a membrane contactor, which combines chemical absorption with a membrane process, for post-combustion CO₂ capture. In the membrane contactor system, the highly porous and water-repellent nanofibrous membranes were used as a gas-liquid interface for CO₂ absorption. In this work, we successfully prepared porous polyvinylidene fluoride (PVDF) membranes by electrospinning, and the as-prepared water-repellent PVDF porous membranes were used for the CO₂ capture application. However, because the pristine PVDF nanofibrous membranes were wetted by the amine absorbents, resulting in a decrease in the CO₂ absorption flux, hydrophobic polydimethylsiloxane (PDMS) materials were added to the PVDF nanofibrous membranes to improve their solvent resistance. To further increase the hydrophobicity and the CO₂ absorption flux, more hydrophobic surfaces of the PDMS/PVDF nanofibrous membranes were obtained by grafting fluoroalkylsilane (FAS) onto the membrane surface, and the highest CO₂ absorption flux was reached after four rounds of FAS modification. The PDMS/PVDF nanofibrous membranes with 60 wt% PDMS addition withstood long, continuous CO₂ absorption and regeneration experiments, demonstrating that the as-prepared PDMS/PVDF nanofibrous membranes could potentially be used for large-scale CO₂ absorption during the post-combustion process in power plants.

Keywords: CO₂ capture, electrospinning process, membrane contactor, nanofibrous membranes, PDMS/PVDF

Procedia PDF Downloads 262
9486 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings, resulting in various kinds of ambiguities such as lexical, syntactic, semantic, anaphoric and referential ambiguities. This study focuses mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words for training and testing: lemma adds generality, and POS adds word properties to the token. We have designed a novel method to create an affinity matrix that gives the affinity between any pair of lemma_POS tokens (a token where the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of a target word using the affinity/similarity values are devised. Each contextual token contributes to the sense of the target word with some value, and whichever sense gets the higher value becomes the sense of the target word. Contextual tokens thus play a key role in creating the sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and lucidity of explication, in contrast to contemporary deep learning models characterized by intricacy, time-intensive processes, and challenging explication. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
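
As a toy illustration of building affinities between lemma_POS tokens (the paper's exact weighting scheme is not specified in the abstract; plain sentence-level co-occurrence counts are used here as an assumption):

```python
from collections import Counter
from itertools import combinations

def build_affinity(sentences):
    """Co-occurrence affinity between lemma_POS tokens (illustrative).

    sentences: iterable of token lists such as
               ["bank_NOUN", "river_NOUN", "flow_VERB"].
    Affinity here is a simple symmetric co-occurrence count.
    """
    pair_counts = Counter()
    for toks in sentences:
        for a, b in combinations(sorted(set(toks)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

aff = build_affinity([["bank_NOUN", "river_NOUN", "flow_VERB"],
                      ["bank_NOUN", "money_NOUN", "deposit_VERB"]])
print(aff[("bank_NOUN", "river_NOUN")])  # 1
```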

Keywords: word sense disambiguation (wsd), contextual sense model (csm), most frequent sense (mfs), part of speech (pos), natural language processing (nlp), oov (out of vocabulary), lemma_pos (a token where lemma and pos of word are joined by underscore), information retrieval (ir), machine translation (mt)

Procedia PDF Downloads 88
9485 Enhancing the Flotation of Fine and Ultrafine Pyrite Particles Using Electrolytically Generated Bubbles

Authors: Bogale Tadesse, Krutik Parikh, Ndagha Mkandawire, Boris Albijanic, Nimal Subasinghe

Abstract:

It is well established that the floatability and selectivity of mineral particles are highly dependent on particle size. Generally, a particle size of 10 microns is considered the critical size below which both flotation selectivity and recovery decline sharply. It is widely accepted that the majority of ultrafine particles, including highly liberated valuable minerals, are lost to tailings during a conventional flotation process. This is highly undesirable, particularly in the processing of finely disseminated complex and refractory ores, where fine grinding is required to liberate the valuable minerals. In addition, the continuing decline in ore grades worldwide necessitates intensive processing of low-grade mineral deposits. Recent advances in comminution allow the economic grinding of particles down to 10 micron sizes, enhancing the probability of liberating locked minerals from low-grade ores. Thus, it is timely to improve the flotation of fine and ultrafine particles in order to reduce the amount of valuable minerals lost as slimes. It is believed that the use of fine bubbles in flotation increases the bubble-particle collision efficiency and hence the flotation performance. Electroflotation, where bubbles are generated by the electrolytic breakdown of water into oxygen and hydrogen gases, produces extremely finely dispersed gas bubbles with dimensions varying from 5 to 95 microns, significantly smaller than those found in conventional flotation (> 600 microns). In this study, microbubbles generated by the electrolysis of water were injected into a bench-top flotation cell to assess the performance of electroflotation in enhancing the flotation of fine and ultrafine pyrite particles of sizes ranging from 5 to 53 microns. The design of the cell and the results from the optimization of process variables such as current density, pH, percent solids and particle size will be presented at this conference.
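
The size effect motivating electroflotation can be illustrated with the Stokes-regime form of the Yoon-Luttrell collision model, in which collision efficiency scales as the square of the particle-to-bubble diameter ratio. The bubble and particle sizes below are illustrative, not experimental values from this study.

```python
# Back-of-envelope illustration of why fine bubbles help: in the
# Yoon-Luttrell collision model the efficiency scales as (d_p/d_b)^2.
# The Stokes-regime form E_c = (3/2)*(d_p/d_b)^2 is used here for simplicity;
# sizes are illustrative, not experimental values from the paper.

def collision_efficiency_stokes(d_particle_um: float, d_bubble_um: float) -> float:
    return 1.5 * (d_particle_um / d_bubble_um) ** 2

d_p = 10.0  # ultrafine pyrite particle, microns
for d_b in (50.0, 600.0):  # electrolytic vs. conventional bubble diameter
    print(f"d_b = {d_b:5.0f} um -> E_c = {collision_efficiency_stokes(d_p, d_b):.4f}")

# A 50 um bubble gives a collision efficiency ~144x that of a 600 um bubble
# for the same 10 um particle, consistent with the motivation for electroflotation.
```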

Keywords: electroflotation, fine bubbles, pyrite, ultrafine particles

Procedia PDF Downloads 312
9484 Water Quality, Safety and Drowning Prevention to Preschool Children in Sub-Saharan Africa

Authors: Amos King'ori Githu

Abstract:

Water safety is crucial for all ages, but particularly for children. In the past decade, preschool institutions in Sub-Saharan Africa have increasingly included swimming among their co-curricular activities. However, these countries face challenges in adopting the frameworks, staffing, and resources needed to strengthen water safety, water quality, and drowning prevention, hence the focus of this research. Drowning is a leading cause of injury-related deaths among children: globally, the highest drowning rates occur among children aged 1-4 years and 5-9 years. Preschool children stand an even higher risk of drowning because they are active, eager, and curious to explore their environment; if not supervised closely in or around water, they can drown quickly in just a few inches of water. This empirical review therefore focuses on the identification, assessment, and analysis of water safety efforts to curb drowning among children, and on assessing water quality to mitigate contamination that may pose infection risks to children. In addition, it outlines the use of behavioral theories and evaluation frameworks to guide these efforts. Ten databases were searched for relevant peer-reviewed articles, and five were selected for the final review, so this research relied extensively on secondary data on curbing waterborne infections and drowning deaths among children. The findings suggest that support should go to interventions that adopt an array of strategies; are guided by planning, theory, and evaluation frameworks; and are varied in intervention design, evaluation, and delivery methodology. Finally, this approach will offer solid evidence that can be shared to guide future practices and policies in preschools on child safety and drowning prevention.

Keywords: water quality and safety, drowning prevention, preschool children, Sub-Saharan Africa, supervision

Procedia PDF Downloads 41
9483 A Simple Model for Solar Panel Efficiency

Authors: Stefano M. Spagocci

Abstract:

The efficiency of photovoltaic panels can be calculated with software packages such as RETScreen, which allow design engineers to take financial as well as technical considerations into account. RETScreen is interfaced with meteorological databases, so that efficiency calculations can be carried out realistically. The author has recently contributed to the development of solar modules with accumulation capability and an embedded water purifier, aimed at off-grid users such as those in developing countries. The software packages examined do not allow ancillary equipment to be taken into account, hence the decision to implement a technical and financial model of the system. Rather than re-implementing the quite sophisticated RETScreen model (a mathematical description of which is in any case not publicly available), it proved possible to simplify it drastically, including the meteorological factors, which RETScreen presents in numerical form. The day-by-day efficiency of a photovoltaic solar panel was parametrized as the product of factors expressing, respectively, daytime duration, solar right ascension motion, solar declination motion, cloudiness and temperature. For the sun-motion-dependent factors, positional astronomy formulae, simplified by the author, were employed. Meteorology-dependent factors were fitted by simple trigonometric functions using numerical data supplied by RETScreen. The accuracy of the model was tested against the predictions of RETScreen; the two agreed to within 11%. In conclusion, this study resulted in a model that can be easily implemented in a spreadsheet (and is thus easily manageable by non-specialist personnel) or in more sophisticated software packages. The model was used in a number of design exercises concerning photovoltaic solar panels and ancillary equipment such as the above-mentioned water purifier.
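
A minimal sketch of such a product-of-factors model is given below, assuming standard positional astronomy formulae for declination and day length; the cloudiness and temperature factors and all numerical values are illustrative placeholders, not the author's fitted functions.

```python
# Minimal sketch of a multiplicative day-by-day model: daily panel output =
# nominal output x factors for daytime duration, solar declination, cloudiness
# and temperature. Functional forms and coefficients are illustrative
# placeholders, not the author's fits.
import math

def declination_deg(day_of_year: int) -> float:
    # Standard approximate formula for solar declination (Cooper's equation).
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def day_length_hours(latitude_deg: float, day_of_year: int) -> float:
    # Sunrise-equation day length.
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg(day_of_year))
    cos_h = -math.tan(lat) * math.tan(dec)
    cos_h = max(-1.0, min(1.0, cos_h))  # clamp for polar day/night
    return 2.0 * math.degrees(math.acos(cos_h)) / 15.0

def daily_output_kwh(p_nominal_kw, latitude_deg, day, cloud_factor, temp_factor):
    # Product-of-factors model: each factor in [0, 1] scales the nominal output.
    return p_nominal_kw * day_length_hours(latitude_deg, day) * cloud_factor * temp_factor

# Illustrative: a 1 kW panel at 45 deg N on day 172 (near the summer solstice),
# with 70% clear sky and 5% thermal derating.
print(f"{daily_output_kwh(1.0, 45.0, 172, 0.70, 0.95):.1f} kWh")
```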

Keywords: clean energy, energy engineering, mathematical modelling, photovoltaic panels, solar energy

Procedia PDF Downloads 34
9482 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large Eddy Simulation (LES) has been used extensively in turbulence research. LES resolves the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. Among the available SFS models, the deconvolution model has proven successful in LES of engineering and geophysical flows. Nevertheless, a thorough investigation of how sub-filter scale dynamics and filter anisotropy affect SFS modeling accuracy has been lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES. Firstly, the accuracy of direct deconvolution models (DDM) is evaluated with respect to sub-filter scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of the FGR becomes evident, as it plays a critical role in controlling errors for precise SFS stress prediction. When the FGR is set to 1, the DDM models struggle to faithfully reconstruct the SFS stress due to inadequate resolution of SFS dynamics. Prediction accuracy improves when the FGR is set to 2, leading to accurate reconstruction of the SFS stress, except for cases involving the Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Secondly, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. Using the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, filter aspect ratios (AR) ranging from 1 to 16 are examined. The results emphasize the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients, exceeding 90%, are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models, although these correlations decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. As filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
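
For illustration, the sketch below shows direct deconvolution in its simplest one-dimensional form: an invertible Gaussian filter applied in Fourier space is undone by dividing by its transfer function, and the SFS stress is then evaluated from the deconvolved field. The grid size, filter width and test signal are arbitrary choices, not the configuration used in this study.

```python
# One-dimensional sketch of direct deconvolution with an invertible Gaussian
# filter: filtering is multiplication by G(k) in Fourier space, so the
# unfiltered field is recovered by dividing by G(k). The SFS stress is then
# tau = bar(u*u) - bar(u)*bar(u), evaluated with the deconvolved field.
# Grid size, filter width and the test signal are illustrative choices.
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi  # integer wavenumbers

delta = 8.0 * (2.0 * np.pi / n)          # filter width (a few grid cells)
G = np.exp(-(k ** 2) * delta ** 2 / 24)  # Gaussian filter transfer function

u = np.sin(x) + 0.5 * np.sin(7 * x) + 0.2 * np.sin(23 * x)  # synthetic field
u_bar = np.real(np.fft.ifft(G * np.fft.fft(u)))             # filtered field

# Direct deconvolution: divide by G(k). The Gaussian never vanishes exactly,
# but tiny values are clipped for numerical safety on coarse grids.
u_star = np.real(np.fft.ifft(np.fft.fft(u_bar) / np.maximum(G, 1e-8)))
print("max reconstruction error:", np.max(np.abs(u_star - u)))  # ~ round-off

def gfilter(f):
    return np.real(np.fft.ifft(G * np.fft.fft(f)))

# Approximate SFS stress from the deconvolved field.
tau_sfs = gfilter(u_star * u_star) - gfilter(u_star) * gfilter(u_star)
```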

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 57