Search results for: linear complexity
3899 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion
Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong
Abstract:
The detection of curvilinear structures often plays an important role in the analysis of images. In particular, it is considered a crucial step for the diagnosis of chronic respiratory diseases to localize the fissures in chest CT imagery, where the lung is divided into five lobes by the fissures, which are characterized by linear features in appearance. However, the characteristic linear features of the fissures are often subtle due to the high intensity variability, pathological deformation or image noise involved in the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. Thus, it is desirable to enhance the linear features present in the chest CT images so that the distinctiveness in the delineation of the lobes is improved. We propose a recursive diffusion process that prefers coherent features based on the analysis of the structure tensor in an anisotropic manner. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant structure of the features, leading to the degradation of the characteristic power in the feature space. Thus, it is required to take into consideration the local structure of the features in scale and direction when computing the structure tensor. We apply an anisotropic diffusion in consideration of the scale and direction of the features in the computation of the structure tensor, which subsequently provides the geometrical structure of the features by its eigenanalysis, which in turn determines the shape of the anisotropic diffusion kernel. The recursive application of the anisotropic diffusion, with a kernel whose shape is derived from the structure tensor, leads to an anisotropic scale-space where the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. The recursive interaction between the anisotropic diffusion based on the geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space where the geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the boundaries of the lobes. It is shown that our algorithm yields precise detection of the fissures while overcoming the subtlety in defining the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in chest CT in terms of the false positive and the true positive measures. The receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion). Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor
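A minimal numerical sketch of the kind of structure-tensor-driven, recursively applied anisotropic diffusion described above, assuming NumPy/SciPy; the explicit update scheme, the diffusivity function and all parameter values are illustrative assumptions rather than the authors' exact algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def diffusion_step(u, tau=0.1, sigma=1.0, rho=2.0, alpha=0.01, C=1e-4):
    # Structure tensor J of the (smoothed) image, regularized by Gaussian filtering
    ux = gaussian_filter(u, sigma, order=(0, 1))   # derivative along axis 1 ("x")
    uy = gaussian_filter(u, sigma, order=(1, 0))   # derivative along axis 0 ("y")
    Jxx = gaussian_filter(ux * ux, rho)
    Jxy = gaussian_filter(ux * uy, rho)
    Jyy = gaussian_filter(uy * uy, rho)
    # Eigenvalues and orientation of the 2x2 tensor in closed form
    d = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2)
    mu1, mu2 = (Jxx + Jyy + d) / 2, (Jxx + Jyy - d) / 2
    theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)   # dominant gradient orientation
    # Diffusivities: weak across coherent (curvilinear) structures, strong along them
    lam1 = np.full(u.shape, alpha)
    lam2 = alpha + (1 - alpha) * np.exp(-C / ((mu1 - mu2) ** 2 + 1e-12))
    c, s = np.cos(theta), np.sin(theta)
    Dxx = lam1 * c * c + lam2 * s * s
    Dxy = (lam1 - lam2) * c * s
    Dyy = lam1 * s * s + lam2 * c * c
    # Explicit update: u <- u + tau * div(D grad u)
    gy, gx = np.gradient(u)
    jx, jy = Dxx * gx + Dxy * gy, Dxy * gx + Dyy * gy
    return u + tau * (np.gradient(jx, axis=1) + np.gradient(jy, axis=0))

def recursive_anisotropic_diffusion(img, n_iter=20, **kw):
    u = img.astype(float)
    for _ in range(n_iter):   # the tensor is recomputed from the diffused image each pass
        u = diffusion_step(u, **kw)
    return u
```

Repeating the step recomputes the structure tensor from the already-diffused image at every pass, which is the recursive interaction between diffusion and tensor estimation that the abstract refers to.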
Procedia PDF Downloads 232
3898 The Effect of Linear Low-Density Polyethylene Cross-Contamination by Other Plastic Types on Bitumen Modification
Authors: Nioushasadat Haji Seyed Javadi, Ailar Hajimohammadi, Nasser Khalili
Abstract:
Currently, the recycling of plastic wastes has been the subject of much research attention, especially in pavement constructions, where virgin polymers can be replaced by recycled plastics for asphalt binder modification. Among the plastic types, recycled linear low-density polyethylene (RLLDPE) has been one of the common and largely available plastics for bitumen modification. However, it is important to note that during the recycling process, LLDPE can easily be contaminated with other plastic types, especially with low-density polyethylene (LDPE), high-density polyethylene (HDPE), and polypropylene (PP). The cross-contamination of LLDPE with other plastics lowers its quality and, consequently, can affect the asphalt modification process. This study aims to assess the effect of LLDPE cross-contamination on bitumen modification. To do so, samples of bitumen modified with LLDPE and with blends of LLDPE with LDPE, HDPE, and PP were prepared and compared through physical and rheological evaluations. The experimental tests, including softening point, penetration, viscosity at 135 °C, and dynamic shear rheometer, were conducted. The results indicated that the effect of cross-contamination on softening point and rutting resistance was negligible. On the other hand, penetration and viscosity were highly impacted. The results also showed that among the plastic types contaminating LLDPE, PP had the highest influence, in comparison with HDPE and LDPE, on changing the properties of the LLDPE-modified bitumen. Keywords: recycled polyethylene, polymer cross-contamination, waste plastic, bitumen, rutting resistance
Procedia PDF Downloads 127
3897 Normalized Compression Distance Based Scene Alteration Analysis of a Video
Authors: Lakshay Kharbanda, Aabhas Chauhan
Abstract:
In this paper, an application of Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos is presented. Several research groups have been developing methods to perform image classification using NCD, a computable approximation to Normalized Information Distance (NID), by studying the degree of similarity in images. The timeframes where significant aberrations between the frames of a video have occurred have been identified by obtaining a threshold NCD value, using two compressors, LZMA and BZIP2, and by defining scene alterations using Pixel Difference Percentage metrics. Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error
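A minimal sketch of the NCD-based frame comparison described above, assuming Python's built-in lzma and bz2 compressors; the frame representation, the threshold value and the Pixel Difference Percentage definition are illustrative assumptions:

```python
import lzma, bz2
import numpy as np

def ncd(x: bytes, y: bytes, compress=lzma.compress) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def scene_changes(frames, threshold=0.95, compress=lzma.compress):
    """frames: list of grayscale frames as NumPy arrays; returns indices where NCD exceeds the threshold."""
    changes = []
    for i in range(1, len(frames)):
        a, b = frames[i - 1].tobytes(), frames[i].tobytes()
        if ncd(a, b, compress) > threshold:
            changes.append(i)
    return changes

def pixel_diff_percentage(a, b, tol=10):
    # Complementary metric: share of pixels whose absolute difference exceeds a tolerance
    return 100.0 * np.mean(np.abs(a.astype(int) - b.astype(int)) > tol)
```

Swapping `lzma.compress` for `bz2.compress` reproduces the two-compressor comparison mentioned in the abstract.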
Procedia PDF Downloads 340
3896 2D Numerical Modeling of Ultrasonic Measurements in Concrete: Wave Propagation in a Multiple-Scattering Medium
Authors: T. Yu, L. Audibert, J. F. Chaix, D. Komatitsch, V. Garnier, J. M. Henault
Abstract:
Linear Ultrasonic Techniques play a major role in Non-Destructive Evaluation (NDE) for civil engineering structures in concrete since they can meet operational requirements. Interpretation of ultrasonic measurements could be improved by a better understanding of ultrasonic wave propagation in a multiple scattering medium. This work aims to develop a 2D numerical model of ultrasonic wave propagation in a heterogeneous medium, like concrete, integrating the multiple scattering phenomena in the SPECFEM software. The coherent field of multiple scattering is obtained by averaging numerical wave fields, and it is used to determine the effective phase velocity and attenuation corresponding to an equivalent homogeneous medium. First, this model is applied to one scattering element (a cylinder) in a homogeneous medium in a linear-elastic system, and it is validated through comparison with the analytical solution. Then, some cases of multiple scattering by a set of randomly located cylinders or polygons are simulated to perform parametric studies on the influence of frequency and scatterer size, concentration, and shape. Also, the effective properties are compared with the predictions of the Waterman-Truell model to verify its validity. Finally, the mortar viscoelastic behavior is introduced in the simulation in order to consider the dispersion and the attenuation due to the porosity included in the cement paste. In the future, different steps will be developed: comparisons with experimental results, the interpretation of NDE measurements, and the optimization of NDE parameters before an auscultation. Keywords: attenuation, multiple-scattering medium, numerical modeling, phase velocity, ultrasonic measurements
Procedia PDF Downloads 275
3895 Assessment of the Efficacy of Routine Medical Tests in Screening Medical Radiation Staff in Shiraz University of Medical Sciences Educational Centers
Authors: Z. Razi, S. M. J. Mortazavi, N. Shokrpour, Z. Shayan, F. Amiri
Abstract:
Long-term exposure to low doses of ionizing radiation occurs in radiation health care workplaces. Although doses in the health professions are generally very low, there are still matters of concern. The radiation safety program promotes occupational radiation safety through accurate and reliable monitoring of radiation workers in order to effectively manage radiation protection. To achieve this goal, it has become mandatory to implement periodic health examinations. As a result, based on hematological alterations, working populations with a common occupational radiation history are screened. This paper calls into question the effectiveness of blood component analysis as a screening program, which is mandatory for medical radiation workers in some countries. This study details the distribution and trends of changes in blood components, including white blood cells (WBCs), red blood cells (RBCs) and platelets, as well as the received cumulative doses from occupational radiation exposure. This study was conducted among 199 participants and 100 control subjects at the medical imaging departments of the central hospital of Shiraz University of Medical Sciences during the years 2006–2010. Descriptive and analytical statistics, with a P-value < 0.05 considered statistically significant, were used for data analysis. The results of this study show that there is no significant difference between the radiation workers and controls regarding WBC and platelet counts during the 4 years. Also, we have found no statistically significant difference between the two groups with respect to RBCs. Besides, no statistically significant difference was observed with respect to RBCs with regard to gender, which was analyzed separately because of the lower reference range for normal RBC levels in women compared to men. Moreover, in a separate evaluation of WBC count, the personnel's working experience and their annual exposure dose, the findings showed no linear correlation between the three variables. Since the hematological findings were within the range of control levels, it can be concluded that the radiation dosage (which was not more than 7.58 mSv in this study) had been too small to stimulate any quantifiable change in medical radiation workers' blood counts. Thus, the use of a more accurate method for the screening program, based on the working profile of the radiation workers and their accumulated dose, is suggested. In addition, the complexity of radiation-induced functions and the influence of various factors on blood count alteration should be taken into account. Keywords: blood cell count, mandatory testing, occupational exposure, radiation
Procedia PDF Downloads 461
3894 Thermoluminescent Response of Nanocrystalline BaSO4:Eu to 85 MeV Carbon Beams
Authors: Shaila Bahl, S. P. Lochab, Pratik Kumar
Abstract:
Nanotechnology and nanomaterials have attracted researchers from different fields, especially from the field of luminescence. Recent studies on various luminescent nanomaterials have shown their relevance in the dosimetry of ionizing radiations for the measurement of high doses using the Thermoluminescence (TL) technique, where the conventional microcrystalline phosphors saturate. Ion beams have been used for diagnostic and therapeutic purposes due to their favorable profile of dose deposition at the end of the range, known as the Bragg peak. When dealing with human beings, doses from these beams need to be measured with great precision and accuracy. Hence, detailed investigations of suitable thermoluminescent dosimeters (TLDs) for dose verification in ion beam irradiation are required. This paper investigates the TL response of nanocrystalline BaSO4 doped with Eu to an 85 MeV carbon beam. The synthesis was done using the co-precipitation technique by mixing barium chloride and ammonium sulphate solutions. To investigate the crystallinity and particle size, analytical techniques such as X-ray diffraction (XRD) and transmission electron microscopy (TEM) were used, which revealed an average particle size of 45 nm with an orthorhombic structure. Samples in pellet form were irradiated by the 85 MeV carbon beam in the fluence range of 1×10¹⁰–5×10¹³. TL glow curves of the irradiated samples show two prominent glow peaks at around 460 K and 495 K. The TL response is linear up to a fluence of 1×10¹³, after which saturation is observed. The wider linear TL response of nanocrystalline BaSO4:Eu and its low fading make it a superior candidate as a dosimeter for detecting the doses of carbon beams. Keywords: radiation, dosimetry, carbon ions, thermoluminescence
Procedia PDF Downloads 288
3893 Communication Strategies of Russian-English Asymmetric Bilinguals Given Insufficient Language Faculty
Authors: Varvara Tyurina
Abstract:
In the age of globalization, Internet communication as a new format of interaction has become an integral part of our daily routine. The Internet environment allows for new conditions and provides participants in a communication act with extra communication tools which can be used on Internet forums or in chat rooms. As a result, communicants tend to alter their behavior patterns in contrast to those practiced in live communication. It is not yet clear which communication strategies participants in Internet communication abide by and what determines their choices. Given the continually changing environment of a forum or a chat, the behavior of a communicant can be interpreted in terms of autopoiesis theory, which sees adaptation as the major tool for coexistence between the living system and its niche. Each communication act is seen as an interaction between the communicant (i.e., the living system) and the overall environment of the forum (i.e., the niche) rather than with one particular interlocutor. When communicating via the Internet, participants are believed to aim at reaching a balance between themselves and the environment of a forum or a chat. The research focuses on unveiling the adaptation strategies employed by a communicant in particular cases and looks into the reasons they are employed. There is a correlation between the language faculty of the communicants and the strategies they opt for when communicating on Internet forums and in chat rooms. The research included an experiment with a sample of Russian-English asymmetric bilinguals aged 16-25. Respondents were given two texts of equivalent content but of different language complexity. They had to respond to the texts as if they were making a reciprocal comment on a forum. It was revealed that when communicants realize that their language faculty is not sufficient to understand the initial text, they tend to amend their communication strategy in order to maintain the balance with the niche (remain involved in the communication). The most common strategies for responding to a difficult-to-understand text were self-presentation, veiling of poor language faculty, and response evasion. The research has so far focused on a very narrow aspect of the correlation between language faculty and communication behavior, namely the syntactic and lexicological complexity of the initial texts. It is essential to conduct a series of experiments that dwell on other characteristics of the texts to determine the range of cases in which language faculty determines the choice of adaptation strategy. Keywords: adaptation, communication strategies, internet communication, verbal interaction, autopoiesis theory
Procedia PDF Downloads 362
3892 Brain-Computer Interface Based Real-Time Control of Fixed Wing and Multi-Rotor Unmanned Aerial Vehicles
Authors: Ravi Vishwanath, Saumya Kumaar, S. N. Omkar
Abstract:
Brain-computer interfacing (BCI) is a technology that is almost four decades old, and it was developed solely for the purpose of developing and enhancing the impact of neuroprosthetics. However, in recent times, with the commercialization of non-invasive electroencephalogram (EEG) headsets, the technology has seen a wide variety of applications like home automation, wheelchair control, vehicle steering, etc. One of the latest developed applications is the mind-controlled quadrotor unmanned aerial vehicle. These applications, however, do not require a very high-speed response and give satisfactory results when standard classification methods like Support Vector Machine (SVM) and Multi-Layer Perceptron (MLPC) are used. Issues are faced when there is a requirement for high-speed control in the case of fixed-wing unmanned aerial vehicles, where such methods are rendered unreliable due to the low speed of classification. Such an application requires the system to classify data at high speeds in order to retain the controllability of the vehicle. This paper proposes a novel method of classification which uses a combination of Common Spatial Paradigm and Linear Discriminant Analysis that provides an improved classification accuracy in real time. A non-linear SVM based classification technique has also been discussed. Further, this paper discusses the implementation of the proposed method on fixed-wing and VTOL unmanned aerial vehicles. Keywords: brain-computer interface, classification, machine learning, unmanned aerial vehicles
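A hedged sketch of spatially filtered EEG features followed by LDA, in the spirit of the pipeline described above; the spatial-filter step is written here as the standard Common Spatial Patterns construction, which is an assumption about what "Common Spatial Paradigm" refers to, and the array shapes, number of retained filters and random toy data are illustrative:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(X1, X2, n_pairs=3):
    """X1, X2: trials of shape (n_trials, n_channels, n_samples) for each class."""
    cov = lambda X: np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = cov(X1), cov(X2)
    vals, vecs = eigh(C1, C1 + C2)                   # generalized eigen-decomposition
    order = np.argsort(vals)
    idx = np.r_[order[:n_pairs], order[-n_pairs:]]   # most discriminative spatial filters
    return vecs[:, idx].T

def log_var_features(W, X):
    Z = np.einsum('fc,tcs->tfs', W, X)               # apply spatial filters to each trial
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Toy usage with random data standing in for band-pass filtered EEG epochs of two classes
rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((40, 8, 256)), rng.standard_normal((40, 8, 256))
W = csp_filters(X1, X2)
X = np.vstack([log_var_features(W, X1), log_var_features(W, X2)])
y = np.r_[np.zeros(40), np.ones(40)]
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Because both the filtering and LDA reduce to a few small matrix multiplications at prediction time, this kind of pipeline can classify incoming epochs quickly, which is the motivation the abstract gives for high-speed fixed-wing control.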
Procedia PDF Downloads 283
3891 Perspective Shifting in the Elicited Language Production Can Defy with Aging
Authors: Tuyuan Cheng
Abstract:
As we age, many things become more difficult. Among these abilities are the linguistic and cognitive ones. Competing theories have shown that these two functions could diminish together or that one is selectively affected by the other. In other words, some propose that aging affects sentence production in the same way it affects sentence comprehension and other cognitive functions, while others argue it does not. To address this question, the current investigation is conducted into a critical aspect of sentences as well as cognitive abilities: the syntactic complexity and the number of perspective shifts contained in the elicited production. Healthy non-pathological aging is often characterized by a cognitive and neural decline in a number of cognitive abilities. Although language is assumed to be a more stable domain, a variety of findings in the cognitive aging literature would suggest otherwise. Older adults often show deficits in language production and multiple aspects of comprehension. Nevertheless, while some age differences likely reflect cognitive decline, others might reflect changes in communicative goals, and some even display cognitive advantages. In the domain of language processing, research efforts have been made in tests that probed a variety of communicative abilities. In general, there exists a distinction: comprehension seems to be selectively unaffected, while production is not. The current study raises a novel question and investigates whether aging affects the production of relative clauses (RCs) under the cognitive factor of perspective shifts. Based on the Perspective Hypothesis (MacWhinney, 2000, 2005), our cognitive processes build upon a fundamental system of perspective-taking, and language provides a series of cues to facilitate the construction and shifting of perspectives. These cues include a wide variety of constructions, including RC structures. In this regard, linguistic complexity can be determined by the number of perspective shifts, and the processing difficulties of RCs can be interpreted within the theory of perspective shifting. Two experiments were conducted to study language production under controlled conditions. In Experiment 1, older healthy participants were tested on standard measures of cognitive aging, including the MMSE (Mini-Mental State Examination), the ToMI-2 (a simplified Theory of Mind Inventory-2), and a perspective-shifting comprehension task programmed with E-Prime. The results were analyzed to examine if/how they are correlated with the aging participants' subsequent production data. In Experiment 2, the production profiles of differing RCs, SRC vs. ORC, were collected from healthy aging participants who performed a picture elicitation task. Variables containing 0, 1, or 2 perspective shifts were juxtaposed respectively to the pictures and presented in a counterbalanced manner for elicitation. In parallel, a control group of young adults was recruited to examine the linguistic and cognitive abilities in question. The results lead us to the discussion of whether aging affects RC production in a manner determined by its semantic structure, by the number of perspective shifts it contains, or by the status of participants' mental understanding. The major findings are: (1) Elders' production of Chinese RC types did not display an intrinsic difficulty asymmetry. (2) RC types (the linguistic structural features) and the cognitive perspective shifts jointly play important roles in the elders' RC production. (3) The production of RCs may defy aging in the case of flexibly preserved cognitive ability. Keywords: cognition aging, perspective hypothesis, perspective shift, relative clauses, sentence complexity
Procedia PDF Downloads 118
3890 "Project" Approach in Urban: A Response to Uncertainty
Authors: Mouhoubi Nedjima, Sassi Boudemagh Souad
Abstract:
In this paper, we will try to demonstrate the importance of the project approach in urban planning to deal with uncertainty, the importance of the involvement of all stakeholders in the urban project process (the absence of an actor can lead to project failure), and also the importance of urban project management. These points are addressed through the following questions: Does the urban domain adhere to the theory of complexity? Does the project approach bring hope and a solution to make urban planning "sustainable"? How can the visions of the actors converge for the same project? Is the management of the urban project the solution to support the urban project approach? Keywords: strategic planning, project, urban project stakeholders, management
Procedia PDF Downloads 512
3889 Performance Based Design of Masonry Infilled Reinforced Concrete Frames for Near-Field Earthquakes Using Energy Methods
Authors: Alok Madan, Arshad K. Hashmi
Abstract:
Performance based design (PBD) is an iterative exercise in which a preliminary trial design of the building structure is selected and, if the selected trial design of the building structure does not conform to the desired performance objective, the trial design is revised. In this context, the development of a fundamental approach for performance based seismic design of masonry infilled frames with a minimum number of trials is an important objective. The paper presents a plastic design procedure based on the energy balance concept for PBD of multi-story multi-bay masonry infilled reinforced concrete (R/C) frames subjected to near-field earthquakes. The proposed energy based plastic design procedure was implemented for trial performance based seismic design of representative masonry infilled reinforced concrete frames with various practically relevant distributions of masonry infill panels over the frame elevation. Non-linear dynamic analyses of the trial PBD of masonry infilled R/C frames were performed under the action of near-field earthquake ground motions. The results of the non-linear dynamic analyses demonstrate that the proposed energy method is effective for performance based design of masonry infilled R/C frames under near-field as well as far-field earthquakes. Keywords: masonry infilled frame, energy methods, near-fault ground motions, pushover analysis, nonlinear dynamic analysis, seismic demand
Procedia PDF Downloads 292
3888 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types
Authors: Qianxi Lv, Junying Liang
Abstract:
Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' of cognitive tasks, while in consecutive interpreting (CI) interpreters do not have to share processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demand and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study seeks to examine the potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-made inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech and original speech texts, with a total running word count of 321,960. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif is a non-grammatically-bound sequential unit and is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may impose a heavier, or differently taxing, demand on cognitive resources and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text in different ways into the output, to minimize the cognitive load respectively. We interpret the results within the framework that cognitive demand is exerted on both the maintaining and the coordinating components of Working Memory. On the one hand, the information maintained in CI is inherently larger in volume compared to SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI makes the interpreters keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types. Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity
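A small sketch of a few of the lexical indices named above (type-token ratio, lexical density, hapax legomena, list head coverage), assuming plain Python; the whitespace tokenisation and the tiny function-word list are simplifications of what a corpus study would actually use:

```python
from collections import Counter

FUNCTION_WORDS = {"the", "a", "an", "of", "to", "and", "in", "that", "is", "it"}  # illustrative subset

def lexical_profile(text, head_size=100):
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    freq = Counter(tokens)
    n = len(tokens)
    head = sum(c for _, c in freq.most_common(head_size))  # coverage of the most frequent types
    return {
        "type_token_ratio": len(freq) / n,
        "lexical_density": sum(1 for t in tokens if t not in FUNCTION_WORDS) / n,
        "hapax_legomena": sum(1 for c in freq.values() if c == 1) / len(freq),
        "list_head_coverage": head / n,
    }

print(lexical_profile("the interpreter kept the structure of the source sentence"))
```

Comparing these profiles across the SI, CI, translated-speech and original-speech subcorpora is the kind of quantitative contrast the abstract describes.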
Procedia PDF Downloads 178
3887 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark
Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos
Abstract:
This paper presents an investigation of the performance impacts of the variation of five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing beforehand the effects of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of data input scale on costs, although it notably impacts the execution time. Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark
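A hedged sketch of a two-level fractional factorial design for five factors and a linear predictor fitted to the measured response, assuming NumPy; the generator E = ABCD, the synthetic runtimes and the use of ordinary least squares are illustrative assumptions, not the authors' exact design:

```python
import itertools
import numpy as np

factors = ["data_size", "nodes", "cores", "memory", "disks"]
base = np.array(list(itertools.product([-1, 1], repeat=4)))   # full 2^4 design in the first four factors
design = np.column_stack([base, base.prod(axis=1)])           # fifth column generated as E = ABCD

# 'runtime' stands in for the measured execution time of the 16 cluster configurations
rng = np.random.default_rng(1)
runtime = 100 + 30 * design[:, 0] + 10 * design[:, 1] + rng.normal(0, 2, 16)  # placeholder data

X = np.column_stack([np.ones(len(design)), design])           # intercept + main effects
coef, *_ = np.linalg.lstsq(X, runtime, rcond=None)
for name, c in zip(["intercept"] + factors, coef):
    print(f"{name:>10}: {c:+.2f}")
```

Ranking the fitted coefficients (here dominated by the data-size and node-count columns of the fake data) mirrors the screening and ranking step the abstract describes before building the final time and cost predictors.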
Procedia PDF Downloads 120
3886 Use of Front-Face Fluorescence Spectroscopy and Multiway Analysis for the Prediction of Olive Oil Quality Features
Authors: Omar Dib, Rita Yaacoub, Luc Eveleigh, Nathalie Locquet, Hussein Dib, Ali Bassal, Christophe B. Y. Cordella
Abstract:
The potential of front-face fluorescence coupled with chemometric techniques, namely parallel factor analysis (PARAFAC) and multiple linear regression (MLR) as a rapid analysis tool to characterize Lebanese virgin olive oils was investigated. Fluorescence fingerprints were acquired directly on 102 Lebanese virgin olive oil samples in the range of 280-540 nm in excitation and 280-700 nm in emission. A PARAFAC model with seven components was considered optimal with a residual of 99.64% and core consistency value of 78.65. The model revealed seven main fluorescence profiles in olive oil and was mainly associated with tocopherols, polyphenols, chlorophyllic compounds and oxidation/hydrolysis products. 23 MLR regression models based on PARAFAC scores were generated, the majority of which showed a good correlation coefficient (R > 0.7 for 12 predicted variables), thus satisfactory prediction performances. Acid values, peroxide values, and Delta K had the models with the highest predictions, with R values of 0.89, 0.84 and 0.81 respectively. Among fatty acids, linoleic and oleic acids were also highly predicted with R values of 0.8 and 0.76, respectively. Factors contributing to the model's construction were related to common fluorophores found in olive oil, mainly chlorophyll, polyphenols, and oxidation products. This study demonstrates the interest of front-face fluorescence as a promising tool for quality control of Lebanese virgin olive oils.Keywords: front-face fluorescence, Lebanese virgin olive oils, multiple Linear regressions, PARAFAC analysis
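A rough sketch of the PARAFAC-scores-to-MLR workflow described above, assuming the TensorLy and scikit-learn libraries; the tensor shape (samples × excitation × emission), the rank of seven and the synthetic data are placeholders, not the study's measurements:

```python
import numpy as np
from tensorly.decomposition import parafac
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
eem = rng.random((102, 27, 43))            # stand-in for 102 excitation-emission fluorescence landscapes

# Seven-component PARAFAC model; factors[0] holds the sample-mode scores (102 x 7)
weights, factors = parafac(eem, rank=7, n_iter_max=200)
scores = factors[0]

acid_value = rng.random(102)               # stand-in for one measured quality feature
mlr = LinearRegression().fit(scores, acid_value)
print("R^2 on the sample scores:", mlr.score(scores, acid_value))
```

Fitting one such regression per quality feature (acid value, peroxide value, Delta K, fatty acids, and so on) reproduces the "23 MLR models on PARAFAC scores" idea in the abstract.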
Procedia PDF Downloads 452
3885 Impact of the Electricity Market Prices during the COVID-19 Pandemic on Energy Storage Operation
Authors: Marin Mandić, Elis Sutlović, Tonći Modrić, Luka Stanić
Abstract:
With the restructuring and deregulation of the power system, storage owners, generation companies or private producers can offer their multiple services on various power markets and earn income in different types of markets, such as the day-ahead, real-time, ancillary services market, etc. During the COVID-19 pandemic, electricity prices, as well as ancillary services prices, increased significantly. The optimization of the energy storage operation was performed using a suitable model for simulating the operation of a pumped storage hydropower plant under market conditions. The objective function maximizes the income earned through energy arbitrage, regulation-up, regulation-down and spinning reserve services. The optimization technique used for solving the objective function is mixed integer linear programming (MILP). In numerical examples, the pumped storage hydropower plant operation has been optimized considering the already achieved hourly electricity market prices from Nord Pool for the pre-pandemic (2019) and the pandemic (2020 and 2021) years. The impact of the electricity market prices during the COVID-19 pandemic on energy storage operation is shown through the analysis of income, operating hours, reserved capacity and consumed energy for each service. The results indicate the role of energy storage during a significant fluctuation in electricity and services prices. Keywords: electrical market prices, electricity market, energy storage optimization, mixed integer linear programming (MILP) optimization
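A compact sketch of the kind of MILP formulation described above, assuming the PuLP modelling library; the toy prices, plant limits and the arbitrage-only scope (no regulation or spinning reserve products) are simplified placeholders rather than the authors' full model:

```python
import pulp

T = 24
price = [30 + 20 * (8 <= t <= 20) for t in range(T)]        # toy hourly day-ahead prices
p_max, e_max, eff = 100.0, 400.0, 0.75                       # MW, MWh, round-trip efficiency

m = pulp.LpProblem("pumped_storage_arbitrage", pulp.LpMaximize)
gen = pulp.LpVariable.dicts("gen", range(T), 0, p_max)       # generating (selling energy)
pump = pulp.LpVariable.dicts("pump", range(T), 0, p_max)     # pumping (buying energy)
mode = pulp.LpVariable.dicts("mode", range(T), cat="Binary") # forbids simultaneous pump/generate
soc = pulp.LpVariable.dicts("soc", range(T + 1), 0, e_max)   # stored energy in the upper reservoir

m += pulp.lpSum(price[t] * (gen[t] - pump[t]) for t in range(T))   # income from arbitrage
m += soc[0] == e_max / 2
m += soc[T] >= e_max / 2                                     # do not simply drain the reservoir
for t in range(T):
    m += gen[t] <= p_max * mode[t]
    m += pump[t] <= p_max * (1 - mode[t])
    m += soc[t + 1] == soc[t] + eff * pump[t] - gen[t]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("income:", pulp.value(m.objective))
```

Replacing the single price vector with the 2019-2021 Nord Pool series and adding variables for the reserve products would extend this toy model toward the multi-service formulation the abstract analyses.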
Procedia PDF Downloads 173
3884 Interaction Between Task Complexity and Collaborative Learning on Virtual Patient Design: The Effects on Students' Performance, Cognitive Load, and Task Time
Authors: Fatemeh Jannesarvatan, Ghazaal Parastooei, Jimmy frerejan, Saedeh Mokhtari, Peter Van Rosmalen
Abstract:
Medical and dental education increasingly emphasizes the acquisition, integration, and coordination of complex knowledge, skills, and attitudes that can be applied in practical situations. Instructional design approaches have focused on using real-life tasks in order to facilitate complex learning in both real and simulated environments. The Four component instructional design (4C/ID) model has become a useful guideline for designing instructional materials that improve learning transfer, especially in health profession education. The objective of this study was to apply the 4C/ID model in the creation of virtual patients (VPs) that dental students can use to practice their clinical management and clinical reasoning skills. The study first explored the context and concept of complication factors and common errors for novices and how they can affect the design of a virtual patient program. The study then selected key dental information and considered the content needs of dental students. The design of virtual patients was based on the 4C/ID model's fundamental principles, which included: Designing learning tasks that reflect real patient scenarios and applying different levels of task complexity to challenge students to apply their knowledge and skills in different contexts. Creating varied learning materials that support students during the VP program and are closely integrated with the learning tasks and students' curricula. Cognitive feedback was provided at different levels of the program. Providing procedural information where students followed a step-by-step process from history taking to writing a comprehensive treatment plan. Four virtual patients were designed using the 4C/ID model's principles, and an experimental design was used to test the effectiveness of the principles in achieving the intended educational outcomes. The 4C/ID model provides an effective framework for designing engaging and successful virtual patients that support the transfer of knowledge and skills for dental students. However, there are some challenges and pitfalls that instructional designers should take into account when developing these educational tools.Keywords: 4C/ID model, virtual patients, education, dental, instructional design
Procedia PDF Downloads 80
3883 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms and thus pose a potential threat. This has been accelerated owing to eutrophication by human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict the algae concentration of the ocean with bio-optical algorithms applied to satellite color images. However, the accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the seas around Korea. The method employed GOCI images, which provide the water-leaving radiances centered at 443 nm, 490 nm and 660 nm, respectively, as well as observed weather data (i.e., humidity, temperature and atmospheric pressure) for the database, to apply the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and then an artificial neural network (ANN) was used to estimate the concentration of algae from the extracted features. For training the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing algorithms and optical algorithms. The model had better performance in estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017. Keywords: deep learning, algae concentration, remote sensing, satellite
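An illustrative sketch of the CNN-feature-extraction plus ANN-regression idea sketched above, assuming TensorFlow/Keras; the patch size, layer widths and the way the weather variables are concatenated with the image features are assumptions, not the authors' architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

patches = layers.Input(shape=(32, 32, 3))        # GOCI radiance patches at 443, 490, 660 nm
weather = layers.Input(shape=(3,))               # humidity, temperature, atmospheric pressure

x = layers.Conv2D(16, 3, activation="relu")(patches)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)           # CNN-extracted optical features

h = layers.Concatenate()([x, weather])           # fuse image features with weather data
h = layers.Dense(64, activation="relu")(h)       # the ANN regression head
algae = layers.Dense(1, activation="relu")(h)    # estimated algae concentration (mg/m^3)

model = Model([patches, weather], algae)
model.compile(optimizer="adam", loss="mse")      # trained by backpropagation on matched in-situ labels
model.summary()
```

The split between a convolutional feature extractor and a dense regression head mirrors the CNN-then-ANN structure the abstract describes.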
Procedia PDF Downloads 183
3882 Risk-Based Computer Auditing and Measures of Prevention
Authors: Mohammad Hadi Khorashadi Zadeh, Amin Karkon, Seyd Mohammad Reza Mashhoori
Abstract:
The technology of computer auditing has played a major role in the progress and prospects of its proper application to improve the quality and efficiency of audit work. However, due to the technical complexity and the specific risks of computer auditing, its effectiveness must be demonstrated through audit and preventive action. In this paper, we propose the causes of audit risk in a computer environment and further measures of control which can, to some extent, reduce the risk of computer audit and improve audit quality. Keywords: computer auditing, risk, measures to prevent, information technology
Procedia PDF Downloads 489
3881 Analytical Development of a Failure Limit and Iso-Uplift Curves for Eccentrically Loaded Shallow Foundations
Authors: N. Abbas, S. Lagomarsino, S. Cattari
Abstract:
Examining existing experimental results for shallow rigid foundations subjected to a vertical centric load (N), accompanied or not by a bending moment (M), two main non-linear mechanisms governing the cyclic response of the soil-foundation system can be distinguished: foundation uplift and soil yielding. A soil-foundation failure limit is defined as a domain of resistance in the two-dimensional (2D) load space (N, M) inside of which lie all the admissible combinations of loads; the latter correspond to a purely elastic, non-linear elastic or plastic behavior of the soil-foundation system, while the points lying on the failure limit correspond to combinations of loads leading to a failure of the soil-foundation system. In this study, the proposed resistance domain is constructed analytically based on mechanics. An original elastic limit, an uplift initiation limit and iso-uplift limits are constructed inside this domain. These limits give a prediction of the mechanisms activated for each combination of loads applied to the foundation. A comparison of the proposed failure limit with experimental tests existing in the literature shows interesting results. Also, the developed uplift initiation limit and iso-uplift curves are compared with others already proposed in the literature and widely used due to the absence of other alternatives, and remarkable differences are noted, showing evident errors in the past proposals and relevant accuracy for those given in the present work. Keywords: foundation uplift, iso-uplift curves, resistance domain, soil yield
Procedia PDF Downloads 383
3880 Nonlinear Passive Shunt for Electroacoustic Absorbers Using Nonlinear Energy Sink
Authors: Diala Bitar, Emmanuel Gourdon, Claude H. Lamarque, Manuel Collet
Abstract:
Acoustic absorber devices play an important role in reducing noise along the propagation and reception paths. An electroacoustic absorber consists of a loudspeaker coupled to an electric shunt circuit, where the membrane plays the role of an absorber/reflector of sound. Although the use of linear shunt resistors at the transducer terminals has been shown to improve the performance of dynamical absorbers, it is only efficient in a narrow frequency band. Therefore, and since nonlinear phenomena are promising for their ability to absorb vibrations and sound over a larger frequency range, we propose to couple a nonlinear electric shunt circuit at the loudspeaker terminals. The equivalent model can then be described by a two-degree-of-freedom system, consisting of a primary linear oscillator describing the dynamics of the loudspeaker membrane, linearly coupled to a cubic nonlinear energy sink (NES). The system is analytically treated for the case of 1:1 resonance, using an invariant manifold approach at different time scales. The proposed methodology enables us to detect the equilibrium points and fold singularities at the first slow time scale, providing a predictive tool to design the nonlinear shunt circuit during the energy exchange process. The preliminary results are promising; a significant improvement of acoustic absorption performance is obtained. Keywords: electroacoustic absorber, multiple-time-scale with small finite parameter, nonlinear energy sink, nonlinear passive shunt
Procedia PDF Downloads 220
3879 Characteristics and Flight Test Analysis of a Fixed-Wing UAV with Hover Capability
Authors: Ferit Çakıcı, M. Kemal Leblebicioğlu
Abstract:
In this study, the characteristics and flight test analysis of a fixed-wing unmanned aerial vehicle (UAV) with hover capability are presented. The base platform is chosen as a conventional airplane with throttle, ailerons, elevator and rudder control surfaces, which inherently allows level flight. This aircraft is then mechanically modified by the integration of vertical propellers, as in multirotors, in order to provide hover capability. The aircraft is modeled using basic aerodynamic principles, and linear models are constructed utilizing small perturbation theory for trim conditions. Flight characteristics are analyzed by benefiting from linear control theory's state space approach. Distinctive features of the aircraft are discussed based on the analysis results, with comparison to conventional aircraft platform types. A hybrid control system is proposed in order to reveal unique flight characteristics. The main approach includes the design of different controllers for different modes of operation and a hand-over logic that makes flight in an enlarged flight envelope viable. Simulation tests are performed on mathematical models to verify the asserted algorithms. Flight tests conducted in the real world revealed the applicability of the proposed methods in exploiting the fixed-wing and rotary-wing characteristics of the aircraft, which provide agility, survivability and functionality. Keywords: flight test, flight characteristics, hybrid aircraft, unmanned aerial vehicle
Procedia PDF Downloads 329
3878 Revolutionary Solutions for Modeling and Visualization of Complex Software Systems
Abstract:
Existing software modeling and visualization approaches using UML are outdated; they are outcomes of reductionism and of the superposition principle that the whole of a system is the sum of its parts, so that with them all tasks of software modeling and visualization are performed linearly, partially, and locally. This paper introduces revolutionary solutions for the modeling and visualization of complex software systems, which make complex software systems much easier to understand, test, and maintain. The solutions are based on complexity science, offering holistic, automatic, dynamic, virtual, and executable approaches about a thousand times more efficient than the traditional ones. Keywords: complex systems, software maintenance, software modeling, software visualization
Procedia PDF Downloads 401
3877 Parameters Affecting the Elasto-Plastic Behavior of Outrigger Braced Walls to Earthquakes
Authors: T. A. Sakr, Hanaa E. Abd-El-Mottaleb
Abstract:
Outrigger-braced wall systems are commonly used to provide high-rise buildings with the required lateral stiffness for wind and earthquake resistance. The existence of outriggers adds to the stiffness and strength of walls, as reported by several studies. The effects of different parameters on the elasto-plastic dynamic behavior of outrigger-braced wall systems under earthquakes are investigated in this study. The parameters investigated include outrigger stiffness, concrete strength, and reinforcement arrangement as the main parameters in wall design. In addition to being significant to the wall behavior, such parameters may lead to a change of the failure mode and the delay of crack propagation, and consequently failure, as the wall is excited by earthquakes. A bi-linear stress-strain relation for concrete with limited tensile strength and truss members with a bi-linear stress-strain relation for the reinforcement were used in the finite element analysis of the problem. The famous earthquake record, El Centro 1940, is used in the study. Emphasis was given to the lateral drift, normal stresses and crack pattern as behavior-controlling determinants. Results indicated a significant effect of the studied parameters, such that a stiffer outrigger, higher-grade concrete and concentrating the reinforcement at the wall edges enhance the behavior of the system. Concrete stresses and cracking behavior are significantly enhanced, while smaller drift improvements are observed. Keywords: outrigger, shear wall, earthquake, nonlinear
Procedia PDF Downloads 283
3876 Preparation of Pegylated Interferon Alpha-2b with High Antiviral Activity Using Linear 20 KDa Polyethylene Glycol Derivative
Authors: Ehab El-Dabaa, Omnia Ali, Mohamed Abd El-Hady, Ahmed Osman
Abstract:
Recombinant human interferon alpha 2 (rhIFN-α2) is FDA-approved for the treatment of some viral and malignant diseases. Approved pegylated rhIFN-α2 drugs have highly improved pharmacokinetics, pharmacodynamics and therapeutic efficiency compared to the native protein. In this work, we studied the pegylation of purified, properly refolded rhIFN-α2b using a linear 20 kDa PEG-NHS (polyethylene glycol N-hydroxysuccinimidyl ester) to prepare pegylated rhIFN-α2b with high stability and activity. The effect of different parameters, like the rhIFN-α2b final concentration, pH, rhIFN-α2b/PEG molar ratio and reaction time, on the efficiency of pegylation (a high percentage of monopegylated rhIFN-α2b) has been studied in small-scale (100 µl) pegylation reaction trials. Study of the percentages of the different components of these reactions (mono-, di-, polypegylated rhIFN-α2b and unpegylated rhIFN-α2b) indicated that 2 h is the optimum time to complete the reaction. The pegylation efficiency increased at pH 8 (57.9%) by reducing the protein concentration to 1 mg/ml and reducing the rhIFN-α2b/PEG ratio to 1:2. Using a larger-scale pegylation reaction (65% pegylation efficiency), an ion exchange chromatography method was optimized to prepare and purify the monopegylated rhIFN-α2b with high purity (96%). The prepared monopegylated rhIFN-α2b had an apparent molecular weight of approximately 65 kDa and high in vitro antiviral activity (2.1×10⁷ ± 0.8×10⁷ IU/mg). Although it retained approximately 8.4% of the antiviral activity of the unpegylated rhIFN-α2b, its activity is high compared to other pegylated rhIFN-α2 preparations developed using a similar approach or higher-molecular-weight branched PEGs. Keywords: antiviral activity, rhIFN-α2b, pegylation, pegylation efficiency
Procedia PDF Downloads 177
3875 Factors Affecting Slot Machine Performance in an Electronic Gaming Machine Facility
Authors: Etienne Provencal, David L. St-Pierre
Abstract:
A facility exploiting only electronic gambling machines (EGMs) opened in 2007 in Quebec City, Canada, under the name of Salons de Jeux du Québec (SdjQ). This facility is one of the first worldwide to rely on that business model. This paper models the performance of such EGMs. The interest from a managerial point of view is to identify the variables that can be controlled or influenced so that a comprehensive model can help improve the overall performance of the business. The EGM individual performance model contains eight different variables under study (Game Title, Progressive jackpot, Bonus Round, Minimum Coin-in, Maximum Coin-in, Denomination, Slant Top and Position). Using data from Quebec City's SdjQ, a linear regression analysis explains 90.80% of the EGM performance. Moreover, results show a behavior slightly different from that of a casino. The addition of GameTitle as a factor to predict the EGM performance is one of the main contributions of this paper. The choice of the game (GameTitle) is very important. Games in a better position do not have significantly better performance than games located elsewhere on the gaming floor. Progressive jackpots have a positive and significant effect on the individual performance of EGMs. The impact of BonusRound on the dependent variable is significant but negative. The effect of Denomination is significant but weakly negative. As expected, the Language of an EGM does not impact its individual performance. This paper highlights some possible improvements by indicating which features are performing well. Recommendations are given to increase the performance of the EGMs. Keywords: EGM, linear regression, model prediction, slot operations
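A hedged sketch of the kind of linear model described above, assuming pandas and statsmodels; the column names and the handful of toy records are invented for illustration and are not the SdjQ data:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "revenue":      [410, 525, 380, 610, 295, 505, 450, 330],   # per-EGM performance measure
    "game_title":   ["A", "B", "A", "C", "B", "C", "A", "B"],
    "progressive":  [0, 1, 0, 1, 0, 1, 1, 0],
    "bonus_round":  [1, 1, 0, 1, 0, 0, 1, 0],
    "denomination": [0.05, 0.25, 0.25, 1.00, 0.05, 0.25, 0.05, 1.00],
})

# GameTitle enters as a categorical factor; the remaining variables are numeric regressors
model = smf.ols("revenue ~ C(game_title) + progressive + bonus_round + denomination", data=df).fit()
print(model.summary())
```

The sign and significance of each fitted coefficient is what supports statements such as "progressive jackpots have a positive and significant effect" in the abstract.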
Procedia PDF Downloads 255
3874 Comprehensive Feature Extraction for Optimized Condition Assessment of Fuel Pumps
Authors: Ugochukwu Ejike Akpudo, Jank-Wook Hur
Abstract:
The increasing demand for improved productivity, maintainability, and reliability has prompted rapidly increasing research studies on the emerging condition-based maintenance concept: prognostics and health management (PHM). Varieties of fuel pumps serve critical functions in several hydraulic systems; hence, their failure can have daunting effects on productivity, safety, etc. The need for condition monitoring and assessment of these pumps cannot be overemphasized, and this has led to an upsurge in research studies on standard feature extraction techniques for optimized condition assessment of fuel pumps. By extracting time-based, frequency-based and the more robust time-frequency-based features from these vibrational signals, a more comprehensive feature assessment (and selection) can be achieved for a more accurate and reliable condition assessment of these pumps. With the aid of emerging deep classification and regression algorithms like locally linear embedding (LLE), we propose a method for comprehensive condition assessment of electromagnetic fuel pumps (EMFPs). Results show that LLE, as a comprehensive feature extraction technique, yields better feature fusion/dimensionality reduction results for condition assessment of EMFPs than the use of single features. Also, unlike other feature fusion techniques, its capabilities as a fault classification technique were explored, and the results show an acceptable accuracy level using standard performance metrics for evaluation. Keywords: electromagnetic fuel pumps, comprehensive feature extraction, condition assessment, locally linear embedding, feature fusion
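A small sketch of locally linear embedding used to fuse a high-dimensional vibration feature set before a simple classifier, assuming scikit-learn; the synthetic feature matrix stands in for the extracted time, frequency and time-frequency features of healthy and faulty pump records:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, (60, 40))      # 40 extracted features per vibration record
faulty = rng.normal(1.5, 1.0, (60, 40))
X = np.vstack([healthy, faulty])
y = np.r_[np.zeros(60), np.ones(60)]

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3)
Z = lle.fit_transform(X)                       # fused, low-dimensional representation
clf = KNeighborsClassifier().fit(Z, y)         # fault classification on the embedded features
print("training accuracy:", clf.score(Z, y))
```

Comparing this embedded-feature classifier against one trained on any single feature family is the kind of contrast the abstract draws between fused and single features.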
Procedia PDF Downloads 117
3873 Some Inequalities Related with Starlike Log-Harmonic Mappings
Authors: Melike Aydoğan, Dürdane Öztürk
Abstract:
Let H(D) be the linear space of all analytic functions defined on the open unit disc. A log-harmonic mapping is a solution of a nonlinear elliptic partial differential equation in which w(z) ∈ H(D) is the second dilatation, such that |w(z)| < 1 for all z ∈ D. The aim of this paper is to establish some inequalities for starlike log-harmonic functions of order α (0 ≤ α ≤ 1). Keywords: starlike log-harmonic functions, univalent functions, distortion theorem
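The defining equation itself appears to have been dropped from this listing; the standard relation satisfied by a log-harmonic mapping f with second dilatation w, given here as an assumed reconstruction of what was intended, is:

```latex
% Standard defining relation for a log-harmonic mapping f on the unit disc D,
% with second dilatation w(z), |w(z)| < 1 (assumed reconstruction of the elided PDE)
\[
  \frac{\overline{f_{\bar z}(z)}}{\overline{f(z)}}
  \;=\; w(z)\,\frac{f_z(z)}{f(z)}, \qquad z \in \mathbb{D}.
\]
```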
Procedia PDF Downloads 525
3872 Stability-Indicating High-Performance Thin-Layer Chromatography Method for Estimation of Naftopidil
Authors: P. S. Jain, K. D. Bobade, S. J. Surana
Abstract:
A simple, selective, precise and stability-indicating high-performance thin-layer chromatographic method for the analysis of Naftopidil, both in bulk and in pharmaceutical formulation, has been developed and validated. The method employed HPTLC aluminium plates precoated with silica gel as the stationary phase. The solvent system consisted of hexane: ethyl acetate: glacial acetic acid (4:4:2 v/v). The system was found to give a compact spot for Naftopidil (Rf value of 0.43±0.02). Densitometric analysis of Naftopidil was carried out in the absorbance mode at 253 nm. The linear regression analysis data for the calibration plots showed a good linear relationship with r² = 0.999±0.0001 with respect to peak area in the concentration range of 200-1200 ng per spot. The method was validated for precision, recovery and robustness. The limits of detection and quantification were 20.35 and 61.68 ng per spot, respectively. Naftopidil was subjected to acid and alkali hydrolysis, oxidation and thermal degradation, and was found to degrade under acidic, basic, oxidative and thermal conditions, indicating that the drug is susceptible to these stress conditions. The degraded product was well resolved from the pure drug, with a significantly different Rf value. Statistical analysis proves that the method is repeatable, selective and accurate for the estimation of the investigated drug. The proposed HPTLC method can be applied for the identification and quantitative determination of Naftopidil in bulk drug and pharmaceutical formulations. Keywords: naftopidil, HPTLC, validation, stability, degradation
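A brief sketch of the calibration statistics reported above (linear fit, plus LOD and LOQ from the standard ICH 3.3σ/S and 10σ/S formulas), assuming NumPy; the peak-area values are invented placeholders, not the authors' data:

```python
import numpy as np

conc = np.array([200, 400, 600, 800, 1000, 1200])          # ng per spot
area = np.array([1510, 3050, 4480, 6020, 7490, 9010])      # densitometric peak areas (illustrative)

slope, intercept = np.polyfit(conc, area, 1)                # linear calibration: area = slope*conc + intercept
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
sigma = np.std(area - pred, ddof=2)                         # residual standard deviation of the regression

lod = 3.3 * sigma / slope                                    # limit of detection
loq = 10 * sigma / slope                                     # limit of quantification
print(f"r^2 = {r2:.4f}, LOD = {lod:.1f} ng/spot, LOQ = {loq:.1f} ng/spot")
```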
Procedia PDF Downloads 400
3871 Vehicle Routing Problem Considering Alternative Roads under Triple Bottom Line Accounting
Authors: Onur Kaya, Ilknur Tukenmez
Abstract:
In this study, we consider vehicle routing problems on networks with alternative direct links between nodes, and we analyze a multi-objective problem considering the financial, environmental and social objectives in this context. In real life, there might exist several alternative direct roads between two nodes, and these roads might have differences in terms of their lengths and durations. For example, a road might be shorter than another but might require longer time due to traffic and speed limits. Similarly, some toll roads might be shorter or faster but require additional payment, leading to higher costs. We consider such alternative links in our problem and develop a mixed integer linear programming model that determines which alternative link to use between two nodes, in addition to determining the optimal routes for different vehicles, depending on the model objectives and constraints. We consider the minimum cost routing as the financial objective for the company, minimizing the CO2 emissions and gas usage as the environmental objectives, and optimizing the driver working conditions/working hours, and minimizing the risks of accidents as the social objectives. With these objective functions, we aim to determine which routes, and which alternative links should be used in addition to the speed choices on each link. We discuss the results of the developed vehicle routing models and compare their results depending on the system parameters.Keywords: vehicle routing, alternative links between nodes, mixed integer linear programming, triple bottom line accounting
Procedia PDF Downloads 407
3870 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory
Authors: Damir Latypov
Abstract:
A hybrid classical-quantum algorithm to solve boundary integral equations (BIEs) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only in the case of sparse systems. Due to the non-local nature of integral operators, matrices arising from the discretization of BIEs are, however, dense. A QLSA for dense matrices was introduced in 2017. Its runtime as a function of the system size N is bounded by O(√N polylog(N)). The runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N².³⁷³). Instead of exponential, as in the case of sparse matrices, here we have only a polynomial speed-up. Nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage than that. We propose a computational scheme which combines elements of the classical fast algorithms with the QLSA to achieve the required performance. Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory
Procedia PDF Downloads 154