Search results for: cloud Computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1360

370 Design of a LabVIEW-Based DAQ System

Authors: Omar A. A. Shaebi, Matouk M. Elamari, Salaheddin Allid

Abstract:

The Information Computing System of Monitoring (ICSM) for the research reactor of the Tajoura Nuclear Research Centre (TNRC) has been out of service since early 1991. According to the regulations, the computer is required to operate the reactor up to its maximum power (10 MW). Funding was secured through the IAEA to develop a modern computer-based data acquisition system to replace the old computer. This paper presents the development of a LabVIEW-based data acquisition (DAQ) system that allows automated measurements using National Instruments hardware and its LabVIEW software. The developed system consists of an SCXI 1001 chassis housing four SCXI 1100 modules, each of which can handle 32 variables; the chassis is interfaced with the PC using an NI PCI-6023 DAQ card. LabVIEW, developed by National Instruments, is used to run and operate the DAQ system. LabVIEW is a graphical programming environment suited for high-level design, and it allows different signal processing components or subsystems to be integrated within a graphical framework. The results demonstrated the system's capabilities in monitoring variables and in acquiring and saving data, as well as LabVIEW's ability to control the DAQ hardware.
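
No code accompanies these abstracts, but the scan-and-log loop such a system performs is easy to sketch. Below is a minimal, hypothetical Python illustration of scanning the four 32-channel modules and logging to disk; read_channel() is a stand-in for the actual NI driver call, and the scan period and file name are invented:

```python
import csv
import random
import time

N_MODULES, CHANNELS_PER_MODULE = 4, 32   # four SCXI 1100 modules, 32 channels each

def read_channel(module: int, channel: int) -> float:
    """Stand-in for the NI driver call through the PCI-6023 DAQ card."""
    return random.uniform(0.0, 10.0)     # simulated voltage

def scan_once() -> list:
    """Scan all 128 channels once, in module/channel order."""
    return [read_channel(m, c)
            for m in range(N_MODULES)
            for c in range(CHANNELS_PER_MODULE)]

with open("daq_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    header = [f"m{m}_ch{c}" for m in range(N_MODULES)
                            for c in range(CHANNELS_PER_MODULE)]
    writer.writerow(["timestamp"] + header)
    for _ in range(10):                   # ten scans for the example
        writer.writerow([time.time()] + scan_once())
        time.sleep(0.1)                   # assumed scan period
```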

Keywords: data acquisition, LabVIEW, signal conditioning, National Instruments

Procedia PDF Downloads 491
369 Optimisation of the Intermodal Transport Chain of Supermarkets on the Isle of Wight, UK

Authors: Jingya Liu, Yue Wu, Jiabin Luo

Abstract:

This work investigates an intermodal transportation system for delivering goods from a regional distribution centre to supermarkets on the Isle of Wight (IOW) via the port of Southampton or Portsmouth in the UK. We consider this integrated logistics chain as a 3-echelon transportation system. In such a system, two types of transport are used to deliver goods across the Solent Channel: accompanied transport, used by most supermarkets on the IOW, such as Spar, Lidl and Co-operative Food, and unaccompanied transport, used by Aldi. Five transport scenarios are studied based on different transport modes and ferry routes. The aim is to determine an optimal delivery plan for supermarkets of different business scales on the IOW, in order to minimise the total running cost, fuel consumption and carbon emissions. The problem is modelled as a vehicle routing problem with time windows and solved by a genetic algorithm. The computational results suggest that accompanied transport is more cost-efficient for small and medium-scale supermarket chains on the IOW, while unaccompanied transport has the potential to improve the efficiency and effectiveness of large-scale supermarket chains.
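
To make the solution method concrete, here is a minimal, self-contained sketch of the genetic-algorithm machinery for a routing problem with time windows; the travel times, windows and penalty weight are invented toy data, not the paper's instance:

```python
import random

travel = [[0, 4, 7, 3],
          [4, 0, 5, 6],
          [7, 5, 0, 2],
          [3, 6, 2, 0]]          # minutes between depot (0) and stops 1-3
windows = {1: (0, 10), 2: (5, 20), 3: (8, 25)}   # stop -> (earliest, latest)
LATE_PENALTY = 100.0

def cost(route):
    """Travel time for depot -> route -> depot, plus penalties for late arrivals."""
    t, prev, total = 0.0, 0, 0.0
    for stop in route:
        t += travel[prev][stop]
        total += travel[prev][stop]
        early, late = windows[stop]
        t = max(t, early)                       # wait if arriving early
        if t > late:
            total += LATE_PENALTY
        prev = stop
    return total + travel[prev][0]

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    mid = a[i:j]
    rest = [s for s in b if s not in mid]
    return rest[:i] + mid + rest[i:]

def mutate(route):
    i, j = random.sample(range(len(route)), 2)
    route[i], route[j] = route[j], route[i]

pop = [random.sample([1, 2, 3], 3) for _ in range(30)]
for _ in range(200):
    pop.sort(key=cost)
    elite = pop[:10]
    children = [crossover(random.choice(elite), random.choice(elite))
                for _ in range(20)]
    for child in children:
        if random.random() < 0.2:
            mutate(child)
    pop = elite + children
best = min(pop, key=cost)
print(best, cost(best))
```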

Keywords: genetic algorithm, intermodal transport system, Isle of Wight, optimization, supermarket

Procedia PDF Downloads 366
368 Analysis of Determinate and Indeterminate Structures: Applications of Non-Economic Structure

Authors: Toral Khalpada, Kanhai Joshi

Abstract:

Most structures constructed in India are statically indeterminate. The purpose of this study is to investigate applications of the structure type that proves to be less economical. The testing practice involves applying different types of loads to both determinate and indeterminate structures, analysing them in the STAAD software package and inspecting them practically on the construction site, identifying the more efficient structure, and diagnosing uses for the less beneficial one. Redundant (indeterminate) structures are found to be more economical. All load types were applied to the beams of both determinate and indeterminate structures, in parallel in the software and practically on site, which confirmed that maximum stresses in statically indeterminate structures are generally lower than those in comparable determinate structures. Indeterminate structures also have higher stiffness, resulting in smaller deformations, so they are economical and preferable to determinate structures for construction. On the other hand, statically determinate structures have the benefit of not developing stresses due to temperature changes. Our study therefore indicates that the indeterminate structure is more beneficial, but the determinate structure also has its uses; for example, it can be used for the construction of two hinged arch bridges, where two supports are sufficient and there is no need for an expensive indeterminate structure. Further investigation is needed to contrive more implementations of the determinate structure.
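
The stress and stiffness differences reported here follow from textbook beam formulas: under a uniformly distributed load w on span L, a simply supported (determinate) beam has maximum moment wL²/8 and midspan deflection 5wL⁴/384EI, while a fixed-ended (indeterminate) beam has wL²/12 and wL⁴/384EI. A short sketch with assumed numbers:

```python
# Compare a determinate (simply supported) and an indeterminate (fixed-ended)
# beam under the same uniformly distributed load, using standard formulas.
w = 10e3        # load intensity, N/m (assumed)
L = 6.0         # span, m (assumed)
E = 200e9       # Young's modulus, Pa (steel)
I = 8.0e-5      # second moment of area, m^4 (assumed)

M_ss = w * L**2 / 8                   # max moment, simply supported
M_ff = w * L**2 / 12                  # max moment, fixed-ended (at supports)
d_ss = 5 * w * L**4 / (384 * E * I)   # midspan deflection, simply supported
d_ff = w * L**4 / (384 * E * I)       # midspan deflection, fixed-ended

print(f"moment:     {M_ss/1e3:.1f} kNm vs {M_ff/1e3:.1f} kNm")
print(f"deflection: {d_ss*1e3:.2f} mm vs {d_ff*1e3:.2f} mm  (5x stiffer)")
```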

Keywords: construction, determinate structure, indeterminate structure, stress

Procedia PDF Downloads 219
367 Spiking Behavior in Memristors with Shared Top Electrode Configuration

Authors: B. Manoj Kumar, C. Malavika, E. S. Kannan

Abstract:

The objective of this study is to investigate the switching behavior of two vertically aligned memristors connected by a shared top electrode, a configuration that deviates significantly from the conventional single oxide layer sandwiched between two electrodes. The device is fabricated by bridging copper electrodes with mechanically exfoliated van der Waals metals (specifically tantalum disulfide and tantalum diselenide). The device demonstrates threshold-switching behavior in its I-V characteristics. When the input voltage signal is ramped with voltages below the threshold, the output current shows spiking behavior, resembling integrate-and-fire dynamics without extra circuitry. We also investigated the self-reset behavior of the device. Using a continuous constant voltage bias, we activated the device to the firing state; after removing the bias and reapplying it shortly afterward, the current returned to its initial state, indicating that the device can spontaneously return to its resting state. The outcome of this investigation offers a fresh perspective on memristor-based device design and an efficient way to build hardware for neuromorphic computing systems.
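
The "integrate and fire without extra circuitry" behaviour can be pictured with the standard leaky integrate-and-fire model, which the device emulates physically; a minimal simulation with invented parameters:

```python
import numpy as np

# Leaky integrate-and-fire: C dV/dt = -V/R + I_in, spike and reset at V_th.
dt, T = 1e-4, 0.1                  # time step and duration, s (assumed)
C, R, V_th = 1e-6, 1e4, 0.5        # capacitance, leak, threshold (invented)
I_in = 8e-5                        # constant input current, A (invented)

V, spikes = 0.0, []
for step in range(int(T / dt)):
    V += dt * (-V / (R * C) + I_in / C)   # integrate the input
    if V >= V_th:                         # fire, then self-reset, as the
        spikes.append(step * dt)          # memristor does after switching
        V = 0.0
print(f"{len(spikes)} spikes; first at {spikes[0] * 1e3:.1f} ms")
```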

Keywords: integrate-and-fire, memristor, spiking behavior, threshold switching

Procedia PDF Downloads 56
366 Methodology for Developing an Intelligent Tutoring System Based on Marzano’s Taxonomy

Authors: Joaquin Navarro Perales, Ana Lidia Franzoni Velázquez, Francisco Cervantes Pérez

Abstract:

The Mexican educational system faces diverse challenges related to the quality and coverage of education. The development of Intelligent Tutoring Systems (ITS) may help to solve some of them by helping teachers customize their classes according to the performance of students in online courses. In this work, we propose the adaptation of a functional ITS based on Bloom's taxonomy, called Sistema de Apoyo Generalizado para la Enseñanza Individualizada (SAGE), to measure students' metacognition and emotional response based on Marzano's taxonomy. The students and the system will share control over progress through the course, so students can improve their metacognitive skills. The system will not allow students to access subjects they have not yet mastered. The interaction between the system and the student will be implemented through natural language processing techniques, thus avoiding the use of sensors to evaluate students' responses. The teacher will evaluate students' knowledge utilization, which is equivalent to the last cognitive level in Marzano's taxonomy.

Keywords: intelligent tutoring systems, student modelling, metacognition, affective computing, natural language processing

Procedia PDF Downloads 190
365 An Accurate Computation of 2D Zernike Moments via Fast Fourier Transform

Authors: Mohammed S. Al-Rawi, J. Bastos, J. Rodriguez

Abstract:

Object detection and object recognition are essential components of every computer vision system. Despite their high computational complexity and other problems related to numerical stability and accuracy, Zernike moments of 2D images (ZMs) have shown resilience when used in object recognition and have been used in various image analysis applications. In this work, we propose a novel method for computing ZMs via the Fast Fourier Transform (FFT). Notably, this is the first algorithm that can generate ZMs up to extremely high orders accurately; e.g., it can be used to generate ZMs for orders up to 1000 or even higher. Furthermore, the proposed method is simpler and faster than other methods due to the availability of FFT software and hardware. The accuracy and numerical stability of ZMs computed via FFT have been confirmed using the orthogonality property. We also introduce normalization of ZMs with the Neumann factor when the image is embedded in a larger grid, and color image reconstruction based on RGB normalization of the reconstructed images. Strikingly, higher-order image reconstruction experiments show that the proposed methods are superior, both quantitatively and subjectively, to the q-recursive method.
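
For reference, the quantity whose computation the paper accelerates is the Zernike moment Z_nm = (n+1)/π ∬ f(x,y) R_n^|m|(ρ) e^(-imθ) dx dy over the unit disk. A direct (slow, non-FFT) sketch of this textbook definition, useful as a correctness check for any fast implementation:

```python
import math
import numpy as np

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_n^|m|(rho); assumes n - |m| even, >= 0."""
    m = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k) /
             (math.factorial(k) * math.factorial((n + m) // 2 - k)
              * math.factorial((n - m) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    return R

def zernike_moment(img, n, m):
    """Direct evaluation of Z_nm over pixels inside the unit disk."""
    N = img.shape[0]
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    kernel = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
    # (n+1)/pi normalisation; (2/N)^2 approximates the pixel area dx dy
    return (n + 1) / np.pi * np.sum(img[mask] * kernel[mask]) * (2 / N) ** 2

img = np.ones((128, 128))               # constant image: only Z_00 is non-zero
print(abs(zernike_moment(img, 0, 0)))   # ~1
print(abs(zernike_moment(img, 2, 0)))   # ~0, by orthogonality
```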

Keywords: Chebyshev polynomials, Fourier transform, fast algorithms, image recognition, pseudo-Zernike moments, Zernike moments

Procedia PDF Downloads 257
364 Realizing Teleportation Using a Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control

Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin

Abstract:

We have designed and performed preliminary tests on a space-time control circuit for realistic teleportation, using a two-level system circuit with a 4-5 cm diameter microstrip. The work begins by calculating the parameters of a circuit that uses an alternating current (AC) at a specified frequency as the input signal. The method causes electrons to move along the circuit perimeter starting at the speed of light, which we found satisfactory on the basis of wave-particle duality. It is able to establish superluminal speed (faster than light) for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. In fact, both black holes and white holes are created from time signals at the beginning, where the electron speed is close to the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. We can therefore apply this method to larger-scale circuits, such as potassium, from which the same method can be applied to form a system to teleport living things. The black hole provides a hibernation environment that allows living things to survive and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, the frequency is increased so that the black holes and white holes cancel each other out into a balanced environment; life can then safely teleport to the destination. The same system must therefore exist at both the origin and the destination, which could form a network. The approach can also be applied to space travel. The designed system will be tested at small scale using a microstrip circuit that we can create in the laboratory on a limited budget and that can be used in both wired and wireless systems.

Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics

Procedia PDF Downloads 70
363 Integrating Distributed Architectures in Highly Modular Reinforcement Learning Libraries

Authors: Albert Bou, Sebastian Dittert, Gianni de Fabritiis

Abstract:

Advancing reinforcement learning (RL) requires tools that are flexible enough to easily prototype new methods while avoiding impractically slow experimental turnaround times. To meet the first requirement, the most popular RL libraries advocate highly modular agent composability, which facilitates experimentation and development. To solve challenging environments within reasonable time frames, scaling RL to large sampling and computing resources has proved a successful strategy; so far, however, this capability has been difficult to combine with modularity. In this work, we explore design choices that allow agent composability both at a local and at a distributed level of execution. We propose a versatile approach that allows the definition of RL agents at different scales through independent, reusable components. We demonstrate experimentally that our design choices allow us to reproduce classical benchmarks, explore multiple distributed architectures, and solve novel and complex environments while giving the user full control over the agent and training scheme definitions. We believe this work can provide useful insights to the next generation of RL libraries.

Keywords: deep reinforcement learning, Python, PyTorch, distributed training, modularity, library

Procedia PDF Downloads 78
362 Image Segmentation: New Methods

Authors: Flaurence Benjamain, Michel Casperance

Abstract:

We present in this paper, first, a comparative study of three mathematical theories for fusing information sources. This study aims to identify the characteristics inherent in possibility theory, belief function theory (Dempster-Shafer theory, DST) and plausible and paradoxical reasoning, in order to establish a selection strategy that allows us to adopt the most appropriate theory for a given fusion problem, taking into account the acquired information and the imperfections that accompany it. Using the new theory of plausible and paradoxical reasoning, also called Dezert-Smarandache theory (DSmT), to fuse multi-source information requires, as a first step, the generation of the composite events, which is in general difficult. We therefore present a new approach to constructing pertinent paradoxical classes based on gray-level histograms, which also allows the cardinality of the hyper-powerset to be reduced. Secondly, we developed a new technique for ordering and coding generalized focal elements; this method is exploited, in particular, to calculate the Dezert-Smarandache cardinality. We then report a classification experiment on a remote sensing image that illustrates the given methods, and we compare the result obtained with the DSmT against those obtained with the DST and possibility theory.
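
As background for the DST alternative discussed above, Dempster's rule combines two mass functions over the power set of a frame of discernment; a minimal sketch with an invented two-class frame and invented mass values:

```python
from itertools import product

# Frame of discernment {a, b}; masses over non-empty subsets, as frozensets.
A, B, AB = frozenset("a"), frozenset("b"), frozenset("ab")
m1 = {A: 0.6, B: 0.1, AB: 0.3}   # evidence from source 1 (invented)
m2 = {A: 0.5, B: 0.3, AB: 0.2}   # evidence from source 2 (invented)

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination, renormalised by the conflict."""
    combined, conflict = {}, 0.0
    for (X, p), (Y, q) in product(m1.items(), m2.items()):
        inter = X & Y
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q          # mass falling on the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

print(dempster(m1, m2))   # fused masses; DSmT instead redistributes the conflict
```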

Keywords: segmentation, image, approach, vision computing

Procedia PDF Downloads 270
361 Understanding the Programming Techniques Using a Complex Case Study to Teach Advanced Object-Oriented Programming

Authors: M. Al-Jepoori, D. Bennett

Abstract:

Teaching Object-Oriented Programming (OOP) as part of a computing-related university degree is a very difficult task; the road to ensuring that students actually learn object-oriented concepts is unclear, as students often find it difficult to understand the concept of objects and their behavior. This problem is especially apparent in advanced programming modules, where design patterns and advanced programming features such as multi-threading and animated GUIs are introduced. Looking at students' performance in their final year of a university course, it was obvious that the level of understanding of OOP varies greatly from one student to another. Students who aim at producing games do very well in the advanced programming module. However, students' assessment results over the last few years were relatively low; for example, in 2016-2017, the first quartile of marks was as low as 24.5 and the third quartile was 63.5. Many students were evidently not confident or competent enough in their programming skills. In this paper, the reasons behind poor performance in advanced OOP modules are investigated, and a suggested practice for teaching OOP based on a complex case study is described and evaluated.

Keywords: complex programming case study, design pattern, learning advanced programming, object oriented programming

Procedia PDF Downloads 215
360 Content-Based Mammogram Retrieval Based on Breast Density Criteria Using Bidimensional Empirical Mode Decomposition

Authors: Sourour Khouaja, Hejer Jlassi, Nadia Feddaoui, Kamel Hamrouni

Abstract:

Most medical images, and especially mammograms, are now stored in large databases, and retrieving a desired image is of great importance for finding the diagnoses of previous similar cases. Our method is implemented to assist radiologists in retrieving mammographic images whose breast density is similar to that seen on the query mammogram. This is a challenge given the importance of the density criterion in cancer prediction and its effect on segmentation issues. We used the Bidimensional Empirical Mode Decomposition (BEMD) to characterize the content of images and the Euclidean distance to measure the similarity between images. Through experiments on the MIAS mammography image database, we confirm that the results are promising. The performance was evaluated using precision and recall curves comparing query and retrieved images, and computing recall-precision proved the effectiveness of applying CBIR to large mammographic image databases. We found a precision of 91.2% at a recall of 86.8%.
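
The retrieval-and-evaluation pipeline described here is straightforward to sketch; below, the feature vectors stand in for BEMD-derived descriptors, and the data and relevance labels are invented:

```python
import numpy as np

def retrieve(query_vec, db_vecs, k=10):
    """Rank database images by Euclidean distance to the query's feature vector."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(dists)[:k]

def precision_recall(retrieved, relevant):
    """Precision and recall for one query, given the set of relevant image ids."""
    hits = len(set(retrieved) & relevant)
    return hits / len(retrieved), hits / len(relevant)

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 16))          # 100 images x 16 BEMD-style features
query = db[0] + rng.normal(scale=0.1, size=16)
relevant = {0, 3, 7}                     # same density class (invented labels)
top = retrieve(query, db, k=10)
print(precision_recall(list(top), relevant))
```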

Keywords: BEMD, breast density, content-based, image retrieval, mammography

Procedia PDF Downloads 227
359 A Strategy for Oil Production Placement Zones Based on Maximum Closeness

Authors: Waldir Roque, Gustavo Oliveira, Moises Santos, Tatiana Simoes

Abstract:

Increasing the recovery factor of an oil reservoir has long been a concern of the oil industry. Usually, production placement zones are defined after analysis of geological and petrophysical parameters, with rock porosity, permeability and oil saturation being of fundamental importance. In this context, the determination of hydraulic flow units (HFUs) is an important step in the process of reservoir characterization, since it may identify specific regions of the reservoir with similar petrophysical and fluid flow properties and, in particular, support techniques for placing production zones that favour the tracing of directional wells. An HFU is defined as a representative volume of the total reservoir rock in which petrophysical and fluid flow properties are internally consistent and predictably distinct from those of other reservoir rocks. Technically, an HFU is characterized as a rock region whose flow zone indicator (FZI) points lie on a straight line of unit slope. The goal of this paper is to provide a trustworthy indication of oil production placement zones for the best-fit HFUs. The FZI cloud of points can be obtained from the reservoir quality index (RQI), a function of effective porosity and permeability. Considering log and core data, the HFUs are identified, and using the discrete rock type (DRT) classification, a set of connected cell clusters can be found; by means of a graph centrality metric, the maximum closeness (MaxC) cell is obtained for each cluster. Taking the MaxC cells as production zones, an extensive analysis based on several oil recovery factor and cumulative oil production simulations was carried out for the SPE Model 2 and the UNISIM-I-D synthetic fields, the latter built from public data available from the actual Namorado Field, Campos Basin, Brazil. The results show that the MaxC cells are technically feasible and very reliable as high-performance production placement zones.
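
In the HFU literature these quantities have standard forms: RQI = 0.0314√(k/φe) with k in mD, φz = φe/(1−φe), FZI = RQI/φz, and the discrete rock type is commonly taken as DRT = round(2 ln FZI + 10.6). A sketch combining them with a closeness-centrality pick on an invented cell cluster:

```python
import numpy as np
import networkx as nx

def drt(perm_md, phi):
    """Discrete rock type from permeability (mD) and effective porosity."""
    rqi = 0.0314 * np.sqrt(perm_md / phi)    # reservoir quality index, um
    fzi = rqi * (1.0 - phi) / phi            # flow zone indicator
    return np.round(2.0 * np.log(fzi) + 10.6).astype(int)

print(drt(np.array([100.0, 120.0]), np.array([0.20, 0.22])))  # invented core data

# Cells of one DRT class connected by grid adjacency; MaxC = best-connected cell.
G = nx.Graph([(0, 1), (1, 2), (1, 3), (2, 3)])
closeness = nx.closeness_centrality(G)
print(max(closeness, key=closeness.get))     # candidate production placement cell
```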

Keywords: hydraulic flow unit, maximum closeness centrality, oil production simulation, production placement zone

Procedia PDF Downloads 321
358 Movement of a Viscoelastic, Vertically Positioned Cylinder in a Liquid with a Free Surface under the Influence of Waves

Authors: T. J. Hasanova, C. N. Imamalieva

Abstract:

The problem of the movement of a rigid cylinder that keeps a vertical position under the influence of travelling surface waves in a liquid is considered, taking into account the perturbation of the incident wave caused by the presence of the moving cylinder. A special decomposition over the incident harmonic wave is used, and the problem is solved by an operational method. To find the original of the solution, and considering that the denominator of the image is a tabulated function, a Volterra integral equation of the first kind is obtained and solved numerically. The movement of the cylinder in the continuous medium under the influence of waves is thus treated operationally: the originals of the required functions would otherwise be sought through the numerical determination of the poles of combinations of transcendental functions and the evaluation of improper integrals. Exploiting the specifics of the problem, the solution is instead constructed by numerically solving the Volterra integral equation of the first kind, which avoids the computational difficulties associated with searching for the complex roots of transcendental functions.
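
The numerical core — a Volterra integral equation of the first kind, ∫₀ᵗ K(t,s) y(s) ds = f(t) — discretises into a lower-triangular system that can be solved by marching forward in time. A minimal sketch on an invented test kernel with a known solution:

```python
import numpy as np

K = lambda t, s: np.exp(t - s)   # test kernel (invented example)
f = lambda t: np.exp(t) - 1.0    # right-hand side; exact solution is y(s) = 1

n, h = 200, 0.01                 # grid: t_i = i*h, i = 1..n
y = np.zeros(n + 1)              # y[0] unused by the rectangle rule
for i in range(1, n + 1):
    t = i * h
    # h * sum_{j=1..i} K(t, s_j) y_j = f(t)  ->  solve for y_i
    acc = sum(K(t, j * h) * y[j] for j in range(1, i)) * h
    y[i] = (f(t) - acc) / (h * K(t, t))
print(y[n])                      # ~1.0, matching the exact solution
```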

Keywords: rigid cylinder, linear interpolation, fluctuations, Volterra integral equation, harmonic wave

Procedia PDF Downloads 316
357 Multifractal Behavior of the Perturbed Cerbelli-Giona Map: Numerical Computation of ω-Measure

Authors: Ibrahim Alsendid, Rob Sturman, Benjamin Sharp

Abstract:

In this paper, we consider a family of 2-dimensional nonlinear area-preserving transformations on the torus. A single parameter η varies between 0 and 1, taking the transformation from a hyperbolic toral automorphism to the "Cerbelli-Giona" map, a system known to exhibit multifractal properties. Here we study the multifractal properties of this family of maps. We apply a box-counting method, defining a grid of boxes Bi(δ), where i is the index and δ is the size of the boxes, to quantify the distribution of the stable and unstable manifolds of the map. When the parameter is in the ranges 0.51 < η < 0.58 and 0.68 < η < 1, the map is ergodic; i.e., the unstable and stable manifolds eventually cover the whole torus, although not with a uniform distribution. Accurate numerical results require a correspondingly accurate construction of the stable and unstable manifolds; here we exploit the piecewise linearity of the map to achieve this, computing the endpoints of the line segments that define the global stable and unstable manifolds. This allows the generalized fractal dimension Dq and the spectrum of dimensions f(α) to be computed accurately. Finally, the intersection of the unstable and stable manifolds of the map is investigated and compared with the distribution of periodic points of the system.
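
The box-counting estimate of the generalized dimension, D_q = lim_{δ→0} (1/(q−1)) log Σᵢ pᵢ^q / log δ, can be sketched directly; here an invented uniform sample stands in for the manifold points, so every D_q should come out ≈ 2:

```python
import numpy as np

def generalized_dimension(points, q, sizes):
    """Estimate D_q (q != 1) by box counting over boxes of side delta."""
    logs, logd = [], []
    for delta in sizes:
        # Assign each point on the torus [0,1)^2 to a box of side delta.
        idx = np.floor(points / delta).astype(int)
        _, counts = np.unique(idx, axis=0, return_counts=True)
        p = counts / counts.sum()
        logs.append(np.log(np.sum(p ** q)) / (q - 1.0))
        logd.append(np.log(delta))
    slope, _ = np.polyfit(logd, logs, 1)   # slope of the scaling law
    return slope

rng = np.random.default_rng(1)
pts = rng.random((100_000, 2))             # uniform cloud on the torus
print(generalized_dimension(pts, q=2.0, sizes=[1/8, 1/16, 1/32, 1/64]))  # ~2
```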

Keywords: discrete-time dynamical systems, fractal geometry, multifractal behaviour of the perturbed map, multifractals of dynamical systems

Procedia PDF Downloads 206
356 Debris Flow Mapping Using Geographical Information System Based Model and Geospatial Data in Middle Himalayas

Authors: Anand Malik

Abstract:

The Himalayas, with their high tectonic activity, pose a great threat to human life and property. Climate change is another driver, multiplying the effect of extreme events on the high-mountain glacial environment: rockfalls, landslides, debris flows, flash floods and snow avalanches. One such extreme event, a cloudburst together with a breach of the moraine-dammed Chorabari Lake, occurred from June 14 to June 17, 2013, triggering flooding of the Saraswati and Mandakini rivers in the Kedarnath valley of the Rudraprayag district of Uttarakhand state, India. As a result, a huge volume of fast-moving water created the catastrophe of the century, with heavy losses of human and animal life and damage to pilgrimage, tourism, agriculture and property. A comprehensive assessment of debris flow hazards therefore requires GIS-based modelling using numerical methods. The aim of the present study is the analysis and mapping of debris flow movements using geospatial data with Flow-R (developed by a team at IGAR, University of Lausanne). The model is based on combined probabilistic and energetic algorithms for assessing the spreading of the flow with maximum runout distances. An ASTER digital elevation model (DEM) with 30 m x 30 m cell size (resolution) is used as the main geospatial input for the runout assessment, while Landsat data are used to analyse land use and land cover change in the study area. The results show that the model can be applied with great accuracy, as it is very useful in delineating debris flow areas. The results are compared with existing landslide/debris flow maps. ArcGIS software is used to prepare runout susceptibility maps, which can be used in debris flow mitigation and future land use planning.

Keywords: debris flow, geospatial data, GIS-based modeling, Flow-R

Procedia PDF Downloads 266
355 Enhancing Healthcare Data Protection and Security

Authors: Joseph Udofia, Isaac Olufadewa

Abstract:

Every day, the size of electronic health record data keeps increasing as new patients visit health practitioners and returning patients fulfil their appointments. As these data grow, so does their susceptibility to cyber-attacks from criminals waiting to exploit them. In the US, the damages from cyberattacks were estimated at $8 billion (2018), $11.5 billion (2019) and $20 billion (2021). These attacks usually involve the exposure of personally identifiable information (PII); health data are considered PII, and their exposure carries significant impact. To this end, an enhancement of health policy and standards in relation to data security, especially among patients and their clinical providers, is critical to ensure ethical practice, confidentiality, and trust in the healthcare system. As clinical accelerators and applications that contain user data come into use, it is expedient to review and revamp policies such as the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA) and the Fast Healthcare Interoperability Resources (FHIR), all aimed at ensuring data protection and security in healthcare. FHIR caters to healthcare data interoperability, as data are shared across different systems, from customers to health insurers and care providers. The astronomical cost of implementation has deterred players in the space from ensuring compliance, leading to susceptibility to data exfiltration and data loss. Though HIPAA homes in on the security of protected health information (PHI) and PCI DSS on the security of payment card data, they intersect in the shared goal of protecting sensitive information in line with industry standards. With advancements in technology and the emergence of new tools, it is necessary to revamp these policies to address their complexity and ambiguity, the cost barrier, and the ever-increasing threats in cyberspace. Healthcare data in the wrong hands is a recipe for disaster, and we must enhance its protection and security to protect the mental health of current and future generations.

Keywords: cloud security, healthcare, cybersecurity, policy and standard

Procedia PDF Downloads 79
354 AI for Efficient Geothermal Exploration and Utilization

Authors: Velimir "monty" Vesselinov, Trais Kliplhuis, Hope Jasperson

Abstract:

Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical and remote sensing data, including satellite imagery, seismic surveys, geochemistry and geology. Machine learning algorithms can identify subtle patterns and relationships within these data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a Science-Informed Machine Learning (SIML) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) from a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) built from simple algebraic mappings to represent complex processes. In contrast, SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physical laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.

Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal

Procedia PDF Downloads 41
353 Evaluating Factors Influencing Information Quality in Large Firms

Authors: B. E. Narkhede, S. K. Mahajan, B. T. Patil, R. D. Raut

Abstract:

Information quality is a major performance measure for the Enterprise Resource Planning (ERP) system of any firm. This study identifies critical success factors for information quality. The effect of critical success factors such as project management, reengineering efforts and interdepartmental communications on information quality is analyzed using a multiple regression model. Quantitative data were collected from respondents in various firms through a structured questionnaire assessing information quality, project management, reengineering efforts and interdepartmental communications. The validity and reliability of the data are ensured using techniques such as factor analysis and the computation of Cronbach's alpha. The study establishes the relative importance of each of the critical success factors. The findings suggest that, among the factors influencing information quality, careful reengineering efforts are the most influential. This paper gives managers and practitioners clear insight into the relative importance of the critical success factors influencing information quality, so that they can formulate a strategy at the beginning of an ERP system implementation.
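
The two statistical steps named here are compact enough to sketch; the questionnaire data below are simulated, and the coefficient values are invented purely for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(120, 1))                 # 120 simulated respondents
items = latent + 0.5 * rng.normal(size=(120, 4))   # 4 items for one construct
print(f"alpha = {cronbach_alpha(items):.2f}")      # > 0.7 usually deemed reliable

# Multiple regression: information quality on the three success factors.
X = rng.normal(size=(120, 3))                      # project mgmt, reengineering, comms
beta_true = np.array([0.2, 0.7, 0.3])              # reengineering dominates (invented)
y = X @ beta_true + 0.3 * rng.normal(size=120)
beta_hat, *_ = np.linalg.lstsq(np.column_stack([np.ones(120), X]), y, rcond=None)
print("coefficients:", np.round(beta_hat[1:], 2))  # relative importance of factors
```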

Keywords: Enterprise Resource Planning (ERP), information systems (IS), multiple regression, information quality

Procedia PDF Downloads 327
352 Tri/Tetra-Block Copolymeric Nanocarriers as a Potential Ocular Delivery System of Lornoxicam: Experimental Design-Based Preparation, in-vitro Characterization and in-vivo Estimation of Transcorneal Permeation

Authors: Alaa Hamed Salama, Rehab Nabil Shamma

Abstract:

Introduction: Polymeric micelles that can deliver drugs to intended sites of the eye have attracted much scientific attention recently. The aim of this study was to develop an aqueous-based formulation of drug-loaded polymeric micelles holding significant promise for ophthalmic drug delivery. The study investigated the synergistic performance of mixed polymeric micelles made of linear and branched poly(ethylene oxide)-poly(propylene oxide) blocks for the more effective encapsulation of lornoxicam (LX) as a hydrophobic model drug. Methods: The co-micellization of 10% binary systems combining different weight ratios of the highly hydrophilic poloxamers Synperonic® PE/P84 and Synperonic® PE/F127 with the hydrophobic poloxamine counterpart (Tetronic® T701) was investigated by means of photon correlation spectroscopy and cloud point measurements. The drug-loaded micelles were tested for their solubilizing capacity towards LX. Results: The results showed a sharp solubility increase from 0.46 mg/ml up to more than 4.34 mg/ml, a roughly 9.4-fold increase. The optimized formulation was selected to achieve maximum drug-solubilizing power and clarity with the lowest possible particle size, and was characterized by ¹H NMR analysis, which revealed complete encapsulation of the drug within the micelles. Further histopathological and confocal laser investigations revealed the non-irritant nature and good corneal penetrating power of the proposed nano-formulation. Conclusion: An LX-loaded polymeric nanomicellar formulation was fabricated, allowing easy application of the drug in the form of clear eye drops that do not cause blurred vision or discomfort, thus achieving high patient compliance.

Keywords: confocal laser scanning microscopy, histopathological studies, lornoxicam, micellar solubilization

Procedia PDF Downloads 445
351 Simulation Model for Evaluating the Impact of Adaptive E-Learning in the Agricultural Sector

Authors: Maria Nabakooza

Abstract:

Efficient agricultural production is very significant in attaining food sufficiency and security in the world. Farmers employ many methods while attending to their crops, from manual to mechanized, and range from subsistence to commercial depending on their motive. This creates a lacuna in the modes of operation in this field, as different farmers take different approaches, and many e-learning courses have been introduced to address the gap. Many e-learning systems use advanced network technologies such as web services and grid computing to promote learning at any time and in any place. However, many existing systems have not considered the applicability of their modules or the tools to be used, nor assessed whether they are the right tools for the right job. A thorough investigation into the applicability of adaptive e-learning in the agricultural sector has not been undertaken, leaving untested the assumption that e-learning is the right tool for boosting productivity in this sector. This study provides insight and thorough analysis as to whether adaptive e-learning is the right tool for boosting agricultural productivity. The simulation adopts a system dynamics modelling approach as a way of examining cause-and-effect relationships. The study will provide teachers with insight into which tools they should adopt in design, and provide students the opportunity to achieve an orderly learning experience through adaptive navigation in e-learning services.

Keywords: agriculture, adaptive, e-learning, technology

Procedia PDF Downloads 247
350 University Short Courses Web Application Using ASP.Net

Authors: Ahmed Hariri

Abstract:

E-learning has become a necessity in advanced education: it eases student-teacher communication and speeds up the process with less time and effort. With the enormous development of distance education, institutions must keep up by building websites that allow students and teachers to take full advantage of advanced education. In this regard, we developed the University Short Courses web application, specially designed for the Faculty of Computing and Information Technology, Rabigh, Kingdom of Saudi Arabia. After an elaborate review of the current state-of-the-art methods of teaching and learning, we found that instructors deliver extra short courses and workshops to enhance students' knowledge, and that this process is completely manual. The prevailing methods of teaching and learning consume a lot of time; in this context, the University Short Courses web application helps make the process easy and user-friendly. The site allows students to view and register online for short courses conducted by instructors, and to see course start dates, finish dates and locations. It also allows instructors to publish course material on the site and see the students enrolled. Finally, students can print their certificates online after finishing a course. ASP.NET, JavaScript and a SQL Server database are used to develop the University Short Courses web application.

Keywords: e-learning, short courses, ASP.NET, SQL Server

Procedia PDF Downloads 128
349 Application of the Bionic Wavelet Transform and Psycho-Acoustic Model for Speech Compression

Authors: Chafik Barnoussi, Mourad Talbi, Adnane Cherif

Abstract:

In this paper, we propose a new speech compression system based on the application of the Bionic Wavelet Transform (BWT) combined with a psychoacoustic model. This compression system is a modified version of a system using MDCT (Modified Discrete Cosine Transform) filter banks of 32 filters each and the psychoacoustic model. The modification consists in replacing the outputs of the MDCT filter banks with the bionic wavelet coefficients obtained by applying the BWT to the speech signal to be compressed. The two methods are evaluated and compared by computing the bit counts before and after compression. They are tested on different speech signals, and the obtained simulation results show that the proposed technique outperforms the second technique in terms of compressed file size. In terms of SNR, PSNR and NRMSE, the output speech signals of the proposed compression system are of acceptable quality, and in terms of PESQ and speech intelligibility, the proposed speech compression technique yields reconstructed speech signals with good quality.
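
The figures of merit used in the comparison are standard and easy to reproduce; a minimal sketch with an invented test signal standing in for the decoded speech:

```python
import numpy as np

def quality_metrics(x, y):
    """SNR, PSNR and NRMSE between an original signal x and a reconstruction y."""
    err = x - y
    snr = 10 * np.log10(np.sum(x**2) / np.sum(err**2))
    psnr = 10 * np.log10(np.max(np.abs(x))**2 / np.mean(err**2))
    nrmse = np.sqrt(np.mean(err**2)) / (x.max() - x.min())
    return snr, psnr, nrmse

fs = 8000
t = np.arange(fs) / fs                        # 1 s of an invented test tone
x = np.sin(2 * np.pi * 440 * t)
y = x + 0.01 * np.random.default_rng(0).normal(size=x.size)  # stand-in output
snr, psnr, nrmse = quality_metrics(x, y)
print(f"SNR {snr:.1f} dB, PSNR {psnr:.1f} dB, NRMSE {nrmse:.4f}")
# compression ratio = bits before / bits after, e.g. with 16-bit samples:
print("ratio:", (x.size * 16) / 20_000)       # 20,000 coded bits is assumed
```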

Keywords: speech compression, bionic wavelet transform, filter banks, psychoacoustic model

Procedia PDF Downloads 374
348 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator relative to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators, such as Loess, is compared to that of estimators that assume additivity, in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean squared error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination variable selection procedure based on the Akaike Information Criterion (AIC) is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of times it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analysed using the proposed backward elimination procedure, and the selected variables are identified.
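
The backward elimination step can be sketched generically; here ordinary least squares stands in for the additive or nonparametric smoother (its AIC plays the same role), on invented data where only two of five covariates matter:

```python
import numpy as np
import statsmodels.api as sm

def backward_eliminate(X, y, names):
    """Drop the variable whose removal lowers AIC the most, until no drop helps."""
    active = list(range(X.shape[1]))
    best_aic = sm.OLS(y, sm.add_constant(X[:, active])).fit().aic
    improved = True
    while improved and len(active) > 1:
        improved = False
        for j in list(active):
            trial = [k for k in active if k != j]
            aic = sm.OLS(y, sm.add_constant(X[:, trial])).fit().aic
            if aic < best_aic:
                best_aic, active, improved = aic, trial, True
                break
    return [names[k] for k in active]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] + X[:, 3] + rng.normal(size=200)   # only x0 and x3 matter
print(backward_eliminate(X, y, [f"x{i}" for i in range(5)]))  # expect ['x0', 'x3']
```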

Keywords: additive model, nonparametric regression, variable selection, Akaike information criterion

Procedia PDF Downloads 261
347 A Crystal Plasticity Approach to Model Dynamic Strain Aging

Authors: Burak Bal, Demircan Canadinc

Abstract:

Dynamic strain aging (DSA), resulting from the reorientation of C-Mn clusters in the cores of dislocations, can provide a strain-hardening mechanism. In addition, negative strain rate sensitivity is observed in Hadfield steel due to DSA. In our study, we incorporated dynamic strain aging into crystal plasticity computations to predict the local instabilities and the corresponding negative strain rate sensitivity. Specifically, the material response of Hadfield steel was obtained from monotonic and strain-rate jump experiments under tensile loading, with the strain rate ranging from 10⁻⁴ to 10⁻¹ s⁻¹. The crystal plasticity modeling of the material response was carried out based on a Voce-type hardening law, and the corresponding Voce hardening parameters were determined. The solute pinning effect of carbon atoms was incorporated into the crystal plasticity simulations at the microscale by computing the shear stress contribution imposed on an arrested dislocation by a carbon atom. After crystal plasticity simulations with the modified hardening rule, which takes the contribution of DSA into account, the model was seen to successfully predict both the role of DSA and the corresponding strain rate sensitivity.
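
A common extended Voce form for the critical resolved shear stress as a function of accumulated shear Γ is τ(Γ) = τ₀ + (τ₁ + θ₁Γ)(1 − exp(−Γθ₀/τ₁)); the paper's exact law and parameters are not given, so the sketch below uses this form with invented values, plus a crude stand-in for the rate-dependent DSA pinning stress that produces negative rate sensitivity:

```python
import numpy as np

def voce_stress(gamma, tau0, tau1, theta0, theta1):
    """Extended Voce law: critical resolved shear stress vs accumulated shear.
    tau0: initial CRSS; tau1, theta0, theta1: saturation and hardening slopes."""
    return tau0 + (tau1 + theta1 * gamma) * (1.0 - np.exp(-gamma * theta0 / tau1))

gamma = np.linspace(0.0, 0.5, 6)             # accumulated shear strain
tau = voce_stress(gamma, tau0=120.0, tau1=180.0,
                  theta0=800.0, theta1=50.0)  # MPa, invented parameters

# Crude DSA add-on: pinning stress grows with dislocation waiting time,
# i.e. decreases with strain rate -- the source of negative rate sensitivity.
rate = 1e-3                                   # applied strain rate, 1/s (assumed)
tau_dsa = 15.0 * (1e-1 / rate) ** 0.1         # invented scaling for illustration
print(np.round(tau + tau_dsa, 1))
```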

Keywords: crystal plasticity, dynamic strain aging, Hadfield steel, negative strain rate sensitivity

Procedia PDF Downloads 256
346 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units

Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani

Abstract:

There are many computationally demanding applications in science and engineering that need efficient algorithms implemented on high-performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention compared to traditional CPU-based hardware and have opened up new improvement venues in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models via both CPU and GPU are discussed and compared. The test problems include an Eulerian model of a two-dimensional incompressible laminar flow and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a serial C++ code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude speed-up is observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited to parallel computation on the GPU, although the Eulerian formulation shows significant speed-up too.

Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation

Procedia PDF Downloads 403
345 Model of a Context-Aware Middleware for Mobile Workers

Authors: Esraa Moustafa, Gaetan Rey, Stephane Lavirotte, Jean-Yves Tigli

Abstract:

With the development of the Internet of Things and the Web of Things, computing becomes more pervasive, invisible and present everywhere. In fact, in our environment we are surrounded by multiple devices that deliver (web) services meeting the needs of users. However, the mobility of these devices, like that of the users, has important repercussions that challenge the software design of these applications, because the variability of the environment cannot be anticipated at design time. It is therefore interesting to dynamically discover the environment and adapt the application during its execution to new contextual conditions. We propose a model of a context-aware middleware that addresses this issue through a monitoring service capable of reasoning, and observation channels capable of calculating the context at runtime. The monitoring service evaluates the pre-defined XQuery predicates in the context manager and uses Prolog to deduce the services needed in response. An independent observation channel for each predicate is then dynamically generated by the monitoring service, depending on the current state of the environment. Each channel sends its result directly to the context manager, which consequently calculates the context based on all the predicates' results while preserving the reactivity of the self-adaptive system.

Keywords: auto-adaptation, context-awareness, middleware, reasoning engine

Procedia PDF Downloads 244
344 Developing a Green Information Technology Model in Australian Higher-Educational Institutions

Authors: Mahnaz Jafari, Parisa Izadpanahi, Francesco Mancini, Muhammad Qureshi

Abstract:

Advancement in Information Technology (IT) has been an intrinsic element of the developments of the 21st century, bringing benefits such as increased economic productivity. However, its widespread application has also been associated with inadvertent negative impacts on society and the environment, necessitating selective interventions to mitigate these impacts. This study responded to this need by developing a Green IT Rating Tool (GIRT) for higher education institutions (HEIs) in Australia to evaluate the sustainability of IT-related practices from environmental, social and economic perspectives, each of which must be considered equally to achieve sustainability. The development of the GIRT was informed by the views of interviewed IT professionals, whose opinions formed the basis of a framework listing Green IT initiatives in order of their perceived importance. This framework formed the base of the GIRT, which identifies Green IT initiatives (such as videoconferencing as a substitute for long-distance travel) and the associated weighting of each practice. The proposed sustainable Green IT model could be integrated into existing IT systems, leading to significant reductions in carbon emissions and e-waste and improvements in energy efficiency. The development of the GIRT and the findings of this study have the potential to inspire other organizations to adopt sustainable IT practices, positively impact the environment, and be used as a reference by IT professionals and decision-makers to evaluate IT-related sustainability practices. The GIRT could also serve as a benchmark for HEIs to compare their performance with other institutions and to track their progress over time. Additionally, the study's results suggest that virtual and cloud-based technologies could reduce e-waste and energy consumption in the higher education sector. Overall, this study highlights the importance of incorporating Green IT practices into the IT systems of HEIs to contribute to a more sustainable future.

Keywords: green information technology, international higher-educational institution, sustainable solutions, environmentally friendly IT systems

Procedia PDF Downloads 72
343 Presenting the Mathematical Model to Determine Retention in the Watersheds

Authors: S. Shamohammadi, L. Razavi

Abstract:

This paper, based on the principal concepts of the SCS-CN model, presents a new mathematical model for the computation of retention potential (S). In the mathematical model, not only are the precipitation-runoff concepts of the SCS-CN model precisely represented in mathematical form, but new concepts, called "maximum retention" and "total retention", are introduced, and the concepts of potential retention capacity, maximum retention, and total retention are separated from each other. In the proposed model, actual retention (F), maximum actual retention (Fmax), total retention (S), maximum retention (Smax), and potential retention (Sp) are clearly defined for the first time, such that Sp is not a variable but a function of the morphological characteristics of the watershed. Indeed, based on the mathematical relation of the conceptual curve of the SCS-CN model, the proposed model provides a new method for the computation of actual retention in the watershed and simply determines runoff from it. In the corresponding relations, in addition to precipitation (P), initial retention (Ia), and the cumulative values of actual retention capacity (F), total retention (S), runoff (Q), antecedent moisture (M), and potential retention (Sp), we introduce Fmax and Fmin, referring to the maximum and minimum actual retention, respectively, as well as ksh, a coefficient that depends on the morphological characteristics of the watershed. Advantages of the modified version over the original model include better precision, higher performance, easier calibration and faster computation.
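
For orientation, the classic SCS-CN relations that the proposed model builds on are Q = (P − Ia)²/(P − Ia + S) for P > Ia (and zero otherwise), with S = 25400/CN − 254 in mm and Ia = λS, λ ≈ 0.2. A minimal sketch of this baseline, with invented storm and curve-number values:

```python
def scs_runoff(P, CN, lam=0.2):
    """Classic SCS-CN direct runoff (mm), the baseline the proposed model refines.
    P: storm rainfall (mm); CN: curve number; lam: initial abstraction ratio."""
    S = 25400.0 / CN - 254.0        # potential maximum retention, mm
    Ia = lam * S                    # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

for P in (20, 50, 100):             # example storms, mm (invented)
    print(P, round(scs_runoff(P, CN=75), 1))
```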

Keywords: model, mathematical, retention, watershed, SCS

Procedia PDF Downloads 451
342 A Character Detection Method for Ancient Yi Books Based on Connected Components and Regressive Character Segmentation

Authors: Xu Han, Shanxiong Chen, Shiyu Zhu, Xiaoyu Lin, Fujia Zhao, Dingwang Wang

Abstract:

Character detection is an important issue in the character recognition of ancient Yi books, as the accuracy of detection directly affects the recognition results. Considering the complex layout, the lack of standard typesetting and the mixed arrangement of images and text, we propose a character detection method for ancient Yi books based on connected components and regressive character segmentation. First, the scanned images of ancient Yi books are preprocessed with non-local means filtering, and a modified locally adaptive threshold binarization algorithm is then used to obtain binary images that separate the foreground from the background. Second, the non-text areas are removed by a method based on connected components. Finally, the single characters in the ancient Yi books are segmented by our method. The experimental results show that the method can effectively separate text areas from non-text areas in ancient Yi books and achieves higher accuracy and recall in character detection experiments, effectively solving the problem of character detection and segmentation in the character recognition of ancient books.
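
The connected-components step for separating text from non-text regions can be sketched with standard tools; the toy page, blob shapes and aspect-ratio rule below are invented stand-ins for the paper's actual criteria:

```python
import numpy as np
from scipy import ndimage

# Toy binary page: 1 = foreground ink, 0 = background.
page = np.zeros((12, 12), dtype=np.uint8)
page[2:5, 2:5] = 1            # a roughly square, character-sized blob
page[7:9, 1:11] = 1           # a wide flat blob, e.g. a border line

labels, n = ndimage.label(page)            # label 4-connected components
for i in range(1, n + 1):
    ys, xs = np.nonzero(labels == i)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    kind = "text candidate" if 0.5 <= w / h <= 2.0 else "non-text"
    print(f"component {i}: {h}x{w} bounding box -> {kind}")
```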

Keywords: CCS concepts, computing methodologies, interest point, salient region detection, image segmentation

Procedia PDF Downloads 127
341 Skills Needed Amongst Secondary School Students for Artificial Intelligence Development in Southeast Nigeria

Authors: Chukwuma Mgboji

Abstract:

Since the advent of artificial intelligence, robots have become a mainstay in developing societies, deployed in education, health, food and other spheres of life. Nigeria, a country in West Africa, has a very low profile in the advancement of artificial intelligence, especially at the grass roots. The benefits of artificial intelligence are not fully maximised and harnessed, and advances in artificial intelligence are perceived as impossible or dismissed as irrelevant. This study seeks to ascertain the skills needed for the development of artificial intelligence among secondary school students in Nigeria. The study focuses on South East Nigeria, with its five states of Imo, Abia, Ebonyi, Anambra and Enugu. The sample size is 1000 students drawn from five government-owned universities offering Computer Science, Computer Education and Electronics Engineering across the five South East states. A survey method was used to solicit responses from respondents. The findings identified mathematical skills, analytical skills, problem-solving skills, computing skills, programming skills and algorithm skills, amongst others. The results of this study will, to the best of the author's knowledge, be highly beneficial to all stakeholders involved in the advancement and development of artificial intelligence.

Keywords: artificial intelligence, secondary school, robotics, skills

Procedia PDF Downloads 143