Search results for: computational methods
14674 Secure Bio Semantic Computing Scheme
Authors: Hiroshi Yamaguchi, Phillip C. Y. Sheu, Ryo Fujita, Shigeo Tsujii
Abstract:
In this paper, the secure BioSemantic Scheme is presented to bridge biological/biomedical research problems and computational solutions via semantic computing. Due to the diversity of problems across research fields, the semantic capability description language (SCDL) plays an important role as a common language and generic form for problem formalization. SCDL is expected to be essential for future semantic and logical computing in the biosemantic field. We show several examples of biomedical problems in this paper. Moreover, in the coming age of cloud computing, security is considered a crucial issue, and we present a practical scheme to cope with this problem.
Keywords: biomedical applications, private information retrieval (PIR), semantic capability description language (SCDL), semantic computing
Procedia PDF Downloads 393
14673 Software Engineering Inspired Cost Estimation for Process Modelling
Authors: Felix Baumann, Aleksandar Milutinovic, Dieter Roller
Abstract:
Up to this point, business process management projects in general, and business process modelling projects in particular, could not rely on a practical and scientifically validated method to estimate cost and effort. The model development phase, especially, is not covered by any cost estimation method or model. Later phases of business process modelling, starting with implementation, are covered by initial solutions discussed in the literature. This article proposes a method to fill this gap by deriving a cost estimation method from available methods in a similar domain, namely software development or software engineering. Software development is, as we show, closely similar to process modelling. After the proposition of this method, different ideas for further analysis and validation of the method are proposed. We derive this method from COCOMO II and Function Point analysis, which are established methods of effort estimation in the domain of software development. For this, we lay out similarities between the software development process and the process of process modelling, which is a phase of the Business Process Management life-cycle.
Keywords: COCOMO II, business process modeling, cost estimation method, BPM COCOMO
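The COCOMO II effort equation the derivation starts from can be stated compactly. A minimal sketch follows, using the published COCOMO II.2000 calibration constants (A = 2.94, B = 0.91); the scale-factor and effort-multiplier values passed in below are illustrative nominal ratings, not figures from this paper:

```python
import math

def cocomo2_effort(ksloc, scale_factors, effort_multipliers,
                   A=2.94, B=0.91):
    """COCOMO II post-architecture model: PM = A * Size^E * prod(EM),
    with E = B + 0.01 * sum(SF). Size in KSLOC, result in person-months."""
    E = B + 0.01 * sum(scale_factors)
    em_product = math.prod(effort_multipliers)
    return A * ksloc ** E * em_product

# Illustrative call: five scale factors near their nominal ratings,
# all effort multipliers at 1.0, for a 10 KSLOC project.
pm = cocomo2_effort(10.0, [3.72, 3.04, 4.24, 4.68, 3.29], [1.0] * 7)
```

A cost model for process modelling derived from this form would replace KSLOC with a model-size measure (e.g., number of activities) and recalibrate the drivers.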
Procedia PDF Downloads 446
14672 The Training Demands of Nursing Assistants on Urinary Incontinence in Nursing Homes: A Mixed Methods Study
Authors: Lulu Liao, Huijing Chen, Yinan Zhao, Hongting Ning, Hui Feng
Abstract:
The urinary tract infection rate is an important index of care quality in nursing homes. The aim of the study is to understand nursing assistants' current knowledge of and attitudes toward urinary incontinence and to explore related stakeholders' viewpoints on urinary incontinence training. This explanatory sequential study used the Knowledge, Attitude, and Practice (KAP) model and adult learning theories as its conceptual framework. The researchers collected data from 509 nursing assistants in sixteen nursing homes in Hunan province, China. A questionnaire survey assessed the nursing assistants' knowledge and attitudes regarding urinary incontinence. On the basis of the quantitative findings, combined with focus groups, training demands were identified that nurse managers should adopt to improve nursing assistants' professional practice in urinary incontinence care. Most nursing assistants had poor knowledge (14.0 ± 4.18) but positive attitudes (35.5 ± 3.19) toward urinary incontinence. There were significant positive correlations between urinary incontinence knowledge and nursing assistants' years of work and educational level, and between urinary incontinence attitude and educational level (p < 0.001). Despite a general awareness of the importance of preventing urinary tract infections, not all nurse managers fully valued training in urinary incontinence compared with daily care training. The nursing assistants required simple educational resources to equip them with skills to address urinary incontinence. The variety of learning methods also highlighted the need for educational materials, and nursing assistants showed a strong interest in online learning. Educational material should be developed to meet the learning needs of nursing assistants and to provide a suitable training method for planned quality improvement in urinary incontinence care.
Keywords: mixed methods, nursing assistants, nursing homes, urinary incontinence
Procedia PDF Downloads 142
14671 Designing Metal Organic Frameworks for Sustainable CO₂ Utilization
Authors: Matthew E. Potter, Daniel J. Stewart, Lindsay M. Armstrong, Pier J. A. Sazio, Robert R. Raja
Abstract:
Rising CO₂ levels in the atmosphere mean that CO₂ is a highly desirable feedstock. This requires specific catalysts to be designed to activate this inert molecule, combining a catalytic site tailored for CO₂ transformations with a support that can readily adsorb CO₂. Metal organic frameworks (MOFs) are regularly used as CO₂ sorbents. The organic nature of the linker molecules connecting the metal nodes offers many post-synthesis modifications to introduce catalytic active sites into the frameworks. However, the metal nodes may be coordinatively unsaturated, allowing them to bind to organic moieties. Imidazoles have shown promise in catalyzing the formation of cyclic carbonates from epoxides with CO₂. Typically, this synthesis route employs toxic reagents such as phosgene, liberating HCl; an alternative route with CO₂ is therefore highly appealing. In this work we design active sites for CO₂ activation by tethering substituted-imidazole organocatalytic species to the available Cr³⁺ metal nodes of a Cr-MIL-101 MOF, for the first time, to create a tailored species for carbon capture utilization applications. Our tailored design strategy, combining a CO₂ sorbent, Cr-MIL-101, with an anchored imidazole, results in a highly active and selective multifunctional catalyst, achieving turnover frequencies of over 750 hr⁻¹. These findings demonstrate the synergy between the MOF framework and imidazoles for CO₂ utilization applications. Further, the effect of substrate variation has been explored, yielding mechanistic insights into this process. Through characterization, we show that the structural and compositional integrity of Cr-MIL-101 is preserved on functionalizing with the imidazoles. Further, we show the binding of the imidazoles to the Cr³⁺ metal nodes. This can be seen through our EPR study, where the distortion of the Cr³⁺ on binding to the imidazole shows the CO₂ binding site is close to the active imidazole.
This has a synergistic effect, improving catalytic performance. We believe the combination of MOF support and organocatalyst opens many possibilities to generate new multifunctional catalysts for CO₂ utilisation. In conclusion, we have validated our design procedure, combining a known CO₂ sorbent with an active imidazole species to create a unique tailored multifunctional catalyst for CO₂ utilization. This species achieves high activity and selectivity for the formation of cyclic carbonates and offers a sustainable alternative to traditional synthesis methods. This work represents a unique design strategy for CO₂ utilization while offering exciting possibilities for further work in characterization, computational modelling, and post-synthesis modification.
Keywords: carbonate, catalysis, MOF, utilisation
Procedia PDF Downloads 181
14670 Empirical Exploration for the Correlation between Class Object-Oriented Connectivity-Based Cohesion and Coupling
Authors: Jehad Al Dallal
Abstract:
Attributes and methods are the basic contents of an object-oriented class. The connectivity among these class members and the relationship between the class and other classes play an important role in determining the quality of an object-oriented system. Class cohesion evaluates the degree of relatedness of class attributes and methods, whereas class coupling refers to the degree to which a class is related to other classes. Researchers have proposed several class cohesion and class coupling measures. However, the correlation between class coupling and class cohesion measures has not been thoroughly studied. In this paper, using classes of three open-source Java systems, we empirically investigate the correlation between several measures of connectivity-based class cohesion and coupling. Four connectivity-based cohesion measures and eight coupling measures are considered in the empirical study. The results show that class connectivity-based cohesion and coupling internal quality attributes are inversely correlated. The strength of the correlation depends highly on the cohesion and coupling measurement approaches.
Keywords: object-oriented class, software quality, class cohesion measure, class coupling measure
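The inverse correlation reported here can be checked with a plain Pearson coefficient over per-class measurements. A minimal sketch follows; the cohesion and coupling scores are made-up illustrative values, not data from the study:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-class scores: as cohesion drops, coupling rises.
cohesion = [0.9, 0.8, 0.7, 0.5, 0.3, 0.2]
coupling = [2, 3, 4, 6, 8, 9]
r = pearson(cohesion, coupling)   # strongly negative for these values
```

A study of this kind would typically also report a rank correlation (Spearman), since cohesion and coupling measures are on different scales.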
Procedia PDF Downloads 326
14669 Alternate Methods to Visualize 2016 U.S. Presidential Election Result
Authors: Hong Beom Hur
Abstract:
Politics in America is polarized. The best illustration of this is the 2016 presidential election result map. States with megacities, like California, New York, Illinois, Virginia, and others, are marked blue, the color of the Democratic party. Inland and southern states, like Texas, Florida, Tennessee, Kansas, and others, are marked red, the color of the Republican party. Such a stark contrast between the two colors, combined with the geolocation of each state and its borderline, conveys one central message: America is divided into two colors, urban Democrats and rural Republicans. This paper seeks to challenge this visualization by pointing out its limitations and searching for alternative ways to visualize the 2016 election result. One such limitation is that the geolocation of each state and the state borderlines prevent the visualization of population density. As a result, the election result map does not convey the fact that Clinton won the popular vote, and it only accentuates the voting patterns of urban and rural states. The paper examines whether an alternative narrative can be observed by factoring the population number into the size of each state and manipulating the state borderline according to this normalization. Yet another alternative narrative may be reached by scaling the size of each state by its number of electoral college votes and visualizing that number. Other alternatives are discussed but not implemented in visualization. Such methods include dividing the land of America into about 120 million cubes, each representing a voter, or into about 300 million cubes representing the whole population.
By exploring these alternative methods of visualizing the politics of the 2016 election map, the public may be able to question whether it is possible to break free from the divide-and-conquer narrative when interpreting the election map and to look at both parties as part of the story of the United States of America.
Keywords: 2016 U.S. presidential election, data visualization, population scale, geo-political
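The normalization idea behind these alternatives is simple: draw each state's area in proportion to its population or electoral votes rather than its land area. A minimal sketch of that scaling step follows; the state names and electoral-vote weights are illustrative (2016 apportionment), and real cartogram construction would additionally deform the borderlines:

```python
def cartogram_scale(regions, total_canvas_area):
    """Allocate drawn area to each region proportionally to its weight
    (population or electoral votes) instead of its land area.
    regions: list of (name, weight) pairs."""
    total_weight = sum(w for _, w in regions)
    return {name: total_canvas_area * w / total_weight
            for name, w in regions}

# Illustrative electoral-vote weights on a 960-unit canvas.
areas = cartogram_scale([("California", 55), ("Texas", 38), ("Wyoming", 3)],
                        total_canvas_area=960.0)
```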
Procedia PDF Downloads 127
14668 Radiation Usage Impact on Anti-Nutritional Compounds (Antitrypsin and Phytic Acid) of Livestock and Poultry Foods
Authors: Mohammad Khosravi, Ali Kiani, Behroz Dastar, Parvin Showrang
Abstract:
A review was carried out on important anti-nutritional compounds of livestock and poultry foods and the effect of radiation usage. Nowadays, with advancements in technology, different methods have been considered for the optimal usage of nutrients in livestock and poultry foods. Steaming, extruding, pelleting, and the use of chemicals are the most common and popular methods in food processing. The use of radiation in food processing research in the livestock and poultry industry is currently highly regarded. Ionizing (electron, gamma) and non-ionizing beams (microwave and infrared) are the most usable rays in animal food processing. In recent research, these beams have been used to remove or reduce anti-nutritional factors and microbial contamination and to improve the digestibility of nutrients in poultry and livestock food. The evidence presented will help researchers to recognize techniques of relevance to them. Simplification of some of these techniques, especially in developing countries, must be addressed so that they can be used more widely.
Keywords: antitrypsin, gamma anti-nutritional components, phytic acid, radiation
Procedia PDF Downloads 350
14667 Study of Magnetic Nanoparticles’ Endocytosis in a Single Cell Level
Authors: Jefunnie Matahum, Yu-Chi Kuo, Chao-Ming Su, Tzong-Rong Ger
Abstract:
Magnetic cell labeling is of great importance in various biomedical applications such as cell separation and cell sorting. While analytical methods for quantifying cell uptake of magnetic nanoparticles (MNPs) are already well established, image analysis at the single cell level still needs more characterization. This study reports an alternative non-destructive method for quantifying single-cell uptake of positively charged MNPs. Magnetophoresis experiments were performed to calculate the number of MNPs in a single cell. The mobility of magnetic cells and the area of intracellular MNPs stained by Prussian blue were quantified by image processing software. ICP-MS experiments were also performed to confirm the internalization of MNPs into cells. Initial results showed that magnetic cells incubated at 100 µg/mL and 50 µg/mL MNP concentrations moved at 18.3 and 16.7 µm/sec, respectively. There is also an increasing trend in the number and area of intracellular MNPs with increasing concentration. These results could be useful in assessing nanoparticle uptake at the single cell level.
Keywords: magnetic nanoparticles, single cell, magnetophoresis, image analysis
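Magnetophoresis estimates of particle load typically balance the total magnetic force on the cell against Stokes drag at the observed terminal velocity. A minimal sketch of that balance follows; the cell radius, viscosity, and per-particle force below are illustrative assumptions, not values reported by the authors:

```python
import math

def mnp_count(velocity, cell_radius, viscosity, force_per_particle):
    """Estimate MNPs per cell by balancing the total magnetic force
    N * f_p against Stokes drag 6*pi*eta*R*v at terminal velocity."""
    drag = 6 * math.pi * viscosity * cell_radius * velocity
    return drag / force_per_particle

# Illustrative values: 18.3 um/s cell velocity (as in the abstract),
# 7.5 um cell radius, water-like viscosity, 10 fN force per particle.
n = mnp_count(velocity=18.3e-6, cell_radius=7.5e-6,
              viscosity=1.0e-3, force_per_particle=1.0e-14)
```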
Procedia PDF Downloads 336
14666 Electromagnetic Wave Propagation Equations in 2D by Finite Difference Method
Authors: N. Fusun Oyman Serteller
Abstract:
In this paper, techniques to solve time-dependent electromagnetic wave propagation equations based on the Finite Difference Method (FDM) are proposed, and the results are compared with the Finite Element Method (FEM) in 2D while discussing some special simulation examples. Here, 2D dynamical wave equations for lossy media, even with a constant source, are discussed to establish symbolic manipulation of wave propagation problems. The main objective of this contribution is to introduce a comparative study of the two numerical methods and to show that both can be applied effectively and efficiently to all types of wave propagation problems, both linear and nonlinear, by using symbolic computation. However, the results show that the FDM is more appropriate for solving nonlinear cases in the symbolic solution. Furthermore, some specific complex-domain examples comparing the electromagnetic wave equations are considered. Calculations are performed with Mathematica software, making some useful contributions to the program and leveraging symbolic evaluations of FEM and FDM.
Keywords: finite difference method, finite element method, linear-nonlinear PDEs, symbolic computation, wave propagation equations
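The core FDM idea for the 2D wave equation is a leapfrog update on a grid. A minimal, loss-free sketch follows (the paper treats lossy media and works symbolically in Mathematica; this numeric version only illustrates the central-difference stencil):

```python
def step_wave_2d(u_prev, u, c, dt, dx):
    """One leapfrog FDM step for u_tt = c^2 (u_xx + u_yy) with
    fixed (Dirichlet) boundaries. u and u_prev are 2D lists holding
    the field at the current and previous time levels."""
    ny, nx = len(u), len(u[0])
    r2 = (c * dt / dx) ** 2          # CFL stability needs c*dt/dx <= 1/sqrt(2)
    u_next = [row[:] for row in u]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            lap = (u[j][i-1] + u[j][i+1] + u[j-1][i] + u[j+1][i]
                   - 4 * u[j][i])   # 5-point discrete Laplacian
            u_next[j][i] = 2 * u[j][i] - u_prev[j][i] + r2 * lap
    return u_next

# Small demo: 5x5 grid, unit pulse at the centre, zero initial velocity.
u0 = [[0.0] * 5 for _ in range(5)]
u0[2][2] = 1.0
u1 = step_wave_2d(u0, u0, c=1.0, dt=0.5, dx=1.0)   # r2 = 0.25
```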
Procedia PDF Downloads 149
14665 Dynamic Log Parsing and Intelligent Anomaly Detection Method Combining Retrieval Augmented Generation and Prompt Engineering
Authors: Liu Linxin
Abstract:
As system complexity increases, log parsing and anomaly detection become more and more important for ensuring system stability. However, traditional methods often face problems of insufficient adaptability and decreasing accuracy when dealing with rapidly changing log contents and unknown domains. To this end, this paper proposes an approach, LogRAG, which combines Retrieval Augmented Generation (RAG) technology with prompt engineering for large language models, applied to log analysis tasks to achieve dynamic parsing of logs and intelligent anomaly detection. By combining real-time information retrieval and prompt optimisation, this study significantly improves the adaptive capability of log analysis and the interpretability of results. Experimental results show that the method performs well on several public datasets, especially in the absence of training data, and significantly outperforms traditional methods. This paper provides a technical path for log parsing and anomaly detection, demonstrating significant theoretical value and application potential.
Keywords: log parsing, anomaly detection, retrieval-augmented generation, prompt engineering, LLMs
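The retrieval step in a RAG-style log pipeline fetches the stored template (or past example) most similar to the incoming line before the LLM is prompted. The paper does not publish its retrieval mechanism; the toy sketch below uses token-level Jaccard similarity, where a real system would use embedding search, and the templates are invented examples:

```python
def retrieve_template(log_line, templates):
    """Toy retrieval step for a RAG-style log parser: return the stored
    template most similar (by token-set Jaccard overlap) to the line."""
    def jaccard(a, b):
        sa, sb = set(a.split()), set(b.split())
        return len(sa & sb) / len(sa | sb)
    return max(templates, key=lambda t: jaccard(log_line, t))

# Hypothetical template store with <...> placeholders for variables.
templates = ["connection to <host> failed after <n> retries",
             "user <id> logged in",
             "disk <dev> usage above threshold"]
best = retrieve_template("connection to db01 failed after 3 retries",
                         templates)
```

The retrieved template (plus similar past incidents) would then be inserted into the LLM prompt to ground parsing and anomaly judgments.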
Procedia PDF Downloads 36
14664 Anomaly Detection with ANN and SVM for Telemedicine Networks
Authors: Edward Guillén, Jeisson Sánchez, Carlos Omar Ramos
Abstract:
In recent years, a wide variety of applications have been developed with Support Vector Machine (SVM) methods and Artificial Neural Networks (ANN). In general, these methods depend on intrusion knowledge databases such as KDD99, ISCX, and CAIDA, among others. New classes of detectors are generated by machine learning techniques, trained and tested on network databases. Thereafter, the detectors are employed to detect anomalies in network communication scenarios according to users' connection behavior. The first detector, based on the training dataset, is deployed in different real-world networks with mobile and non-mobile devices to analyze the performance and accuracy of static detection. The vulnerabilities considered are based on previous work on telemedicine apps developed by the research group. This paper presents the differences in detection results between several network scenarios obtained by applying traditional detectors deployed with artificial neural networks and support vector machines.
Keywords: anomaly detection, back-propagation neural networks, network intrusion detection systems, support vector machines
Procedia PDF Downloads 362
14663 Single Machine Scheduling Problem to Minimize the Number of Tardy Jobs
Authors: Ali Allahverdi, Harun Aydilek, Asiye Aydilek
Abstract:
Minimizing the number of tardy jobs is an important factor to consider when making scheduling decisions, because on-time shipments are vital for lowering costs and increasing customer satisfaction. This paper addresses the single machine scheduling problem with the objective of minimizing the number of tardy jobs. The only known information is the lower and upper bounds for the processing times, together with deterministic job due dates. A dominance relation is established, and an algorithm is proposed. Several heuristics are generated from the proposed algorithm. Computational analysis indicates that the performance of one of the heuristics is very close to the optimal solution, i.e., on average, less than 1.5% from the optimal solution.
Keywords: single machine scheduling, number of tardy jobs, heuristic, lower and upper bounds
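For the deterministic special case of this problem (processing times known exactly), the minimum number of tardy jobs can be found exactly by the classical Moore-Hodgson algorithm, which the interval-data heuristics here generalize. A sketch:

```python
import heapq

def moore_hodgson(jobs):
    """jobs: list of (processing_time, due_date) pairs. Returns the
    minimum number of tardy jobs on a single machine (Moore-Hodgson:
    schedule in EDD order; whenever a job finishes late, discard the
    longest job scheduled so far)."""
    jobs = sorted(jobs, key=lambda j: j[1])   # earliest-due-date order
    longest = []                              # max-heap of p via negation
    t = 0
    tardy = 0
    for p, d in jobs:
        t += p
        heapq.heappush(longest, -p)
        if t > d:                             # completion time exceeds due date:
            t += heapq.heappop(longest)       # drop the longest job (t -= max p)
            tardy += 1
    return tardy

n_tardy = moore_hodgson([(2, 3), (3, 5), (1, 6), (4, 7)])
```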
Procedia PDF Downloads 558
14662 Basic Modal Displacements (BMD) for Optimizing Buildings Subjected to Earthquakes
Authors: Seyed Sadegh Naseralavi, Mohsen Khatibinia
Abstract:
In structural optimization through meta-heuristic algorithms, structures are analyzed many times. For this reason, performing the analyses in a time-saving way is precious. This point is even more pronounced for time-history analyses, which take much time. To this end, peak-picking methods, also known as spectrum analyses, are generally utilized. However, such methods do not have the required accuracy, whether done by the square root of sum of squares (SRSS) or the complete quadratic combination (CQC) rule. This paper presents an efficient technique for evaluating the dynamic responses during the optimization process with high speed and accuracy. In the method, an initial design is first obtained using a static equivalent of the earthquake. Then, the displacements in the modal coordinates are computed. These displacements are herein called basic modal displacements (BMD). For each new design of the structure, the responses can be derived by suitably scaling each of the BMD along the time and amplitude axes and superposing them using the corresponding modal matrices. To illustrate the efficiency of the method, an optimization problem is studied. The results show that the proposed approach is a suitable replacement for conventional time-history and spectrum analyses in such problems.
Keywords: basic modal displacements, earthquake, optimization, spectrum
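The superposition step can be sketched as follows. This is a minimal amplitude-scaling version only (the time-axis rescaling the paper also applies is omitted), and the mode shapes, stored BMD histories, and scale factors below are illustrative:

```python
def superpose_bmd(mode_shapes, bmd_histories, amp_scales):
    """Reconstruct physical displacement histories by scaling each
    stored basic modal displacement (BMD) history and superposing
    through the mode shapes: u_j(t) = sum_i a_i * phi_i[j] * q_i(t)."""
    n_dof = len(mode_shapes[0])
    n_t = len(bmd_histories[0])
    u = [[0.0] * n_t for _ in range(n_dof)]
    for phi, q, a in zip(mode_shapes, bmd_histories, amp_scales):
        for j in range(n_dof):
            for k in range(n_t):
                u[j][k] += a * phi[j] * q[k]
    return u

# Two DOFs, two modes, two time samples; scale mode 2 by 2.0.
u = superpose_bmd([[1.0, 0.0], [0.0, 1.0]],     # mode shapes phi_i
                  [[1.0, 2.0], [3.0, 4.0]],     # stored BMD q_i(t)
                  [1.0, 2.0])                   # amplitude scales a_i
```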
Procedia PDF Downloads 364
14661 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all external forces. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re = 200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature.
The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re = 10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup of about 350 over the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
Procedia PDF Downloads 210
14660 Numerical Study of the Dynamic Behavior of an Air Conditioning with a Multi Confined Swirling Jet
Authors: Mohamed Roudane
Abstract:
The objective of this study is to characterize the dynamic behavior of a multi swirling jet used for air conditioning inside a room. To conduct this study, we designed a facility ensuring proper confinement conditions, in which we placed five air blowing devices with adjustable vanes, providing multiple swirling turbulent jets. The jets were issued in the same direction with the same spacing defined between them. This study concerns the numerical simulation of the dynamic mixing of confined swirling multi-jets, and examines the influence of important parameters of a swirl diffuser system on the dynamic performance characteristics. The CFD investigations are carried out on a hybrid mesh discretizing the computational domain. In this work, the simulations have been performed using the finite volume method and the FLUENT solver, in which the standard k-ε RNG turbulence model was used for the turbulence computations.
Keywords: simulation, dynamic behavior, swirl, turbulent jet
Procedia PDF Downloads 402
14659 Experimental and Numerical Analyses of Tehran Research Reactor
Authors: A. Lashkari, H. Khalafi, H. Khazeminejad, S. Khakshourniya
Abstract:
In this paper, a numerical model is presented and used to analyze a steady-state thermo-hydraulic condition and a reactivity insertion transient in TRR reference cores. The model predictions are compared with experiments and with PARET code results. The model uses the piecewise constant method and the lumped parameter method for the coupled point kinetics and thermal-hydraulics modules, respectively. The advantages of the piecewise constant method are simplicity, efficiency, and accuracy. A main criterion for the applicability range of this model is that the exit coolant temperature remains below the saturation temperature, i.e., no bulk boiling occurs in the core. The calculated values of power and coolant temperature, in the steady state and the positive reactivity insertion scenario, are in good agreement with the experimental values. The model is thus a useful tool for the transient analysis of most research reactors encountered in practice. The main objective of this work is to use simple calculation methods and benchmark them against experimental data. This model can also be used for training purposes.
Keywords: thermal-hydraulic, research reactor, reactivity insertion, numerical modeling
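The point kinetics equations at the core of such a model can be sketched with one delayed-neutron group and reactivity held piecewise constant over each step. This is a crude forward-Euler illustration of the equations, not the paper's piecewise constant solver, and the kinetic parameters below are generic illustrative values, not TRR data:

```python
def point_kinetics(rho, n0, beta=0.0065, lam=0.08, Lambda=4e-5,
                   dt=1e-4, steps=1000):
    """Forward-Euler integration of one-delayed-group point kinetics:
      dn/dt = ((rho - beta)/Lambda) * n + lam * C
      dC/dt = (beta/Lambda) * n - lam * C
    with reactivity rho held piecewise constant over the interval.
    Starts from the equilibrium precursor concentration for n0."""
    n = n0
    C = beta * n0 / (lam * Lambda)     # equilibrium precursor level
    for _ in range(steps):
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n, C = n + dt * dn, C + dt * dC
    return n

n_crit = point_kinetics(rho=0.0, n0=1.0)     # critical: power stays flat
n_super = point_kinetics(rho=0.001, n0=1.0)  # positive insertion: power rises
```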
Procedia PDF Downloads 404
14658 Optimal Design of Friction Dampers for Seismic Retrofit of a Moment Frame
Authors: Hyungoo Kang, Jinkoo Kim
Abstract:
This study investigated the determination of the optimal location and friction force of friction dampers to effectively reduce the seismic response of a reinforced concrete structure designed without considering seismic load. To this end, a genetic algorithm was applied, and the results were compared with those obtained by simplified methods such as distributing dampers based on the story shear or the inter-story drift ratio. The seismic performance of the model structure with optimally positioned friction dampers was evaluated by nonlinear static and dynamic analyses. The analysis results showed that, compared with the system without friction dampers, the maximum roof displacement and the inter-story drift ratio were reduced by about 30% and 40%, respectively. After installation of the dampers, about 70% of the earthquake input energy was dissipated by the dampers, and the energy dissipated in the structural elements was reduced by about 50%. In comparison with the simplified methods of installation, the genetic algorithm provided more efficient solutions for seismic retrofit of the model structure.
Keywords: friction dampers, genetic algorithm, optimal design, RC buildings
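The genetic-algorithm search over damper layouts can be sketched as below. The fitness function here is a toy surrogate (reward covering high-drift stories within a damper budget); in the paper, fitness would come from nonlinear time-history analysis, and the story drifts and budget below are invented for illustration:

```python
import random

def ga_damper_layout(n_stories, drift, budget, pop=30, gens=60, seed=0):
    """Toy genetic algorithm for friction damper placement. A chromosome
    is a 0/1 vector (damper at story i or not); the surrogate fitness
    rewards covering high-drift stories and heavily penalizes exceeding
    the damper budget."""
    rng = random.Random(seed)

    def fitness(ch):
        covered = sum(d for d, g in zip(drift, ch) if g)
        penalty = 10.0 * max(0, sum(ch) - budget)
        return covered - penalty

    population = [[rng.randint(0, 1) for _ in range(n_stories)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_stories)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:               # bit-flip mutation
                i = rng.randrange(n_stories)
                child[i] = 1 - child[i]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Hypothetical 8-story frame with peak inter-story drifts per story.
best = ga_damper_layout(n_stories=8,
                        drift=[0.2, 0.5, 0.9, 1.0, 0.8, 0.5, 0.3, 0.1],
                        budget=3)
```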
Procedia PDF Downloads 247
14657 CFD Simulation of Surge Wave Generated by Flow-Like Landslides
Authors: Liu-Chao Qiu
Abstract:
The damage caused by surge waves generated in water bodies by flow-like landslides can be very high in terms of human lives and economic losses. The complicated phenomena occurring in this highly unsteady process are difficult to model because three interacting phases are involved: air, water, and sediment. The problem is therefore challenging, since the effects of the non-Newtonian rheology of the flow-like landslide, multi-phase flow, and the free surface have to be included in the simulation. In this work, the commercial computational fluid dynamics (CFD) package FLUENT is used to model the surge waves due to flow-like landslides. The comparison between the numerical results and experimental data reported in the literature confirms the accuracy of the method.
Keywords: flow-like landslide, surge wave, VOF, non-Newtonian fluids, multi-phase flows, free surface flow
Procedia PDF Downloads 421
14656 Economized Sensor Data Processing with Vehicle Platooning
Authors: Henry Hexmoor, Kailash Yelasani
Abstract:
We present vehicular platooning as a special case of a crowd-sensing framework, where sensory information shared among a crowd is used for its collective benefit. After offering an abstract policy that governs processes involving a vehicular platoon, we review several common scenarios and components surrounding vehicular platooning. We then present a simulated prototype that illustrates the efficiency of road usage and vehicle travel time derived from platooning. We argue that one of the paramount benefits of platooning, overlooked elsewhere, is the substantial computational savings (i.e., economizing benefits) in the acquisition and processing of sensory data among vehicles sharing the road. The most capable vehicle can share data gathered from its sensors with nearby vehicles grouped into a platoon.
Keywords: cloud network, collaboration, internet of things, social network
Procedia PDF Downloads 198
14655 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was previously determined by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how the products of machine learning models can and should be protected by IP law, and for the purposes of this paper by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be described more easily by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithms, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, such as China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes.
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. The inventor of a patent application must be a natural person or a group of persons according to the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to a part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
Procedia PDF Downloads 112
14654 A Simulation Modeling Approach for Optimization of Storage Space Allocation in Container Terminal
Authors: Gamal Abd El-Nasser A. Said, El-Sayed M. El-Horbaty
Abstract:
Container handling problems at container terminals are NP-hard. This paper presents a discrete-event simulation modeling approach to optimizing the solution of the storage space allocation problem, taking into account the various interrelated container terminal handling activities. The proposed approach is applied to real case-study data from the container terminal at the port of Alexandria. The computational results show the effectiveness of the proposed model for optimizing storage space allocation in a container terminal, where a 54% reduction in container handling time in port is achieved.
Keywords: container terminal, discrete-event simulation, optimization, storage space allocation
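The discrete-event core of such a model can be sketched in a few lines. The arrival process, block layout, travel times, and the two allocation policies below are illustrative assumptions for the sketch, not the study's actual terminal model:

```python
import heapq
import random

# Minimal discrete-event simulation sketch of container storage allocation.
# All parameters (arrival rate, block count, travel/stacking times) are
# illustrative assumptions, not values from the Alexandria case study.

def simulate(policy, n_containers=1000, n_blocks=5, seed=1):
    rng = random.Random(seed)
    travel = [2 + 3 * b for b in range(n_blocks)]  # quay-to-block minutes (assumed)
    free_at = [0.0] * n_blocks                     # time each block is next free
    events, t = [], 0.0
    for cid in range(n_containers):
        t += rng.expovariate(1.0)                  # Poisson arrivals, mean 1 min
        heapq.heappush(events, (t, cid))
    total_handling = 0.0
    while events:
        now, cid = heapq.heappop(events)
        if policy == "random":
            b = rng.randrange(n_blocks)
        else:  # "earliest-finish": greedily minimize this container's finish time
            b = min(range(n_blocks), key=lambda i: max(free_at[i], now) + travel[i])
        start = max(free_at[b], now)
        finish = start + travel[b] + 1.5           # 1.5 min assumed stacking time
        free_at[b] = finish
        total_handling += finish - now
    return total_handling / n_containers           # mean handling time per container

random_policy = simulate("random")
smart_policy = simulate("earliest-finish")
```

Comparing the two policy averages illustrates how a simulation of this kind quantifies handling-time reductions from better allocation.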
Procedia PDF Downloads 332
14653 Design Optimization and Thermoacoustic Analysis of Pulse Tube Cryocooler Components
Authors: K. Aravinth, C. T. Vignesh
Abstract:
The usage of pulse tube cryocoolers has increased significantly, mainly due to the advantage of having no moving parts. The underlying idea of this project is to optimize the design of the pulse tube, regenerator, and resonator in the cryocooler and to analyze the thermoacoustic oscillations with respect to the design parameters. A Computational Fluid Dynamics (CFD) model with time-dependent validation is used to predict performance. The continuity, momentum, and energy equations are solved for the various porous media regions. The effect of changing geometries and orientation on performance will be validated and investigated. The pressure, temperature, and velocity fields in the regenerator and pulse tube are evaluated. The performance results of this optimized design will be compared with the existing pulse tube cryocooler design. The sinusoidal behavior of the cryocooler, reflected in the acoustic streaming patterns in the pulse tube, will also be evaluated.
Keywords: acoustics, cryogenics, design, optimization
Procedia PDF Downloads 180
14652 Trajectory Planning Algorithms for Autonomous Agricultural Vehicles
Authors: Caner Koc, Dilara Gerdan Koc, Mustafa Vatandas
Abstract:
The fundamental components of autonomous agricultural robot design are a working understanding of coordinates, correct construction of the desired route, and sensing of environmental elements. Agricultural robots employ a variety of sensors, hardware, and software to realize these systems. These enable the fully automated driving system of an autonomous vehicle to emulate how a human-driven vehicle would respond to changing environmental conditions. To calculate the vehicle's motion trajectory from sensor data, this automation system typically consists of a sophisticated software architecture based on object detection and driving decisions. In this study, the software architecture of an autonomous agricultural vehicle is compared with trajectory planning techniques.
Keywords: agriculture 5.0, computational intelligence, motion planning, trajectory planning
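A common building block in the trajectory planning literature such a survey covers is grid-based A* search. A minimal sketch follows; the occupancy grid, 4-connectivity, and Manhattan heuristic are illustrative assumptions, not the paper's specific method:

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a 0/1 occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]   # (f, g, node, parent)
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                       # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None  # no path exists

# A toy field with a row of obstacles forcing a detour around the right side.
field = [[0, 0, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 0, 0]]
route = astar(field, (0, 0), (2, 0))
```

The admissible Manhattan heuristic guarantees the returned route is optimal, which is why A* remains a standard baseline against sampling-based planners.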
Procedia PDF Downloads 82
14651 Estimation of Ribb Dam Catchment Sediment Yield and Reservoir Effective Life Using Soil and Water Assessment Tool Model and Empirical Methods
Authors: Getalem E. Haylia
Abstract:
The Ribb dam is one of the irrigation projects in the Upper Blue Nile basin, Ethiopia, intended to irrigate the Fogera plain. Reservoir sedimentation is a major problem because the accumulation of sediments coming from the watershed reduces the useful reservoir capacity. Estimates of sediment yield are needed for studies of reservoir sedimentation and for planning soil and water conservation measures. The objective of this study was to simulate the sediment yield of the Ribb dam catchment using the SWAT model and to estimate the effective life of the Ribb reservoir according to trap efficiency methods. The Ribb dam catchment is found in the north-western highlands of Ethiopia and belongs to the Upper Blue Nile and Lake Tana basins. The Soil and Water Assessment Tool (SWAT) was selected to simulate flow and sediment yield in the catchment. Model sensitivity, calibration, and validation analyses at the Ambo Bahir site were performed with Sequential Uncertainty Fitting (SUFI-2). The flow data at this site were obtained by transforming the Lower Ribb gauge station (2002-2013) flow data using the area ratio method. The sediment load was derived from the sediment concentration yield curve of the Ambo site. Streamflow results showed a Nash-Sutcliffe efficiency coefficient (NSE) of 0.81 and a coefficient of determination (R²) of 0.86 in the calibration period (2004-2010), and 0.74 and 0.77, respectively, in the validation period (2011-2013). For the same periods, the NSE and R² for the sediment load were 0.85 and 0.79 in calibration and 0.83 and 0.78 in validation. The simulated average daily flow rate and sediment yield generated from the Ribb dam watershed were 3.38 m³/s and 1772.96 tons/km²/yr, respectively. The effective life of the Ribb reservoir was estimated using the empirical methods of Brune (1953), Churchill (1948), and Brown (1958) and found to be 30, 38, and 29 years, respectively.
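The NSE and R² statistics quoted above follow standard formulas and can be reproduced directly; the observed/simulated series below are made-up illustrative values, not the study's data:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squared deviations of obs."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - sse / var

def r_squared(obs, sim):
    """Coefficient of determination (squared Pearson correlation)."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)

# Illustrative daily-flow series (m³/s), not the Ribb gauge data.
obs = [3.1, 4.0, 2.5, 5.2, 3.8]
sim = [3.0, 4.2, 2.7, 5.0, 3.9]
```

An NSE of 1 indicates a perfect match, while values above roughly 0.5 are commonly taken as satisfactory for hydrological simulation.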
To conclude, massive sediment comes from the steep-slope agricultural areas, and approximately 98-100% of the incoming annual sediment load is trapped by the Ribb reservoir. In the Ribb catchment, as well as in the reservoir, systematic and thorough consideration of technical, social, environmental, and catchment management practices should be made to lengthen the useful life of the Ribb reservoir.
Keywords: catchment, reservoir effective life, reservoir sedimentation, Ribb, sediment yield, SWAT model
Procedia PDF Downloads 192
14650 Mobile Wireless Investigation Platform
Authors: Dimitar Karastoyanov, Todor Penchev
Abstract:
The paper presents research on a class of autonomous mobile robots intended for work and adaptive perception in unknown and unstructured environments. The objective is robots dedicated to multi-sensory environment perception and exploration, such as taking measurements and samples, discovering and marking objects, and interacting with the environment: transportation and carrying equipment and objects in and out. On this basis, a classification of mobile robots is made according to the mode of locomotion (wheel- or chain-driven, walking, etc.), drive mechanisms used, kinds of sensors, end effectors, area of application, etc. A modular system for the mechanical construction of the mobile robots is proposed. A special PLC based on the ATmega128 processor is developed for robot control. Electronic modules for wireless communication based on a Jennic processor, as well as the specific software, are developed. The methods, means, and algorithms for adaptive environment behavior and task realization are examined. The methods of group control of mobile robots and of detecting and handling suspicious objects are discussed as well.
Keywords: mobile robots, wireless communications, environment investigations, group control, suspicious objects
Procedia PDF Downloads 362
14649 Preserving Heritage in the Face of Natural Disasters: Lessons from the Bam Experience in Iran
Authors: Mohammad Javad Seddighi, Avar Almukhtar
Abstract:
The occurrence of natural disasters, such as floods and earthquakes, can cause significant damage to heritage sites and surrounding areas. In Iran, the city of Bam was devastated by an earthquake in 2003, which had a major impact on the rivers and watercourses around the city. This study aims to investigate the environmental design techniques and sustainable hazard mitigation strategies that can be employed to preserve heritage sites in the face of natural disasters, using the Bam experience as a case study. The research employs a mixed-methods approach, combining qualitative and quantitative data collection and analysis. The study begins with a comprehensive literature review of recent publications on environmental design techniques and sustainable hazard mitigation strategies in heritage conservation. This is followed by a field study of the rivers and watercourses around Bam, including the Adoori River (Talangoo) and other watercourses, to assess current conditions and identify potential hazards. The data collected from the field study are analysed using statistical methods and GIS mapping techniques. The findings reveal the importance of sustainable hazard mitigation strategies and environmental design techniques in preserving heritage sites during natural disasters, and suggest that these techniques can be used to limit the impact of another natural disaster on Bam and the surrounding areas. Specifically, the study recommends the establishment of a comprehensive early warning system, the creation of flood-resistant landscapes, and the use of eco-friendly building materials in the reconstruction of heritage sites. These findings contribute to the current knowledge of sustainable hazard mitigation and environmental design in heritage conservation.
Keywords: natural disasters, heritage conservation, sustainable hazard mitigation, environmental design, landscape architecture, flood management, disaster resilience
Procedia PDF Downloads 92
14648 Models, Methods and Technologies for Protection of Critical Infrastructures from Cyber-Physical Threats
Authors: Ivan Župan
Abstract:
Critical infrastructure is essential for the functioning of a country and is designated for special protection by governments worldwide. Due to the increased use of smart technology in every facet of industry, including critical infrastructure, exposure to malicious cyber-physical attacks has grown in recent years. Proper security measures must be undertaken to defend against cyber-physical threats that can disrupt the normal functioning of critical infrastructure and, consequently, the functioning of the country. This paper provides a review of the scientific literature on models, methods, and technologies used to protect industries from cyber-physical threats. The literature was examined from three aspects. The first aspect, resilience, concerns the robustness of a system's defense against threats, as well as preparation and education about potential future threats. The second aspect concerns security risk management for systems with cyber-physical aspects, and the third investigates available testbed environments for testing developed models on scaled models of vulnerable infrastructure.
Keywords: critical infrastructure, cyber-physical security, smart industry, security methodology, security technology
Procedia PDF Downloads 82
14647 Multiscale Connected Component Labelling and Applications to Scientific Microscopy Image Processing
Authors: Yayun Hsu, Henry Horng-Shing Lu
Abstract:
In this paper, a new method is proposed to extend connected component labeling from processing binary images to multi-scale modeling of images. By using adaptive thresholds over multi-scale attributes, this approach minimizes the possibility of missing important components with weak intensities. In addition, the computational cost of this approach remains similar to that of typical component labeling. The methodology is then applied to grain boundary detection and Drosophila Brainbow neuron segmentation, demonstrating the feasibility of the proposed approach in the analysis of challenging microscopy images for scientific discovery.
Keywords: microscopic image processing, scientific data mining, multi-scale modeling, data mining
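The binary baseline that the paper extends can be sketched as a BFS flood fill over a thresholded image; running it at several thresholds, as below, is our paraphrase of the multi-scale idea, not the authors' exact algorithm:

```python
from collections import deque

def label_components(image, threshold):
    """4-connected component labelling of pixels >= threshold (BFS flood fill)."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and labels[r][c] == 0:
                current += 1                      # start a new component
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# Toy intensity image: two bright blobs and one weak blob (values assumed).
img = [[9, 9, 0, 2],
       [0, 0, 0, 2],
       [7, 0, 0, 0]]
_, n_strong = label_components(img, 5)   # a single high threshold misses the weak blob
_, n_all = label_components(img, 1)      # a lower scale recovers it
```

Comparing the counts at the two thresholds shows why a single fixed threshold loses weak-intensity components, motivating the adaptive multi-scale treatment.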
Procedia PDF Downloads 444
14646 Alternative Method of Determining Seismic Loads on Buildings Without Response Spectrum Application
Authors: Razmik Atabekyan, V. Atabekyan
Abstract:
This article discusses a new alternative method for determining seismic loads on buildings, based on the resistance of structures to vibration deformations. The basic principles for determining seismic loads by the spectral method were developed in the 1940s and 1950s and have since been refined in pursuit of true assessments of seismic effects. The basis of the existing methods is the response spectrum or the dynamicity coefficient β (Russian Federation norms), neither of which is definitively established. To this day there is no single, universal method for determining seismic loads, and attempts to apply the norms of different countries yield significant discrepancies between the results. On the other hand, the results of macroseismic surveys of strong earthquakes contradict the principle of calculation based on accelerations: it is well known that on soft soils destruction increases (mainly due to large displacements) even though accelerations decrease. Obviously, seismic impacts are transmitted to the building through the foundation, yet paradoxically, the existing methods do not even include foundation data, even though the acceleration of a building's foundation can differ several times from that of the ground. During earthquakes, each building has its own peculiarities of behavior, depending on the interaction between the soil and the foundation, their dynamic characteristics, and many other factors. This paper considers a new, alternative method of determining seismic loads on buildings without the use of a response spectrum. The main conclusions are: 1) Seismic loads are determined at the foundation level, which leads to redistribution and reduction of seismic loads on structures. 2) The proposed method is universal and allows seismic loads to be determined without a response spectrum or any implicit coefficients. 3) The method makes it possible to take into account important factors such as the strength characteristics of the soils, the size of the foundation, and the angle of incidence of the seismic ray. 4) Existing methods can adequately determine seismic loads on buildings only for the first mode of vibration under average soil conditions.
Keywords: seismic loads, response spectrum, dynamic characteristics of buildings, momentum
Procedia PDF Downloads 507
14645 Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method
Authors: Z. Mortezaie, H. Hassanpour, S. Asadi Amiri
Abstract:
Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique to boost image contrast and to improve digital images suffering from Gaussian blur. The technique sharpens object edges by adding the scaled high-frequency components of the image back to the original. The quality of the enhanced image is highly dependent on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed for individual pixels of the image, considering the gradient variations. Subjective and objective image quality assessments are used to compare the performance of the proposed method with both the classic and recently developed unsharp masking methods. The experimental results show that the proposed method performs better than the other existing methods.
Keywords: unsharp masking, blur image, sub-region gradient, image enhancement
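A minimal sketch of unsharp masking with a per-pixel gain driven by the local gradient, in the spirit of the adaptive scheme described above. The box-blur kernel, gain range, and forward-difference gradient are illustrative assumptions, not the authors' exact formulation:

```python
def blur3(img):
    """3x3 box blur (low-pass) with edge clamping."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            acc = n = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    y, x = r + dr, c + dc
                    if 0 <= y < rows and 0 <= x < cols:
                        acc += img[y][x]
                        n += 1
            out[r][c] = acc / n
    return out

def adaptive_unsharp(img, g_min=0.2, g_max=2.0):
    rows, cols = len(img), len(img[0])
    low = blur3(img)
    # Local gradient magnitude via simple forward differences (assumed measure).
    grad = [[abs(img[r][min(c + 1, cols - 1)] - img[r][c]) +
             abs(img[min(r + 1, rows - 1)][c] - img[r][c])
             for c in range(cols)] for r in range(rows)]
    gmax = max(max(row) for row in grad) or 1.0
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            gain = g_min + (g_max - g_min) * grad[r][c] / gmax  # per-pixel gain
            high = img[r][c] - low[r][c]                        # high-frequency part
            out[r][c] = min(255.0, max(0.0, img[r][c] + gain * high))
    return out

# A step edge: sharpening should push the dark side darker, the bright side brighter.
img = [[10, 10, 10, 200, 200, 200] for _ in range(4)]
out = adaptive_unsharp(img)
```

Scaling the gain by the normalized gradient concentrates sharpening on edges while leaving flat regions nearly untouched, which is the intuition behind making the gain adaptive.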
Procedia PDF Downloads 216