Search results for: sketch graph of uncertainty principle

1810 Energy Dissipation Characteristics of an Elastomer under Dynamic Condition: A Comprehensive Assessment Using High and Low Frequency Analyser

Authors: K. Anas, M. Selvakumar, Samson David, R. R. Babu, S. Chattopadhyay

Abstract:

Dynamic deformation of a viscoelastic material generates heat; this heat generation is one aspect of energy dissipation. The present work investigates the contribution of factors such as elastomer structure, crosslink type and density, filler networking, reinforcement potential, and temperature to the energy dissipation mechanism. The influence of these elements is investigated using a very high frequency analyser (VHF) and dynamic mechanical analysis (DMA). The VHF instrument follows the transmissibility and vibration isolation principle, whereas DMA works on the principle of dynamic mechanical deformation. VHF analysis of different types of elastomers reveals that an elastomer can act as a transmitter or a damper of energy depending on the applied frequency ratio (ω/ωn). The dynamic modulus (G') of low-damping rubbers such as natural rubber does not vary rapidly with frequency, whereas that of high-damping rubbers such as butyl rubber (IIR) does. VHF analysis also shows that polysulfidic linkages have a higher damping ratio (ζ) than monosulfidic linkages because of their dissipative nature. At comparable crosslink density, monosulfidic linkages show a higher glass transition temperature (Tg) than polysulfidic linkages. The intensity and location of the loss modulus (G'') peak of natural rubber compounds filled with different types of carbon black suggest that segmental relaxation at the glass transition temperature (Tg) is seldom affected by filler particles, but filler networks can influence the crosslink density by absorbing the curatives. Filler network breakdown and reformation during dynamic strain is a thermally activated process; thus, stronger aggregates are highly dissipative. Measurements indicate that in the lower temperature regime, polymeric chain friction is highly dissipative.
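
To make the transmitter-versus-damper behaviour concrete, the sketch below evaluates the textbook single-degree-of-freedom transmissibility formula T(r, ζ) with r = ω/ωn; the damping ratios are illustrative stand-ins for a low-damping (NR-like) and a high-damping (IIR-like) elastomer, not values from the study.

```python
import numpy as np

def transmissibility(r, zeta):
    """Transmissibility of a single-DOF visco-elastic mount.
    r = omega / omega_n (frequency ratio), zeta = damping ratio."""
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return np.sqrt(num / den)

ratios = np.array([0.5, 1.0, np.sqrt(2), 2.0, 4.0])
for zeta in (0.05, 0.5):  # illustrative low-damping (NR-like) vs high-damping (IIR-like)
    print(f"zeta={zeta}:", np.round(transmissibility(ratios, zeta), 2))
# For r < sqrt(2) the mount amplifies (transmits) vibration; for r > sqrt(2)
# it isolates, and a higher zeta then transmits more energy.
```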

Keywords: damping ratio, natural frequency, crosslinking density, segmental motion, surface activity, dissipative, polymeric chain friction

Procedia PDF Downloads 289
1809 Fuzzy Logic in Detecting Children with Behavioral Disorders

Authors: David G. Maxinez, Andrés Ferreyra Ramírez, Liliana Castillo Sánchez, Nancy Adán Mendoza, Carlos Aviles Cruz

Abstract:

This research describes the use of fuzzy logic in the detection, assessment, analysis, and evaluation of children with behavioral disorders. It shows how to acquire and analyze ambiguous, vague, and uncertain data coming from the input variables in order to obtain an accurate assessment result for each of the typologies presented by children with behavior problems. The behavior disorders analyzed in this paper are: hyperactivity (H), attention deficit with hyperactivity (DAH), conduct disorder (TD), and attention deficit (AD).
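
A minimal Mamdani-style sketch of the centroid defuzzification named in the keywords; the membership functions, rule activations, and the 0-10 "severity" scale are hypothetical, not the authors' actual rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Universe of the output variable "severity" (0..10) and two output sets.
x = np.linspace(0, 10, 1001)
low, high = tri(x, 0, 2, 5), tri(x, 5, 8, 10)

# Hypothetical rule activations derived from the input variables:
w_low, w_high = 0.3, 0.7
agg = np.maximum(np.minimum(low, w_low), np.minimum(high, w_high))  # Mamdani max-min

centroid = (agg * x).sum() / agg.sum()  # centroid defuzzification on a uniform grid
print(round(float(centroid), 2))
```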

Keywords: alteration, behavior, centroid, detection, disorders, economic, fuzzy logic, hyperactivity, impulsivity, social

Procedia PDF Downloads 557
1808 A Modified Shannon Entropy Measure for Improved Image Segmentation

Authors: Mohammad A. U. Khan, Omar A. Kittaneh, M. Akbar, Tariq M. Khan, Husam A. Bayoud

Abstract:

The Shannon entropy measure has been widely used for measuring uncertainty. In practical settings, however, the underlying distribution is estimated from a histogram, and the histogram depends on the number of bins used. In this paper, a modification is proposed that makes the histogram-based Shannon entropy consistent. To demonstrate its benefits, two applications in medical image processing are considered. Simulations are carried out to show the superiority of the modified measure for the image segmentation problem. The improvement may be attributed to its robustness to uneven background in images.
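
The paper's specific modification is not given in the abstract; the sketch below only demonstrates the underlying problem, that the histogram plug-in entropy drifts with the bin count. The bin-width correction shown is a standard adjustment assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=5000)

def hist_entropy(x, bins):
    """Histogram plug-in estimate of differential entropy, in nats."""
    counts, edges = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    h_discrete = -(p * np.log(p)).sum()
    return h_discrete + np.log(edges[1] - edges[0])  # bin-width correction

for bins in (8, 32, 128, 512):
    print(bins, round(hist_entropy(data, bins), 3))
# Even with the width correction the estimate drifts with the bin count;
# the true value for N(0,1) is 0.5*log(2*pi*e) ~ 1.419 nats.
```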

Keywords: Shannon entropy, medical image processing, image segmentation, modification

Procedia PDF Downloads 491
1807 Kinetic Rate Comparison of Methane Catalytic Combustion over Palladium Catalysts Impregnated onto γ-Alumina and Bio-Char

Authors: Noor S. Nasri, Eric C. A. Tatt, Usman D. Hamza, Jibril Mohammed, Husna M. Zain

Abstract:

Climate change has become a global environmental issue that may trigger irreversible changes in the environment, with catastrophic consequences for humans, animals, and plants on our planet. Methane, carbon dioxide, and nitrous oxide are greenhouse gases (GHG) and major contributors to global warming. Carbon dioxide is mainly produced and released to the atmosphere by the thermal industrial and power generation sectors. Methane, the dominant component of natural gas, releases significant thermal heat, together with gaseous pollutants, when homogeneous thermal combustion takes place at high temperature. The heterogeneous catalytic combustion (HCC) principle is a promising technology for environmentally friendly energy production: compared with homogeneous high-temperature combustion, it can ensure higher yields with lower pollutant emissions and achieve complete oxidation at moderate temperature. The principle has therefore become a very interesting alternative for total oxidation and for the treatment of pollutant gaseous emissions, especially NOx formation. Noble metals are dispersed on a porous HCC support such as γ-Al2O3, TiO2, or ThO2 to increase the thermal stability of the catalyst and the effectiveness of catalytic combustion. The support material is selected on the basis of surface area, porosity, thermal stability, thermal conductivity, reactivity with reactants or products, chemical stability, catalytic activity, and catalyst life. γ-Al2O3, which has high catalytic activity and a long catalyst life, is commonly used as the support for Pd catalysts at low temperatures. In this work, a sustainable and renewable support material, biomass char derived from agro-industrial waste, was compared with the conventional porous support. The abundant biomass waste generated in palm oil industries is one potential source for converting waste into a sustainable replacement support material for catalysts. The objective of this study was to compare the kinetic rates of methane combustion on palladium (Pd) catalysts with an Al2O3 support and with a bio-char (Bc) support derived from shell kernel. The 2 wt% Pd catalysts were prepared using the incipient wetness impregnation method, and the HCC performance was measured in a tubular quartz reactor with a gas mixture of 3% methane and 97% air. Material characterization was performed using TGA, SEM, and BET surface area. Methane conversion was followed by an online gas analyzer connected to the reactor. The BET surface area of the prepared 2 wt% Pd/Bc is smaller than that of the prepared 2 wt% Pd/Al2O3 due to its low porosity between particles. The order of catalyst activity, based on the kinetic rate of reaction at low temperature, is: prepared 2 wt% Pd/Bc > calcined 2 wt% Pd/Al2O3 > prepared 2 wt% Pd/Al2O3 > calcined 2 wt% Pd/Bc. Hence, the use of agro-industrial biomass waste can enhance the sustainability principle.
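
A hedged illustration of what such a kinetic rate comparison looks like under simple Arrhenius kinetics; the pre-exponential factors and activation energies below are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

R = 8.314  # J/(mol*K)

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T))."""
    return A * np.exp(-Ea / (R * T))

# Hypothetical parameters for the two supports (illustrative only):
catalysts = {"Pd/bio-char": (1.0e6, 75e3), "Pd/gamma-Al2O3": (1.0e6, 82e3)}
for T in (500.0, 600.0):  # K
    for name, (A, Ea) in catalysts.items():
        print(f"T={T:.0f} K  {name}: k = {arrhenius(A, Ea, T):.3e} s^-1")
# A lower apparent activation energy gives the bio-char catalyst the higher
# rate at low temperature, matching the reported activity ordering.
```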

Keywords: catalytic-combustion, environmental, support-bio-char material, sustainable and renewable material

Procedia PDF Downloads 386
1806 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Developing robust and accurate techniques for reconstructing topography in this class of problems would therefore reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography; in practice, however, experimental work in hydraulics can be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, its ability to handle different complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and observation locations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.
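
A minimal sketch of the stochastic Ensemble Kalman Filter analysis loop at the heart of the second stage, assuming a linear surrogate in place of the real shallow-water solver; the dimensions, noise levels, and number of loops are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: unknown bed elevation at m points; a linear surrogate stands
# in for the shallow-water forward model (assumption -- the paper couples a
# real SWE solver).
m, n_obs, n_ens = 20, 8, 50
true_bed = 0.3 * np.sin(np.linspace(0, np.pi, m))
H = rng.normal(size=(n_obs, m)) / np.sqrt(m)   # surrogate observation operator
obs_err = 0.01
y = H @ true_bed + obs_err * rng.normal(size=n_obs)

ensemble = 0.3 * rng.normal(size=(m, n_ens))   # prior bed samples
for _ in range(10):                            # iterative EnKF loop
    Y = H @ ensemble                           # predicted observations
    A = ensemble - ensemble.mean(1, keepdims=True)
    Yp = Y - Y.mean(1, keepdims=True)
    K = (A @ Yp.T) @ np.linalg.inv(Yp @ Yp.T + obs_err**2 * (n_ens - 1) * np.eye(n_obs))
    perturbed = y[:, None] + obs_err * rng.normal(size=(n_obs, n_ens))
    ensemble = ensemble + K @ (perturbed - Y)  # Kalman analysis update

print("bed RMSE:", np.linalg.norm(ensemble.mean(1) - true_bed) / np.sqrt(m))
```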

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 126
1805 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia

Authors: S. Cencek, A. Markun

Abstract:

Noise assessment methods are regularly used in the scope of Environmental Impact Assessments to predict the expected noise emissions of planned projects, and different assessment methods can be used. In recent years, we had the opportunity to collaborate in noise assessment procedures in which the assessments of different laboratories were performed simultaneously, and we identified significant differences between the results of laboratories in Slovenia. We conclude that, although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on methods for predictive noise modelling of planned projects. We analyzed the input data, methods, and results of predictive noise models for two planned industrial projects, each modelled independently by two laboratories. We also analyzed the data, methods, and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the predictive cases, the acoustic models were validated by noise measurements of the surrounding existing sources, but over varying durations; the acoustic characteristics of existing buildings were not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty of 3 to 6 dBA. In contrast, for the two existing noise sources, the possibility of performing validation noise measurements greatly increased the comparability of the modelling results: in both collaborative cases (motorway and railway), the results of different laboratories were comparable, with differences below 5 dBA, the acceptable uncertainty set by the interlaboratory modelling organizer. The lessons learned from the study were: 1) predictive noise calculation using the formulae of the international standard SIST ISO 9613-2:1997 is not, by itself, an adequate method to predict the noise emissions of planned projects, since due to the complexity of the procedure the formulae are not applied strictly; 2) noise measurements are important tools for minimizing the noise assessment errors of planned projects and should, in predictive modelling, be performed at least to validate the acoustic model; 3) national guidelines should be prepared on appropriate data, methods, noise source digitalization, and acoustic model validation in order to unify predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects.
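
For scale, the sketch below keeps only the geometric-divergence term A_div of ISO 9613-2 (a simplification; the full standard adds atmospheric absorption, ground effect, screening, and other corrections) and shows why a 10 dBA disagreement is large.

```python
import math

def point_source_level(Lw, distance_m):
    """Sound pressure level from a point source, keeping only the
    geometric-divergence term A_div of ISO 9613-2 (simplified)."""
    A_div = 20 * math.log10(distance_m) + 11  # dB, reference distance 1 m
    return Lw - A_div

for d in (50, 100, 200):
    print(d, "m:", round(point_source_level(100, d), 1), "dB")
# Doubling the distance lowers the level by ~6 dB, so a modelling
# disagreement of 10 dBA corresponds to roughly a factor-of-3 error in
# effective source distance or a tenfold error in emitted power.
```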

Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines

Procedia PDF Downloads 229
1804 Undirected Endo-Cayley Digraphs of Cyclic Groups of Order Primes

Authors: Chanon Promsakon, Sayan Panma

Abstract:

Let S be a finite semigroup, A a subset of S, and f an endomorphism on S. The endo-Cayley digraph of a semigroup S corresponding to a connecting set A and an endomorphism f, denoted by endo−Cayf(S, A), is the digraph whose vertex set is S in which a vertex u is adjacent to a vertex v if and only if v = f(u)a for some a ∈ A. A digraph D is called undirected if, for every edge uv in D, the edge vu is also in D. We consider the undirectedness of endo-Cayley digraphs of cyclic groups of prime order, Zp. In this work, we investigate conditions on connecting sets and endomorphisms under which endo-Cayley digraphs of cyclic groups of prime order are undirected. Moreover, we give some conditions for an undirected endo-Cayley digraph of a cyclic group of any order.
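
A small sketch of the definition for S = Zp: written additively, the adjacency condition v = f(u)a becomes v = f(u) + a (mod p), and every endomorphism of the additive group Zp is multiplication by a fixed constant. The connecting sets tested are arbitrary examples.

```python
def endo_cayley(p, mult, A):
    """endo-Cay_f(Z_p, A) with f(u) = mult*u mod p."""
    return {(u, (mult * u + a) % p) for u in range(p) for a in A}

def is_undirected(edges):
    """True if for every edge (u, v) the reverse edge (v, u) is present."""
    return all((v, u) in edges for (u, v) in edges)

p = 7
for mult, A in [(1, {1, 6}), (6, {2}), (2, {1})]:
    D = endo_cayley(p, mult, A)
    print(f"f(u) = {mult}u, A = {A}: undirected = {is_undirected(D)}")
# Identity with a symmetric set A and f(u) = -u both yield undirected
# digraphs; f(u) = 2u with A = {1} does not.
```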

Keywords: endo-Cayley graph, undirected digraphs, cyclic groups, endomorphism

Procedia PDF Downloads 343
1803 From Mathematics Project-Based Learning to Commercial Product Using Geometer’s Sketchpad (GSP)

Authors: Krongthong Khairiree

Abstract:

The purpose of this research study is to explore the mathematics project-based learning approach and the use of technology in the context of school mathematics in Thailand. Data were collected from 6 sample secondary schools; the students were 6-14 years old. The research findings show that, through the mathematics project-based learning approach and the use of GSP, students were able to make mathematics learning fun and challenging. In interviews, the students revealed that with GSP they were able to visualize and create graphical representations, which enabled them to develop their mathematical thinking skills, concepts, and understanding. The students had fun creating a variety of graphs of functions that they could not produce by drawing on graph paper. In addition, there is evidence of the students' ability to connect mathematics to real life outside the classroom and to commercial products, such as weaving, broomstick patterning, and ceramics design.

Keywords: mathematics, project-based learning, Geometer’s Sketchpad (GSP), commercial products

Procedia PDF Downloads 331
1802 Determination of Optical Constants of Semiconductor Thin Films by Ellipsometry

Authors: Aïssa Manallah, Mohamed Bouafia

Abstract:

Ellipsometry is an optical method based on the study of the behavior of polarized light. Light reflected from a surface undergoes a change in polarization state that depends on the characteristics of the material (the complex refractive index and the thickness of the different layers constituting the device). The purpose of this work is to determine the optical properties of semiconductor thin films by ellipsometry. This paper describes the experimental aspects concerning the semiconductor samples, the principle of the SE400 ellipsometer, and the results obtained by direct measurement of the ellipsometric parameters and by modelling with appropriate software.
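
A sketch of the standard two-phase (ambient/substrate) inversion of the fundamental ellipsometry equation ρ = tan(Ψ)e^(iΔ), which recovers a pseudo-dielectric function and complex refractive index for a bulk sample; the Ψ/Δ values below are illustrative, not measurements from the SE400.

```python
import numpy as np

def pseudo_dielectric(psi_deg, delta_deg, angle_deg):
    """Two-phase inversion of rho = tan(Psi) * exp(i*Delta)."""
    psi, delta, phi = np.radians([psi_deg, delta_deg, angle_deg])
    rho = np.tan(psi) * np.exp(1j * delta)
    return np.sin(phi) ** 2 * (1 + np.tan(phi) ** 2 * ((1 - rho) / (1 + rho)) ** 2)

eps = pseudo_dielectric(10.7, 179.0, 70.0)  # illustrative Psi/Delta, 70 deg incidence
N = np.sqrt(eps)                            # complex refractive index N = n + ik
# The sign of the imaginary part depends on the e^(+/- i*omega*t) convention.
print(f"<eps> = {eps:.3f},  N = {N:.3f}")
```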

Keywords: ellipsometry, optical constants, semiconductors, thin films

Procedia PDF Downloads 301
1801 Cost-Effective and Optimal Control Analysis for Mitigation Strategy to Chocolate Spot Disease of Faba Bean

Authors: Haileyesus Tessema Alemneh, Abiyu Enyew Molla, Oluwole Daniel Makinde

Abstract:

Introduction: The faba bean is one of the most important crops grown worldwide for humans and animals. Despite its diverse significance, several biotic and abiotic factors limit faba bean output. Many faba bean pathogens have been reported so far, of which the most important yield-limiting disease is chocolate spot disease (Botrytis fabae). The dynamics of disease transmission and the decision-making processes for intervention programs are now better understood through mathematical modeling, and many researchers are currently interested in plant disease modeling. Objective: In this paper, a deterministic mathematical model for chocolate spot disease (CSD) on the faba bean plant, together with an optimal control model, is developed and analyzed to examine the best strategy for controlling CSD. Methodology: Three control interventions, prevention (u1), quarantine (u2), and chemical control (u3), are employed to establish the optimal control model. The optimality system, the characterization of the controls, the adjoint variables, and the Hamiltonian are all derived using Pontryagin's maximum principle. A cost-effective approach is chosen from a set of possible integrated strategies using the incremental cost-effectiveness ratio (ICER). The forward-backward sweep iterative approach is used to run numerical simulations. Results: The Hamiltonian, the optimality system, the characterization of the controls, and the adjoint variables were established. The numerical results demonstrate that each integrated strategy can reduce the disease within the specified period; however, given limited resources, an integrated strategy of prevention and uprooting was found to be the most cost-effective strategy to combat CSD. Conclusion: Stakeholders and policymakers should therefore give attention to this integrated, cost-effective, and environmentally friendly strategy and disseminate the intervention to farmers in order to fight the spread of CSD in the faba bean population and obtain the expected yield from the field.
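
A minimal sketch of the ICER comparison step; the costs and effects below are invented placeholders, not the paper's simulation outputs.

```python
# Hypothetical costs and effects for three integrated strategies
# (illustrative numbers, not the paper's data).
strategies = [  # (name, total cost, infected plants averted)
    ("prevention only",          400.0, 1200.0),
    ("prevention + quarantine",  650.0, 1900.0),
    ("prevention + chemical",   1100.0, 2100.0),
]

strategies.sort(key=lambda s: s[2])      # order by increasing effectiveness
base_cost, base_eff = 0.0, 0.0
for name, cost, eff in strategies:
    icer = (cost - base_cost) / (eff - base_eff)
    print(f"{name}: ICER = {icer:.3f} cost per case averted")
    base_cost, base_eff = cost, eff
# Strategies whose ICER exceeds that of the next-most-effective option are
# dominated and removed before recomputing (standard ICER pruning).
```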

Keywords: CSD, optimal control theory, Pontryagin’s maximum principle, numerical simulation, cost-effectiveness analysis

Procedia PDF Downloads 77
1800 Layouting Phase II of New Priok Using Adaptive Port Planning Frameworks

Authors: Mustarakh Gelfi, Tiedo Vellinga, Poonam Taneja, Delon Hamonangan

Abstract:

The development of New Priok/Kalibaru as an expansion of the old port is being carried out by IPC (Indonesia Port Corporation) together with its subsidiary, the port developer PT Pengembangan Pelabuhan Indonesia. Of the two phases proposed in the master plan, Phase I has taken shape, and Container Terminal 1 has been in operation since 2016. The development was originally planned as Phase I (2013-2018), consisting of 3 container terminals and 2 product terminals, and Phase II (2018-2023), consisting of 4 container terminals. In practice, the master plan has had to change because of major uncertainties that escaped prediction. This study focuses on the design scenario of Phase II (2035 onwards) under future uncertainty; the outcome is a robust design of Phase II of the Kalibaru Terminal that takes future changes into account. Flexibility has to be a major goal in a large infrastructure project like New Priok in order to manage future uncertainty, and the phasing of the project needs to be adapted and reviewed frequently before it becomes irrelevant to future challenges. One framework developed by experts in port planning is Adaptive Port Planning (APP) with scenario-based planning. The idea behind the APP framework is the adaptation that might be needed at any moment in answer to a challenge: it is a continuous procedure that aims to increase the lifespan of waterborne transport infrastructure by increasing flexibility in the planning, contracting, and design phases. Other methods used in this study are brainstorming with the port authority, desk study, interviews, and site visits to the project. The result of the study is expected to give the port authority of Tanjung Priok insight into the future outlook and how it will impact the design of the port, together with guidelines for designing in an uncertain environment. Solutions for flexibility can be divided into: 1) physical solutions, covering the hard infrastructure of the project, commonly involving modularity, standardization, multi-functionality, shorter or longer design lifetimes, reusability, etc.; and 2) non-physical solutions, usually related to the planning processes, decision making, and management of the project. To conclude, the APP framework appears robust enough to deal with the problem of designing Phase II of the New Priok project over such a long period.

Keywords: Indonesia port, port's design, port planning, scenario-based planning

Procedia PDF Downloads 231
1799 Web Application for Evaluating Tests in Distance Learning Systems

Authors: Bogdan Walek, Vladimir Bradac, Radim Farana

Abstract:

Distance learning systems offer useful methods of learning and usually contain a final course test or another form of test. This paper proposes a web application for evaluating tests in distance learning systems using an expert system. The proposed web application is appropriate for didactic tests or tests whose results feed into subsequent follow-up courses. The application works with test questions and uses an expert system and the LFLC tool for test evaluation. After evaluation, the results are visualized and shown to the student.

Keywords: distance learning, test, uncertainty, fuzzy, expert system, student

Procedia PDF Downloads 480
1798 Efficiency, Effectiveness, and Technological Change in Armed Forces: Indonesian Case

Authors: Citra Pertiwi, Muhammad Fikruzzaman Rahawarin

Abstract:

The Government of Indonesia has committed to increasing its national defense budget to 1.5 percent of GDP. However, an increased budget is not necessarily allocated efficiently and effectively. Using Data Envelopment Analysis (DEA), the operational units of the Indonesian Armed Forces are taken as a proxy to measure these two aspects, and a bootstrap technique is used to reduce uncertainty in the estimation. Additionally, technological change is measured as a non-stationary component. Nearly half of the units are estimated to be fully efficient, while less than a third are considered effective. Longer and larger data sets might increase the robustness of the estimation in the future.
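
A sketch of the input-oriented CCR DEA efficiency score solved as a linear program; the five "units" and their inputs/outputs are synthetic stand-ins, not the armed forces data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 units, 2 inputs (budget, personnel), 1 output (readiness score).
X = np.array([[4, 2], [6, 3], [5, 5], [8, 4], [3, 6]], float)  # inputs
Y = np.array([[2], [3], [2], [4], [1]], float)                 # outputs

def ccr_efficiency(k):
    """Input-oriented CCR efficiency of unit k: minimise theta subject to
    X^T lambda <= theta * x_k,  Y^T lambda >= y_k,  lambda >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]          # decision variables: theta, lambda_1..n
    A_ub = np.zeros((m + s, 1 + n))
    A_ub[:m, 0] = -X[k]                  # X^T lambda - theta * x_k <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T                  # -Y^T lambda <= -y_k
    b_ub = np.r_[np.zeros(m), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(len(X)):
    print(f"unit {k}: efficiency = {ccr_efficiency(k):.3f}")
# A bootstrap (Simar-Wilson style) would resample units, recompute the
# scores, and report bias-corrected efficiencies with confidence bands.
```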

Keywords: bootstrap, effectiveness, efficiency, DEA, military, Malmquist, technological change

Procedia PDF Downloads 300
1797 Expression of Micro-RNA268 in Zinc Deficient Rice

Authors: Sobia Shafqat, Saeed Ahmad Qaisrani

Abstract:

MicroRNAs play an essential role in the regulation and development of most processes in eukaryotes because of their prospective role as mediators controlling cell growth and differentiation, and because of the precise positioning of RNA responses in plants under biotic and abiotic factors or stressors. In some cases Zn, by virtue of its heavy metal status, is plainly poisonous to plants; other metals, such as Cd, Hg, and Pb, are extremely toxic. Nevertheless, rice requires such elements for the programming of genes under abiotic stresses resembling Zn stress, under which microRNA268 is markedly induced. The microRNA was overexpressed in transgenic plants, which accumulated large amounts of malondialdehyde, hydrogen peroxide, and an excessive quantity of Zn at the seedling stage. The results indicate that, with respect to rice resilience under Zn stress, microRNAs act as negative regulators, while microRNA268 acts as a modulator under different ecological conditions. A clearer understanding of the role of microRNA268 under stress conditions showed, practically, outcomes that increase plant tolerance under Zn stress, because microRNA-based intervention is a technique for regulating gene expression. The proposed study examined the genetic factors of Zn stress and its toxicity effect on rice plants in District Vehari, Pakistan. The trial was performed with three replications in a randomized complete block design (RCBD), with the blocks subjected to different concentrations of the genetic factors. With overexpression of microRNA268, rice seedling growth was not arrested under Zn deficiency, owing to the accumulation of large amounts of malondialdehyde, hydrogen peroxide, and an excessive quantity of Zn in the seedlings. The results showed that microRNA268 acts as a negative regulator under Zn stress and, in the end, performs a necessary function in the tolerance of rice plants under stress conditions. The resulting working plan yielded high agronomic applicability and good yield outcomes in rice with a specific amount of Zn application.

Keywords: micro RNA268, zinc, rice, agronomic approach

Procedia PDF Downloads 57
1796 Transition Dynamic Analysis of the Urban Disparity in Iran (Case Study: Iran Provinces Center)

Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani

Abstract:

The usual methods of measuring regional inequality cannot reflect internal changes within a country in terms of the displacement of regions between different development groups, and inequality indicators are not effective in demonstrating the dynamics of the distribution of inequality. For this purpose, this paper examines the transition dynamics of urban disparity in Iran during the period 2006-2016, using the CIRD multidimensional index and the stochastic kernel density method. First, 25 indicators are selected in five dimensions: macroeconomic conditions, science and innovation, environmental sustainability, human capital, and public facilities; a two-stage Principal Component Analysis methodology is then developed to create a composite index of inequality. In the second stage, using a nonparametric analytical approach to internal distribution dynamics and the stochastic kernel density method, the convergence hypothesis of the CIRD index of the Iranian province centers is tested, and the long-run equilibrium is shown on the basis of the ergodic density. At this stage, to support the adoption of accurate regional policies, the distribution dynamics and the process of convergence or divergence of the Iranian provinces are also examined for each of the five dimensions. According to the first-stage results, in both 2006 and 2016 the highest level of development belongs to Tehran and the lowest to Zahedan. The results show that the central cities of the country are at the highest level of development, owing to the effects of knowledge spillover from Tehran, while the peripheral cities are at the lowest level; the main reason for this may be the lack of access to markets in the border provinces. According to the second-stage results, which examine the dynamics of regional inequality transmission during 2006-2016, the distribution in the initial year (2006) is not multimodal: the kernel density graph shows the CIRD index of about 70% of the cities lying between -1.1 and -0.1, with the rest of the distribution lying above -0.1. The kernel distribution shows a convergence process, and the graph points to a single peak; a small peak appears at about 3, but the main peak lies at about -0.6. In the final year (2016), there is no mobility in the lower-level groups, but at the higher level the CIRD index of about 45% of the provinces lies at about -0.4. This year clearly exhibits a twin-peak density pattern, which indicates that cities tend to cluster into groups with similar development levels. According to the distribution dynamics results, the Iranian provinces follow a single-peak density pattern in 2006 and a twin-peak density pattern in 2016, at low and moderate levels of the inequality index and also in the development index; the country diverges over the years 2006 to 2016.
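
The mode-counting logic behind the single-peak versus twin-peak diagnosis can be sketched with a Gaussian kernel density estimate; the index values below are synthetic draws, not the CIRD data, and the two-stage PCA construction of the index is not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Synthetic stand-ins for the composite index of 31 province centres:
cird_2006 = rng.normal(-0.6, 0.5, size=31)                                # single-peaked
cird_2016 = np.r_[rng.normal(-0.6, 0.3, 17), rng.normal(0.4, 0.3, 14)]    # twin-peaked

grid = np.linspace(-2, 2, 200)
for year, sample in [("2006", cird_2006), ("2016", cird_2016)]:
    density = gaussian_kde(sample)(grid)
    modes = (np.diff(np.sign(np.diff(density))) < 0).sum()  # count local maxima
    print(f"{year}: estimated modes = {modes}")
# A shift from one mode to two signals club convergence: provinces
# separating into low- and high-development groups.
```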

Keywords: urban disparity, CIRD index, convergence, distribution dynamics, random kernel density

Procedia PDF Downloads 121
1795 The Introduction of the Revolution Einstein’s Relative Energy Equations in Even 2n and Odd 3n Light Dimension Energy States Systems

Authors: Jiradeach Kalayaruan, Tosawat Seetawan

Abstract:

This paper studied the energy of natural systems by looking at the overall image throughout the universe. The energy of natural systems was developed from Einstein's energy equation. The researchers used new ideas called even 2n and odd 3n light dimension energy states systems, developed from Einstein's relativistic energy equation. The major methodology used in this study was the basic principle ideas or beliefs of several religions, such as Buddhism, Christianity, Hinduism, Islam, and Taoism, in order to arrive at new discoveries. The basic beliefs of each religion (Nivara, God, Ether, Atman, and Tao, respectively) greatly influenced the researchers in forming new ideas from philosophy. Since the philosophy of each religion is alive with deep insight into the relative energy of physical nature, it connects these basic beliefs to light dimension energy states systems. Einstein's original relativistic energy equation exhibits only even 2n light dimension energy states systems (for n = 1,…,∞). Developing these ideas further, the researchers multiplied Einstein's original relativistic energy equation by a light dimension energy and obtained a new theoretical-physics idea of odd 3n light dimension energy states systems (for n = 1,…,∞), since, following the basic principle ideas or beliefs of each religion's philosophy, a mediating light dimension energy has to be added to Einstein's original equation. Consequently, in this simple picture, the light dimension energy of Nivara, God, Ether, Atman, and Tao can be touched by light dimension energy. Since light dimension energy is transferred by Nivara, God, Ether, Atman, and Tao, the researchers obtained the new equation of odd 3n light dimension energy states systems. Moreover, the researchers expect to be able to solve overview problems of all light dimension energy in all natural relative energy developed from Einstein's relativistic energy equation. The finding of the study is called 'super nature relative energy' (in odd 3n light dimension energy states systems, for n = 1,…,∞). From the new ideas above, one can sum the even 2n and odd 3n light dimension energy states systems over all natural light dimension energy states systems. In the future, the researchers expect the new idea to be used in theoretical physics, which would be very useful to the development of quantum mechanics, engineering, the medical profession, transportation, communication, scientific inventions, technology, etc.

Keywords: 2n light dimension energy states systems effect, Ether, even 2n light dimension energy states systems, nature relativity, Nivara, odd 3n light dimension energy states systems, perturbation points energy, relax point energy states systems, stress perturbation energy states systems effect, super relative energy

Procedia PDF Downloads 339
1794 Multimedia Design in Tactical Play Learning and Acquisition for Elite Gaelic Football Practitioners

Authors: Michael McMahon

Abstract:

Media (video, animation, graphics) have long been used by athletes, coaches, and sports scientists to analyse and improve performance in technical skills and team tactics. Sports educators are increasingly open to the use of technology to support coach and learner development; however, overreliance is a concern. This paper is part of a larger Ph.D. study looking into these new challenges for sports educators: most notably, how to exploit the deep-learning potential of digital media among expert learners, how to instruct sports educators to create effective media content that fosters deep learning, and how to make the process manageable and cost-effective. Central to the study is Richard Mayer's Cognitive Theory of Multimedia Learning, which proposes twelve principles that shape the design and organization of multimedia presentations to improve learning and reduce cognitive load. For example, the prior-knowledge principle highlights different learning outcomes for novice and non-novice learners, respectively; little research, however, is available to support this principle in modified domains (e.g., sports tactics and strategy). As a foundation for further research, this paper compares and contrasts a range of contemporary multimedia sports coaching content and assesses how it performs as a learning tool for strategic and tactical play acquisition among elite sports practitioners. The stress tests applied are guided by Mayer's twelve multimedia learning principles. The focus is on elite athletes and whether current coaching digital media content fosters improved sports learning among this cohort. The sport of Gaelic football was selected because it has high strategic and tactical play content, a wide range of practitioner skill levels (novice to elite), and a significant volume of multimedia coaching content available for analysis. It is hoped that the resulting data will help identify and inform future instructional content design and delivery for sports practitioners and help promote design practices optimal for different levels of expertise.

Keywords: multimedia learning, e-learning, design for learning, ICT

Procedia PDF Downloads 99
1793 An Optimization Model for Maximum Clique Problem Based on Semidefinite Programming

Authors: Derkaoui Orkia, Lehireche Ahmed

Abstract:

This article explores the potential of a powerful optimization technique, semidefinite programming, for solving NP-hard problems. This approach provides tight relaxations of combinatorial and quadratic problems. In this work, we solve the maximum clique problem using such a relaxation. The clique problem is the computational problem of finding cliques in a graph, and it is widely acknowledged for its many applications to real-world problems. The numerical results show that a maximum clique can be found for the tested instances in polynomial time, using an algorithm based on semidefinite programming: we implement a primal-dual interior point algorithm to solve the semidefinite relaxation of the problem, and this relaxation can be solved in polynomial time.
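
One standard SDP relaxation of the clique problem is the Lovász theta function of the complement graph, which upper-bounds the clique number; the abstract does not specify the authors' exact relaxation, so this is an assumed illustrative choice, solved here with an off-the-shelf modeller (cvxpy) rather than a hand-written interior point method.

```python
import cvxpy as cp
import networkx as nx

# 5-cycle plus a chord: the maximum clique {0, 1, 2} has size 3.
G = nx.cycle_graph(5)
G.add_edge(0, 2)
n = G.number_of_nodes()

# Lovasz theta of the complement upper-bounds the clique number:
# maximise sum(X) s.t. trace(X) = 1, X_ij = 0 for every NON-edge of G
# (i.e. every edge of the complement), X positive semidefinite.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]
for i in range(n):
    for j in range(i + 1, n):
        if not G.has_edge(i, j):
            constraints.append(X[i, j] == 0)

prob = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
prob.solve()  # any installed SDP-capable solver (e.g. SCS, Clarabel)
print("SDP upper bound on the clique number:", round(prob.value, 3))  # ~3.0
```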

Keywords: semidefinite programming, maximum clique problem, primal-dual interior point method, relaxation

Procedia PDF Downloads 216
1792 Post-Quantum Resistant Edge Authentication in Large Scale Industrial Internet of Things Environments Using Aggregated Local Knowledge and Consistent Triangulation

Authors: C. P. Autry, A. W. Roscoe, Mykhailo Magal

Abstract:

We discuss the theoretical model underlying 2BPA (two-band peer authentication), a practical alternative to conventional authentication of entities and data in IoT. In essence, this involves assembling a virtual map of authentication assets in the network, typically leading to many paths of confirmation between any pair of entities. This map is continuously updated, confirmed, and evaluated. The value of authentication along multiple disjoint paths becomes very clear, and we require analogues of triangulation to extend authentication along extended paths and deliver it along all possible paths. We discover that if an attacker wants to make an honest node falsely believe she has authenticated another, then the length of the authentication paths is of little importance. This is because optimal attack strategies correspond to minimal cuts in the authentication graph and do not contain multiple edges on the same path. The authentication provided by disjoint paths normally is additive (in entropy).
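
The min-cut/disjoint-paths correspondence (Menger's theorem) invoked above can be checked directly on a toy authentication graph; the node names and edges are illustrative.

```python
import networkx as nx

# Toy authentication graph: edges are prior pairwise authentications.
G = nx.Graph([("A", "B"), ("B", "D"), ("A", "C"), ("C", "D"),
              ("A", "E"), ("E", "F"), ("F", "D"), ("B", "C")])

paths = list(nx.edge_disjoint_paths(G, "A", "D"))
cut = nx.minimum_edge_cut(G, "A", "D")
print("edge-disjoint authentication paths:", paths)
print("edges an attacker must compromise:", len(cut))
# By Menger's theorem the two numbers coincide: the attacker's optimal
# strategy is a minimum cut, independent of how long each path is.
```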

Keywords: authentication, edge computing, industrial IoT, post-quantum resistance

Procedia PDF Downloads 192
1791 Consensus-Oriented Analysis Model for Knowledge Management Failure Evaluation in Uncertain Environment

Authors: Amir Ghasem Norouzi, Mahdi Zowghi

Abstract:

This study proposes a framework based on fuzzy T-norms and T-conorms, a novel operator, and a multi-expert approach to help organizations build awareness of the factors critically influencing the success of knowledge management (KM) implementation and to analyze the failure of knowledge management. The study addresses the complex uncertainty inherent in knowledge management implementing capability (KMIC), and fuzzy logic is used for this reason. The contribution of the paper is demonstrated with an empirical evaluation study in a nonprofit educational organization.
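
A small sketch of how T-norms and T-conorms bracket multi-expert opinions; the membership degrees and the factor name are illustrative, and the paper's novel consensus-oriented average operator is not reproduced here.

```python
import numpy as np

# Membership degrees assigned by four experts to one hypothetical KM failure
# factor, e.g. "insufficient top-management support" (illustrative values).
opinions = np.array([0.6, 0.8, 0.7, 0.5])

t_norm_min = opinions.min()                # Goedel t-norm (pessimistic AND)
t_norm_prod = opinions.prod()              # product t-norm
t_conorm_max = opinions.max()              # max t-conorm (optimistic OR)
t_conorm_prob = 1 - (1 - opinions).prod()  # probabilistic sum

print(t_norm_min, round(t_norm_prod, 3), t_conorm_max, round(t_conorm_prob, 3))
# Any consensus-oriented average must land between the chosen t-norm and
# t-conorm, which bracket the experts' agreement from below and above.
```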

Keywords: fuzzy logic, knowledge management, multi expert analysis, consensus oriented average operator

Procedia PDF Downloads 621
1790 Revolutionary Solutions for Modeling and Visualization of Complex Software Systems

Authors: Jay Xiong, Li Lin

Abstract:

Existing software modeling and visualization approaches using UML are outdated. They are outcomes of reductionism and of the superposition principle that the whole of a system is the sum of its parts, so that all tasks of software modeling and visualization are performed linearly, partially, and locally. This paper introduces revolutionary solutions for modeling and visualizing complex software systems, which make such systems much easier to understand, test, and maintain. The solutions are based on complexity science and offer holistic, automatic, dynamic, virtual, and executable approaches about a thousand times more efficient than the traditional ones.

Keywords: complex systems, software maintenance, software modeling, software visualization

Procedia PDF Downloads 394
1789 Impacts of Artificial Intelligence on the Doctor-Patient Relationship: Ethical Principles, Informed Consent and Medical Obligation

Authors: Rafaella Nogaroli

Abstract:

Hypothetical cases of AI algorithms supporting clinical decisions are presented in order to discuss the importance of doctors respecting AI ethical principles. The principles of transparency and explanation have an impact on the new model of patient consent and on the understanding of qualified information. Moreover, the principle of human control of technology (AI as a tool) should guide the physician's activity; otherwise, the physician breaches the patient's legitimate expectation of a specific result, with a consequent transformation of the nature of the medical obligation.

Keywords: medical law, artificial intelligence, ethical principles, patient's informed consent, medical obligations

Procedia PDF Downloads 93
1788 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem

Authors: Y. Wang

Abstract:

The traveling salesman problem (TSP) is NP-hard in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those working on complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP using frequency graphs computed with frequency quadrilaterals; the algorithm is enhanced by adjusting two of its parameters. Its computation time is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most of the Euclidean instances tested. Moreover, the maximum and minimum vertex degrees in the sparse graphs do not differ much. Thus, the computation time of methods resolving the TSP on these sparse graphs will be greatly reduced.
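
A simplified sketch of the frequency idea: score each edge by how often it lies on the shortest of the three Hamiltonian tours of randomly sampled quadrilaterals, then keep the highest-frequency edges. The paper's actual frequency computation and parameter adjustment are more refined than this stand-in.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
pts = rng.random((30, 2))                      # random Euclidean TSP instance
n = len(pts)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

freq = np.zeros((n, n))
for _ in range(4000):                          # sample random quadrilaterals
    a, b, c, d = rng.choice(n, 4, replace=False)
    # the three Hamiltonian tours of a quadrilateral and their lengths
    tours = [(a, b, c, d), (a, b, d, c), (a, c, b, d)]
    lengths = [dist[t[0], t[1]] + dist[t[1], t[2]] + dist[t[2], t[3]] + dist[t[3], t[0]]
               for t in tours]
    t = tours[int(np.argmin(lengths))]         # keep edges of the shortest tour
    for u, v in zip(t, t[1:] + t[:1]):
        freq[u, v] += 1
        freq[v, u] += 1

keep = 5 * n                                   # target: fewer than ~5n edges
edges = sorted(combinations(range(n), 2), key=lambda e: -freq[e])[:keep]
print(f"sparse graph: {len(edges)} of {n * (n - 1) // 2} edges retained")
```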

Keywords: frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem

Procedia PDF Downloads 227
1787 Scattering Operator and Spectral Clustering for Ultrasound Images: Application on Deep Venous Thrombi

Authors: Thibaud Berthomier, Ali Mansour, Luc Bressollette, Frédéric Le Roy, Dominique Mottier, Léo Fréchier, Barthélémy Hermenault

Abstract:

Deep venous thrombosis (DVT) occurs when a thrombus forms within a deep vein, most often in the legs. The disease can be deadly if part or all of the thrombus reaches the lung and causes a pulmonary embolism (PE). This disorder, often asymptomatic, has multifactorial causes: immobilization, surgery, pregnancy, age, cancers, and genetic variations. Our project aims to relate the thrombus epidemiology (origins, patient predispositions, PE) to its structure using ultrasound images. Ultrasonography and elastography images were collected using a Toshiba Aplio 500 at Brest Hospital. This manuscript compares two classification approaches: spectral clustering and the scattering operator. The former is based on graph and matrix theories, while the latter cascades wavelet convolutions with nonlinear modulus and averaging operators.
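
A minimal spectral clustering sketch on stand-in feature vectors; in the study these would be scattering coefficients of the ultrasound/elastography images, so the synthetic features below are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Synthetic stand-ins for feature vectors of two thrombus texture types:
group_a = rng.normal(0.0, 0.4, size=(40, 16))
group_b = rng.normal(2.0, 0.4, size=(40, 16))
features = np.vstack([group_a, group_b])

model = SpectralClustering(n_clusters=2, affinity="rbf", random_state=0)
labels = model.fit_predict(features)
print("cluster sizes:", np.bincount(labels))  # expect two groups of 40
```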

Keywords: deep venous thrombosis, ultrasonography, elastography, scattering operator, wavelet, spectral clustering

Procedia PDF Downloads 473
1786 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, and György Ligeti. The heterophonic syntax has a history of growth, meaning a succession of different concepts and writing techniques, and the trajectory of its settlement does not necessarily follow chronology: there are highly complex primary stages and advanced stages that return to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate in the (unintentional) differences or 'deviations' from the state of unison, through a variety of ornaments, melismas, imitations, elongations, and abbreviations, all within a flexible, non-periodic, immeasurable rhythmic framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. The explanation is simple if we consider the causal relationship between the elements of the sound vocabulary (in this case, modalism) and the typologies of vertical organization appropriate to it. Therefore, extending the 'classic' pathway of writing typologies (monody, polyphony, homophony), heterophony, applied equally to structures of modal, serial, or synthetic vocabulary, necessarily requires a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of building a new musical ontology, the composer Ştefan Niculescu experimented, along with the mathematical organization of heterophony according to his own original methods, with the possibility of extrapolating this phenomenon to the macrostructural plane, thus reaching the unique form of 'synchrony'. Founded on the coincidentia oppositorum principle (involving the 'one-multiple' binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm articulating two sound states: 1) the monovocality state (principle of identity) and 2) the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism with macrotemporal amplitude, a strategy the composer cultivated throughout practically his entire creative output (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, and Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu, Symphony II, Opus dacicum, in which the form of (heterophony-)synchrony acquires monumental symphonic features, an emblematic case of the level of complexity achieved by this type of vertical syntax in twentieth-century music.

Keywords: heterophony, modalism, serialism, synchrony, syntax

Procedia PDF Downloads 336
1785 Citation Analysis of New Zealand Court Decisions

Authors: Tobias Milz, L. Macpherson, Varvara Vetrova

Abstract:

The law is a fundamental pillar of human societies: it shapes, controls, and governs how humans conduct business, behave, and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science, and AI are creating opportunities to support the practice, research, and study of this pervasive domain. It is therefore not surprising that investment in supporting technologies for the legal industry (also known as "legal tech" or "law tech") has increased over the last decade. A sub-discipline of particular appeal is assisted legal research: supporting law researchers and practitioners in retrieving information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the nineteenth century is legal citation indexing, which, among other use cases, provides an effective means of discovering precedent cases. Nowadays, computer-assisted network analysis tools allow new and more efficient ways to reveal the "hidden" information conveyed by citation behavior. Unfortunately, openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze, and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts across all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers, and an OCR pipeline to collect court decisions from openly available sources such as NZLII and convert them into uniform, machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions within the decision text. The data was then imported into a graph database (Neo4j), with the courts and their respective cases represented as nodes and the extracted citations as links; additional links between the courts of connected cases were added to indicate indirect citations between courts. As a graph database, Neo4j allows efficient querying and the use of network algorithms such as PageRank to reveal the most influential (most cited) courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates possible scale-free behavior of the network; this is in line with findings for the respective citation networks of the U.S. Supreme Court, Austria, and Germany. The authors provide the database as an openly available data source to support further legal research: the decision texts can be exported for NLP-related research, while the network itself can be used for in-depth analysis. For example, users can restrict the network algorithms and metrics to specific courts in order to filter the results to an area of law of interest.
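
On an exported edge list, the PageRank analysis reduces to a few lines; the case names below are invented placeholders, not decisions from the database.

```python
import networkx as nx

# Miniature stand-in for the citation network: decision -> cited decision.
G = nx.DiGraph([
    ("Smith v Jones [2015]", "R v Brown [1994]"),
    ("Doe v Roe [2018]",     "R v Brown [1994]"),
    ("Doe v Roe [2018]",     "Smith v Jones [2015]"),
    ("X v Y [2020]",         "R v Brown [1994]"),
])

# Heavily cited decisions accumulate rank through their in-links.
ranks = nx.pagerank(G, alpha=0.85)
for case, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {case}")
```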

Keywords: case citation network, citation analysis, network analysis, Neo4j

Procedia PDF Downloads 99
1784 Seismic Fragility Curves Methodologies for Bridges: A Review

Authors: Amirmozafar Benshams, Khatere Kashmari, Farzad Hatami, Mesbah Saybani

Abstract:

As part of the transportation network, bridges are among the most vulnerable structures. In order to investigate the vulnerability and seismic performance of bridges, it is important to identify the damage states a bridge may reach. Fragility curves provide important data about the damage states and performance of bridges under earthquakes. Developing vulnerability information in the form of fragility curves is a widely practiced approach when that information must account for a multitude of uncertain sources. This paper presents fragility curve methodologies for bridges and reviews the practice and applications relating to the seismic fragility assessment of bridges.
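
Fragility curves are commonly parameterized in lognormal form, P(damage | IM) = Φ(ln(IM/θ)/β); a sketch with illustrative median capacities and dispersions, not values for any real bridge:

```python
import numpy as np
from scipy.stats import norm

def fragility(im, theta, beta):
    """P(damage state reached | intensity measure), lognormal form:
    theta = median capacity, beta = logarithmic dispersion."""
    return norm.cdf(np.log(im / theta) / beta)

pga = np.array([0.1, 0.2, 0.4, 0.8])  # intensity measure, here PGA in g
# Illustrative median/dispersion per damage state (hypothetical values):
for state, (theta, beta) in {"slight": (0.25, 0.5), "collapse": (0.90, 0.6)}.items():
    print(state, np.round(fragility(pga, theta, beta), 3))
```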

Keywords: fragility curve, bridge, uncertainty, NLTHA, IDA

Procedia PDF Downloads 275
1783 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process

Authors: Johannes Gantner, Michael Held, Matthias Fischer

Abstract:

The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany's total energy demand, additional insulation is key for energy-efficient refurbished buildings. Despite the energetic benefits, however, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form a standardized basis for addressing these doubts, and they are becoming increasingly important for material producers through efforts such as the Product Environmental Footprint (PEF) and Environmental Product Declarations (EPD). With the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially for decision and policy makers. LCA and LCC results are based on models that depend on technical parameters such as efficiencies, material and energy demand, and product output. Nevertheless, the influence of parameter uncertainties on life cycle results is usually not considered, or studied only superficially, although this effect cannot be neglected: in the example of an exterior wall, the overall life cycle results vary by a factor of more than three. Simple best-case/worst-case analyses, as used in practice, are therefore not sufficient: they allow a first rough view of the results but do not take effects such as error propagation into account, and so LCA practitioners cannot provide further guidance to decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of LCA and LCC results and to provide better decision support. In this study, the environmental and economic impacts of an exterior wall system over its whole life cycle are illustrated, and the effect of different uncertainty analyses on the interpretation, in terms of resilience and robustness, is shown. The approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods: only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
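
A minimal Monte Carlo sketch of the parameter-uncertainty propagation the paper argues for, with invented input distributions for an insulation layer and a deliberately simplified U-value formula (insulation layer only, no other wall layers):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical input distributions (illustrative, not the study's data):
thickness = rng.normal(0.16, 0.02, N)        # m
conductivity = rng.normal(0.035, 0.004, N)   # W/(m*K)
gwp_per_m3 = rng.normal(80.0, 15.0, N)       # kg CO2e per m3 of insulation

u_value = conductivity / thickness           # simplified single-layer U-value
embodied = gwp_per_m3 * thickness            # kg CO2e per m2 of wall

for name, x in [("U-value", u_value), ("embodied GWP", embodied)]:
    lo, hi = np.percentile(x, [2.5, 97.5])
    print(f"{name}: mean={x.mean():.3f}, 95% interval=[{lo:.3f}, {hi:.3f}]")
# The interval width, not the best/worst-case extremes, is what tells a
# decision maker whether two design options are actually distinguishable.
```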

Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation

Procedia PDF Downloads 282
1782 Entrepreneurship under the Effect of Information Technology

Authors: Mohammad Hadi Khorashadi Zadeh

Abstract:

An entrepreneur is a manager or the owner of a commercial company who creates resources and money through risk-taking and initiative. A netpreneur runs a business online and needs only connectivity: as long as an entrepreneur has a service the market demands, he or she can set up a feasible and viable trade with intellectual capital as the principal input and the connectivity infrastructure as the only physical input. The internet is possibly the most significant revolution in science and technology that our generation could imagine, and it has brought various benefits to society, culture, economics, and politics. The entrepreneur is a valuable member of the community, providing services to society, including employment.

Keywords: entrepreneur, Netpreneur, intellectual capital, infrastructure

Procedia PDF Downloads 314
1781 Research of Data Cleaning Methods Based on Dependency Rules

Authors: Yang Bao, Shi Wei Deng, WangQun Lin

Abstract:

This paper introduces the concept and principle of data cleaning, analyzes the types and causes of dirty data, and proposes the key steps of a typical cleaning process. It puts forward a scalable and versatile data cleaning framework. For data with attribute dependency relations, it designs several violation-data discovery algorithms expressed as formal formulae, which can identify data inconsistent with the target columns under a conditional attribute dependency, whether the data is structured (SQL) or unstructured (NoSQL), and it gives six data cleaning methods based on these algorithms.
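
A sketch of violation-data discovery for a single functional dependency rule using pandas; the column names, sample rows, and repair note are illustrative, not the paper's algorithms.

```python
import pandas as pd

# Dependency rule: zip_code -> city (one zip should determine one city).
df = pd.DataFrame({
    "zip_code": ["10115", "10115", "80331", "80331"],
    "city":     ["Berlin", "Berlin", "Munich", "Muenchen"],  # dirty pair
})

def fd_violations(df, lhs, rhs):
    """Rows whose lhs-group maps to more than one distinct rhs value."""
    counts = df.groupby(lhs)[rhs].nunique()
    bad_keys = counts[counts > 1].index
    return df[df[lhs].isin(bad_keys)]

print(fd_violations(df, "zip_code", "city"))
# A repair step could then replace each violating group's rhs values by the
# majority value (one plausible strategy; the paper gives six methods).
```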

Keywords: data cleaning, dependency rules, violation data discovery, data repair

Procedia PDF Downloads 559