Search results for: dropping point
4238 The Capabilities of New Communication Devices in Development of Informing: Case Study Mobile Functions in Iran
Authors: Mohsen Shakerinejad
Abstract:
Owing to the rapid momentum of technology, the present age is called the age of communication and information. With the astounding progress of communication and information tools, the current world is likened to a "global village" in which a message can be sent from one point of the world to another in less than a minute. However, one of the newer sociologists, Alain Touraine, in describing the destructive effects of the changes arising from the development of information appliances, warns of "new fields for undemocratic social control and the incidence of acute social and political tensions and unrest". Yet in this era, in which industrial progress has made everyday life industrial as well, fast and accurate data transfer breathes new life into the body of society, and various tools should be used according to the features of each society and its progress in science and technology. One of these communication tools is the mobile phone. The cellular phone, as the communication and telecommunication revolution of recent years, has had a great influence on the individual and collective life of societies. This powerful communication tool has had an undeniable effect on all aspects of life, including the social, economic, cultural, and scientific, so that ignoring it in the design, implementation, and enforcement of any system would be unwise. Nowadays, knowledge and information are among the most important aspects of human life. Therefore, this article attempts to introduce the potential of the mobile phone for receiving and transmitting news and information. Among the numerous capabilities of current mobile phones, features such as sending text, photography, sound recording, filming, and Internet connectivity indicate the potential of this medium of communication in the process of sending and receiving information, so that mobile journalism, as an important component of citizen journalism, now has a unique role in information dissemination.
Keywords: mobile, informing, receiving information, mobile journalism, citizen journalism
Procedia PDF Downloads 410
4237 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. Harmonic distortion levels can be reduced by improving the design of polluting loads or by adding filters and other corrective arrangements. The application of passive filters is an effective solution for harmonic mitigation, mainly because such filters offer high efficiency and simplicity and are economical; moreover, their different possible frequency response characteristics can be exploited to achieve specific harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that best and most economically limits violations caused at a given point of common coupling (PCC) in distribution networks. This article suggests that a single tuned passive filter can be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter so as to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique minimizes voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining a given power factor within a specified range. In accordance with IEEE Standard 519, both indices are treated as constraints in the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
Keywords: harmonics, passive filter, power factor, power quality
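A minimal sketch of the constrained optimization described above, assuming a simplified Thevenin model of the PCC; the component values, harmonic current spectrum, and power-factor limit are illustrative placeholders, not data from the paper:

```python
# Single tuned filter sizing sketch: minimize VTHD subject to a power
# factor constraint. All numbers below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

V1 = 1.0                       # fundamental PCC voltage, per unit
Xs = 0.10                      # system (source) reactance at f1, p.u.
harmonics = np.array([5, 7, 11, 13])
Ih = np.array([0.20, 0.14, 0.09, 0.07])   # injected harmonic currents, p.u.

def vthd(x):
    """VTHD at the PCC for filter design x = (Qc, ht)."""
    Qc, ht = x
    Xc = V1**2 / Qc            # capacitive reactance at f1
    Xl = Xc / ht**2            # tuned so the branch resonates at order ht
    Vh = []
    for h, I in zip(harmonics, Ih):
        Zs = 1j * h * Xs                    # system branch
        Zf = 1j * (h * Xl - Xc / h)         # lossless filter branch
        Z = (Zs * Zf) / (Zs + Zf)           # parallel combination at PCC
        Vh.append(abs(I * Z))
    return np.sqrt(np.sum(np.square(Vh))) / V1

def pf_constraint(x):
    """Require displacement power factor >= 0.95 after compensation."""
    Qc, _ = x
    P, Qload = 1.0, 0.6                     # illustrative load, p.u.
    return P / np.hypot(P, Qload - Qc) - 0.95

res = minimize(vthd, x0=[0.3, 4.9],
               bounds=[(0.05, 0.8), (4.5, 5.0)],
               constraints=[{"type": "ineq", "fun": pf_constraint}])
print(f"Qc = {res.x[0]:.3f} p.u., tuning order = {res.x[1]:.2f}, "
      f"VTHD = {res.fun:.4f}")
```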
Procedia PDF Downloads 306
4236 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an ID-POS-based indicator for detecting customer loyalty to a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a given supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of a single supermarket chain. Three primary problems arose during the modeling process: the incomparability of customer values, multicollinearity between customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are introduced to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that includes all three methods is the most suitable for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
Keywords: customer value, Huff's gravity model, POS, retailer
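A minimal sketch of Huff's gravity model named in the abstract; the store attractiveness values, distances, and exponents are illustrative placeholders, not figures from the study:

```python
# Huff's gravity model: a customer's probability of choosing store j is
# proportional to attractiveness^alpha / distance^beta, normalized over
# all candidate stores.
import numpy as np

def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """Store-choice probabilities for one customer location."""
    utility = (attractiveness ** alpha) / (distances ** beta)
    return utility / utility.sum()

attractiveness = np.array([1200.0, 800.0, 1500.0])  # e.g., floor area (m^2)
distances = np.array([0.8, 1.5, 2.4])               # km from customer's home

p = huff_probabilities(attractiveness, distances)
for j, pj in enumerate(p, start=1):
    print(f"store {j}: choice probability {pj:.3f}")
```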
Procedia PDF Downloads 123
4235 Imaging 255 nm Tungsten Thin Film Adhesion with Picosecond Ultrasonics
Authors: A. Abbas, X. Tridon, J. Michelon
Abstract:
In the electronics and photovoltaic industries, components are made from wafers, which are stacks of thin film layers from a few nanometers to several micrometers thick. Early evaluation of the bonding quality between the different layers of a wafer is one of the challenges these industries face in avoiding dysfunction of their final products. Traditional pump-probe experiments, developed in the 1970s, give a partial solution to this problem, but with a non-negligible drawback. On the one hand, these setups can generate and detect ultra-high ultrasound frequencies that can be used to evaluate the adhesion quality of wafer layers. On the other hand, because of the quite long acquisition time needed to perform one measurement, they remain limited to point measurements when evaluating global sample quality. This limitation can lead to misinterpretation of sample quality parameters, especially for inhomogeneous samples. Asynchronous Optical Sampling (ASOPS) systems can perform sample characterization with picosecond acoustics up to 10⁶ times faster than traditional pump-probe setups. This speed opens picosecond ultrasonics to acoustic imaging at the nanometric scale, allowing inhomogeneities in sample mechanical properties to be detected. This is illustrated by presenting an image of the measured acoustic reflection coefficients obtained by mapping, with an ASOPS setup, a 255 nm tungsten thin film deposited on a silicon substrate. The interpretation of the reflection coefficient in terms of bonding (adhesion) quality is also presented, and the origin of zones exhibiting good and bad bonding quality is discussed.
Keywords: adhesion, picosecond ultrasonics, pump-probe, thin film
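A minimal sketch relating acoustic impedances to the reflection coefficient probed in picosecond ultrasonics; treating a weak bond as a lowered effective interface impedance is an assumption for illustration, and the material values are textbook orders of magnitude, not measurements from the paper:

```python
# Normal-incidence acoustic reflection coefficient at a layer interface.
def reflection_coefficient(z1, z2):
    """r = (Z2 - Z1) / (Z2 + Z1) for a wave traveling from medium 1 to 2."""
    return (z2 - z1) / (z2 + z1)

# Acoustic impedance Z = density * longitudinal sound velocity.
z_tungsten = 19300 * 5200      # kg m^-2 s^-1, approximate
z_silicon = 2330 * 8430

r_ideal = reflection_coefficient(z_tungsten, z_silicon)
# Assumption: a degraded bond behaves like a reduced effective impedance.
r_weak = reflection_coefficient(z_tungsten, 0.5 * z_silicon)
print(f"ideal bond: r = {r_ideal:.3f}, weak bond: r = {r_weak:.3f}")
```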
Procedia PDF Downloads 159
4234 Detection of the Effectiveness of Training Courses and Their Limitations Using CIPP Model (Case Study: Isfahan Oil Refinery)
Authors: Neda Zamani
Abstract:
The present study aimed to investigate the effectiveness of training courses and their limitations using the CIPP model, with Isfahan Oil Refinery as a case study. In terms of purpose, the present paper is applied research; in terms of data gathering, it is descriptive field-survey research. The population of the study comprised participants in training courses, their supervisors, and experts of the training department. Probability-proportional-to-size (PPS) sampling was used. The sample included 195 course participants, 30 supervisors, and 11 training experts. Data were collected using a researcher-designed questionnaire and a semi-structured interview. The content validity of the instrument was confirmed by training management experts, and its reliability was estimated at 0.92 by Cronbach's alpha. Data were analyzed with descriptive statistics (tables, frequencies, frequency percentages, and means) and inferential statistics (the Mann-Whitney and Wilcoxon tests, and the Kruskal-Wallis test to assess the significance of differences between the groups' opinions). The results indicated that all groups, i.e., participants, supervisors, and training experts, firmly believe in the importance of training courses; however, course participants rated content, teacher, atmosphere and facilities, training process, management process, and product as only relatively appropriate. The supervisors likewise rated the output as relatively appropriate, whereas the training experts rated content, teacher, and management processes as appropriate and above average.
Keywords: training courses, limitations of training effectiveness, CIPP model, Isfahan oil refinery company
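A minimal sketch of the reliability and group-comparison statistics named in the abstract; the questionnaire responses below are random placeholders, not the study's data:

```python
# Cronbach's alpha for questionnaire reliability, and a Kruskal-Wallis
# test comparing the three respondent groups.
import numpy as np
from scipy.stats import kruskal

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of questionnaire scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(195, 20))   # 195 participants, 20 items
print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))

# Do participants, supervisors, and experts rate effectiveness differently?
participants = rng.integers(2, 6, size=195)
supervisors = rng.integers(2, 6, size=30)
experts = rng.integers(3, 6, size=11)
H, p = kruskal(participants, supervisors, experts)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.3f}")
```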
Procedia PDF Downloads 75
4233 An Experimental Approach to the Influence of Tipping Points and Scientific Uncertainties in the Success of International Fisheries Management
Authors: Jules Selles
Abstract:
The Atlantic and Mediterranean bluefin tuna fishery has been considered the archetype of an overfished and mismanaged fishery. This crisis has demonstrated the role of public awareness and the importance of the interactions between science and management regarding scientific uncertainties. This work aims at investigating the policy-making process associated with a regional fisheries management organization. We propose a contextualized computer-based experimental approach in order to explore the effects of key factors on the cooperation process in a complex straddling-stock management setting. Specifically, we analyze the effects of introducing a socio-economic tipping point and of the uncertainty surrounding the estimation of the resource level. Our approach is based on a Gordon-Schaefer bio-economic model which explicitly represents the decision-making process. Each participant plays the role of an ICCAT stakeholder, representing a coalition of fishing nations involved in the fishery, and unilaterally decides on a harvest policy for the coming year. The context of the experiment creates incentives for both exploitation and collaboration toward common sustainable harvest plans at the scale of the Atlantic bluefin tuna stock. This rigorous framework allows us to test how stakeholders who plan the exploitation of a fish stock (a common-pool resource) respond to two kinds of effects: i) the inclusion of a drastic shift in the management constraints (beyond a socio-economic tipping point), and ii) increasing uncertainty in the scientific estimation of the resource level.
Keywords: economic experiment, fisheries management, game theory, policy making, Atlantic bluefin tuna
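A minimal sketch of the Gordon-Schaefer bio-economic dynamics underlying the experiment; the growth, catchability, price, and cost parameters are illustrative placeholders, not values calibrated to the bluefin tuna stock:

```python
# One-year step of the Gordon-Schaefer model: logistic stock growth
# minus a Schaefer harvest H = q * E * B, with a simple rent function.
def gordon_schaefer_step(biomass, effort, r=0.3, K=1.0, q=0.5):
    """Return (next biomass, harvest) after one year."""
    harvest = q * effort * biomass
    growth = r * biomass * (1 - biomass / K)
    return max(biomass + growth - harvest, 0.0), harvest

def rent(harvest, effort, price=10.0, cost=2.0):
    """Economic rent = revenue from harvest minus cost of effort."""
    return price * harvest - cost * effort

B = 0.6                       # initial biomass, fraction of carrying capacity
for year in range(1, 6):
    effort = 0.25             # a player's unilateral harvest policy
    B, H = gordon_schaefer_step(B, effort)
    print(f"year {year}: biomass {B:.3f}, harvest {H:.3f}, "
          f"rent {rent(H, effort):.3f}")
```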
Procedia PDF Downloads 253
4232 Mobile Network Users Amidst Ultra-Dense Networks in 5G Using an Improved Coordinated Multipoint (CoMP) Technology
Authors: Johnson O. Adeogo, Ayodele S. Oluwole, O. Akinsanmi, Olawale J. Olaluyi
Abstract:
In 5G networks, very high traffic density, most especially in densely populated areas, is one of the key requirements. Radiation reduction becomes a major concern for securing the future life of mobile network users in ultra-dense network areas using an improved coordinated multipoint technology. Coordinated Multi-Point (CoMP) is based on transmission and/or reception at multiple separated points, with improved coordination among them to actively manage interference for the users. Small cells have two major objectives. First, they provide good coverage and/or performance: network users can maintain a good-quality signal by connecting directly to the cell. Second, with CoMP, multiple base stations (MBS) cooperate by transmitting and/or receiving at the same time in order to reduce the possibility of increased electromagnetic radiation. Therefore, the influence of a screen guard with a rubber condom on the mobile transceiver, as one major piece of equipment radiating electromagnetic radiation, was investigated among mobile network users amidst ultra-dense 5G networks. The results were compared with the same mobile transceivers without screen guards and rubber condoms under the same network conditions. A 5 cm distance from the mobile transceivers was set with the help of a ruler, and the intensity of radio frequency (RF) radiation was measured using an RF meter. The results show that the radiation intensity from the various mobile transceivers without screen guards and condoms was higher than from the mobile transceivers with screen guards and condoms while a call conversation was active at both ends.
Keywords: ultra-dense networks, mobile network users, 5G, coordinated multi-point
Procedia PDF Downloads 103
4231 Classification on Statistical Distributions of a Complex N-Body System
Authors: David C. Ni
Abstract:
Contemporary models for N-body systems are based on temporal, two-body, mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods of lattice structures. In quantum mechanics, theories of collective modes exist for superconductivity and for long-range quantum entanglement. However, these models mainly address specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane, i.e., the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and the interval [-t, t] exhibits various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which suggest the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.
Keywords: Blaschke, Lorentz transformation, complex variables, continuous, discrete, canonical, classification
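A minimal sketch of iterating a Blaschke-type map on the unit disk and recording where orbits settle. The abstract does not define the extended Blaschke functions, so the classical Blaschke factor composed with a power nonlinearity is used purely as a stand-in; the parameter a (playing the role of normalized momentum) and the exponent n (nonlinearity) are illustrative assumptions:

```python
# Iterate a Blaschke factor followed by a power nonlinearity; both maps
# send the unit disk into itself, so orbits remain bounded.
import numpy as np

def blaschke_step(z, a, n=2):
    """One iteration: classical Blaschke factor, then z -> z**n."""
    w = (z - a) / (1 - np.conj(a) * z)
    return w ** n

a = 0.4 + 0.1j                     # stand-in "normalized momentum" parameter
seeds = [0.9 * np.exp(2j * np.pi * k / 8) for k in range(8)]
for z0 in seeds:
    z = z0
    for _ in range(200):
        z = blaschke_step(z, a)
    print(f"seed {z0:.2f} -> after 200 iterations: {z:.4f}")
```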
Procedia PDF Downloads 309
4230 Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers
Authors: Liang Ming Zhong, Yu Zhong, Wen Zhong, Fei Fei Yin
Abstract:
Trinary affinity is a description of existence: every object exists as it is known and spoken of, in a system of two differences (denoted dif₁, dif₂) and one similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). This is mathematically verified and illustrated in this paper by arranging all integers into three columns, where each number exists as a difference in relation to another number as another difference, with the two difs arbitrated by a third number as the Sim, resulting in a trinary affinity or trinity of three numbers, of which one is the known, another the 'to be known', and the third the zero (0) from which both the kn and the tkn are measured and specified. Consequently, any number is horizontally specified as 3n, 3n − 1, or 3n + 1, and vertically as Cn + c, so that any number occurs at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called in this paper the realistic zero (denoted 0r, as contrasted with the mathematical zero, 0m), which corresponds to the constant c and whose nature separates the composite and prime numbers; and ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can occur at its position, by dividing it with factors provided by the composite number formulas. The paper consists of three parts: 1) a brief explanation of the trinary affinity of things, 2) the eight formulas that represent ALL the primes, and 3) families of composite numbers, each represented by a formula. A composite number family is described as 3n + f₁·f₂. Since there are infinitely many composite number families, verifying the primality of a large probable prime requires dividing it by several or many an f₁ from a range of composite number formulas, a procedure that is as laborious as it is sure. (It is thus possible to substitute planned division for trial division.)
Keywords: trinary affinity, difference, similarity, realistic zero
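A minimal sketch of the three-column arrangement and a plain trial-division check. The paper's eight prime formulas and its composite-family formulas are not reproduced here; classifying n as 3n, 3n − 1, or 3n + 1 and dividing by candidate factors is the generic version of the procedure described (note that every prime greater than 3 falls in one of the two 3n ± 1 columns):

```python
# Horizontal specification of an integer relative to multiples of 3,
# plus a trial-division primality check standing in for the paper's
# "planned division" against composite-family factors.
def column(n):
    """Which of the three columns n occupies."""
    return {0: "3n", 1: "3n + 1", 2: "3n - 1"}[n % 3]

def is_prime(n):
    """Verify no composite can occupy n's position: divide by candidates."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

for n in range(2, 16):
    print(f"{n:3d}  column: {column(n):7s}  prime: {is_prime(n)}")
```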
Procedia PDF Downloads 211
4229 A Handheld Light Meter Device for Methamphetamine Detection in Oral Fluid
Authors: Anindita Sen
Abstract:
Oral fluid is a promising diagnostic matrix for drugs of abuse compared to urine and serum. Detection of methamphetamine in oral fluid would pave the way for the easy evaluation of impairment in drivers during roadside drug testing, as well as ensure safe working environments by facilitating the evaluation of impairment in employees at workplaces. A membrane-based, point-of-care (POC) friendly pre-treatment technique has been developed which eliminates interferences caused by salivary proteins and facilitated the demonstration of methamphetamine detection in saliva using a gold nanoparticle based colorimetric aptasensor platform. It was found that the colorimetric response in saliva was always suppressed owing to matrix effects. By navigating these challenging interference issues, we were able to detect methamphetamine in saliva at nanomolar levels, offering immense promise for the translation of these platforms to on-site diagnostic systems. This subsequently motivated the development of a handheld portable light meter device that can reliably transduce the aptasensor's colorimetric response into absorbance, facilitating quantitative on-site detection of analyte concentrations. This is crucial given the prevalent unreliability and sensitivity problems of conventional drug testing kits. The fabricated light meter device was validated against a standard UV-Vis spectrometer to confirm reliability. The portable and cost-effective handheld detector features sensitivity comparable to the well-established benchtop UV-Vis instrument, and the easy-to-use device could potentially serve as a prototype for a commercial device in the future.
Keywords: aptasensors, colorimetric gold nanoparticle assay, point-of-care, oral fluid
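A minimal sketch of how a light meter reading can be transduced into absorbance and then concentration; the Beer-Lambert relation and a linear calibration are standard, but the calibration constants below are illustrative placeholders, not the device's actual calibration:

```python
# Absorbance from transmitted-light intensities, then inversion of a
# linear calibration curve to estimate concentration.
import math

def absorbance(intensity_sample, intensity_blank):
    """Beer-Lambert: A = -log10(I / I0)."""
    return -math.log10(intensity_sample / intensity_blank)

def concentration(a, slope=0.012, intercept=0.05):
    """Invert an assumed linear calibration A = slope*C + intercept (C in nM)."""
    return (a - intercept) / slope

I0 = 1000.0          # photodetector counts through the blank
I = 620.0            # counts through the reacted aptasensor assay
A = absorbance(I, I0)
print(f"A = {A:.3f}, estimated methamphetamine ~ {concentration(A):.1f} nM")
```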
Procedia PDF Downloads 61
4228 Flood Analysis of Domestic Rooftop Rainwater Harvesting in Low Lying Flood Plain Areas at Gomti Nagar In Rain-Dominated Monsoon Climates
Authors: Rajkumar Ghosh
Abstract:
With rapid urbanization, a rising population, changing lifestyles, and in-migration, Lucknow is a groundwater over-exploited area, with an abstraction rate of 1968 m³/day/km² in Gomti Nagar, where the groundwater situation is deteriorating day by day. According to this work, the calculated annual water deficiency in the Gomti Nagar area was 28061 million litres (ML) in 2022; within 30 years (by 2051), the deficiency will reach 735570 ML. The calculated groundwater recharge in Gomti Nagar was 10813 ML/yr (2022), while annual groundwater abstraction was 35332 ML/yr (2022). The rooftop rainwater harvesting systems (RTRWHs) existing under the bye-laws (plots ≥ 300 sq.m) can recharge 17.71 ML/yr in the Gomti Nagar area, contributing only 0.07% of groundwater recharge. In Gomti Nagar, the water level is dropping at a rate of 1.0 metre per year, and the depth of the water table is less than 30 metres below ground level (mbgl). Natural groundwater recharge is affected by the geomorphological conditions of the surrounding area. Gomti Nagar is located on the erosional terrace (Te) and depositional terrace of the Gomti River, and the flood plain in Lucknow city is less active due to the embankments on both sides of the river. The alluvium is composed of sandy clay up to a depth of 30 m, and the alignment of the Gomti River reveals the presence of sandy soil at shallow depths; the aquifer depth is about 120 metres, with recharge in Gomti Nagar varying over 0-150 metres. Infiltration rates in alluvial floodplains range from 0.8 to 74 cm/hr. Geologically and geomorphologically, the alluvium in Gomti Nagar, Lucknow city, Uttar Pradesh, supports rapid percolation of rainwater. Over-exploitation of groundwater causes natural hazards, viz. land subsidence, the development of cracks in roads and buildings, the formation of voids and compaction of soil/clay which lead to land subsidence, and devastating effects on natural stream flow. The Gomti River is already transitioning from an 'effluent' to an 'influent' river, and saline intrusion has occurred in Aquifer II (among the five aquifers in Lucknow city). A 250 m long crack developed in 2007 due to groundwater depletion in the Dullu Khera and Vader Khera villages of Kakori, Uttar Pradesh. The groundwater table of Lucknow is declining, with a water-table imbalance arising because recharge is 17 times less than groundwater exploitation; Uttar Pradesh, together with four other states, accounts for 49% of the groundwater extracted in the entire country. In the Gomti Nagar area there are 27305 houses, with an available built-up area of 3.8 sq. km (60% of the plot area) based on the Lucknow Development Authority (LDA) Master Plan 2031. If RTRWHs were installed in all the houses, 12% of the harvested rainwater would contribute to the water table in the Gomti Nagar area, and by 2051 the area would harvest 91110 ML of rainwater. There is minimal chance that any incidence of flooding could occur due to RTRWHs. Thus, it can be concluded that RTRWHs are not related to flooding in urban areas such as Gomti Nagar.
Keywords: RTRWH, aquifer, groundwater table, rainwater, infiltration
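A minimal sketch of the standard rooftop rainwater harvesting potential calculation (volume = roof area × rainfall × runoff coefficient); the average rainfall and runoff coefficient are typical assumed values for illustration, not figures taken from the paper:

```python
# Annual harvestable rooftop volume in million litres (ML).
# 1 mm of rain on 1 m^2 of roof yields 1 litre before losses.
def harvest_potential_ml(roof_area_m2, annual_rainfall_mm, runoff_coeff=0.8):
    litres = roof_area_m2 * annual_rainfall_mm * runoff_coeff
    return litres / 1e6

built_up_area_m2 = 3.8e6        # 3.8 sq. km built-up area (from the abstract)
rainfall_mm = 900.0             # assumed average annual rainfall for Lucknow
potential = harvest_potential_ml(built_up_area_m2, rainfall_mm)
print(f"annual harvest potential: {potential:.0f} ML")
```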
Procedia PDF Downloads 78
4227 Ferromagnetic Potts Models with Multi Site Interaction
Authors: Nir Schreiber, Reuven Cohen, Simi Haber
Abstract:
The Potts model has been widely explored in the literature over the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside at the corners of an elementary square. Each spin can take an integer value 1, 2, ..., q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second and first order phase transitions, respectively; the nature of the transition in the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zero order) bound on the transition point, and it is claimed that this bound should apply to other lattices as well. Taking higher order site contributions into account tightens the critical bound. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the marginal q = 4 case is supported by a very ambiguous pseudo-critical finite-size behavior.
Keywords: entropic sampling, lattice animals, phase transitions, Potts model
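A minimal sketch of Wang-Landau (entropic) sampling for a four-site plaquette Potts model, where the energy counts monochromatic elementary squares. The lattice size, q, and the update schedule are illustrative, and the flatness check is omitted; this is a generic Wang-Landau loop, not the authors' code:

```python
# Wang-Landau sampling of the density of states g(E) for the FFPS-type
# Hamiltonian: E = -(number of monochromatic plaquettes).
import numpy as np

L, q = 8, 5
rng = np.random.default_rng(1)
spins = rng.integers(0, q, size=(L, L))

def energy(s):
    """-1 per monochromatic plaquette, with periodic boundaries."""
    right = np.roll(s, -1, 1)
    down = np.roll(s, -1, 0)
    diag = np.roll(down, -1, 1)
    mono = (s == right) & (s == down) & (s == diag)
    return -int(mono.sum())

n_levels = L * L + 1                   # E ranges over {-L^2, ..., 0}
log_g = np.zeros(n_levels)             # running estimate of ln g(E)
hist = np.zeros(n_levels)
f = 1.0                                # ln of the modification factor
E = energy(spins)

for sweep in range(20000):
    i, j = rng.integers(L, size=2)
    old = spins[i, j]
    spins[i, j] = rng.integers(q)
    E_new = energy(spins)              # full recompute; local update is faster
    # Accept with probability min(1, g(E_old)/g(E_new)).
    if np.log(rng.random()) < log_g[E + L * L] - log_g[E_new + L * L]:
        E = E_new
    else:
        spins[i, j] = old
    log_g[E + L * L] += f
    hist[E + L * L] += 1
    if sweep % 5000 == 4999:
        f /= 2                         # schedule: halve ln f periodically
print("visited energy levels:", int((hist > 0).sum()))
```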
Procedia PDF Downloads 160
4226 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems
Authors: Bassam Istanbouli
Abstract:
With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best: they need to provide good-quality products at competitive prices, when and how the customer wants them. To achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes become extremely expensive and slow, with many combinatorial effects. Those combinatorial effects impact the whole organizational structure from management, financial, documentation, and logistics perspectives, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we expect minimal such effects, especially at the time of launching an organization-wide global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement existing ERP software for its business needs, and if its business processes are normalized and modular, then this will most probably yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to raise awareness regarding the design of business processes in a software implementation project: if the blueprints created are normalized, then the software developers and configurators will map those modular blueprints into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring, and/or implementing a software system for an organization using two methods, the Software Development Lifecycle method (SDLC) and the Accelerated SAP implementation method (ASAP). Both methods start with the customer requirements, then the blueprinting of its business processes, and finally the mapping of those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes results in normalized software.
Keywords: blueprint, ERP, modular, normalized
Procedia PDF Downloads 139
4225 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census
Authors: Jaroslav Kraus
Abstract:
Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Together these account for more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households results from the long-term decrease in fertility and increase in divorce, but also from the possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions, such as the capital Prague and some others, with changing patterns. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches are used for the analysis: (i) measures of geographic distribution, (ii) cluster mapping to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features, and (iii) pattern analysis as a starting point for more in-depth analyses (geospatial regression) in the future. For analysis of this type of data, the numbers of households by type are treated as distinct objects, and all events in a meaningfully delimited study region (e.g., municipalities) are included. Commonly produced measures of central tendency and spread include identification of the location of the center of the point set (at NUTS3 level) and of the median center and standard distance; the weighted standard distance and standard deviational ellipses are also used. Identifying that clustering exists in the census household datasets does not provide a detailed picture of the nature and pattern of that clustering, but simple hot-spot (and cold-spot) identification techniques are a helpful first step. Once the spatial structure of households is determined, a particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran's I, which is applied to municipal units for which a numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and are applied to develop localized variants of almost any standard summary statistic. Local Moran's I gives an indication of the homogeneity and diversity of household data at the municipal level.
Keywords: census, geo-demography, households, the Czech Republic
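A minimal sketch of global Moran's I for a municipal attribute such as the share of single-person households; the four-municipality toy layout and its binary contiguity weights are illustrative, not census data:

```python
# Global Moran's I: I = (n / S0) * sum_ij(w_ij * z_i * z_j) / sum_i(z_i^2),
# where z are mean-centered attribute values and S0 is the sum of weights.
import numpy as np

def morans_i(x, W):
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    num = (W * np.outer(z, z)).sum()
    return len(x) / W.sum() * num / (z @ z)

# Toy example: 4 municipalities on a line; neighbors share a border.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
shares = [0.42, 0.40, 0.25, 0.22]      # single-person household shares
print(f"Moran's I = {morans_i(shares, W):.3f}")   # > 0: positive autocorrelation
```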
Procedia PDF Downloads 96
4224 World Peace and Conflict Resolution: A Solution from a Buddhist Point of View
Authors: Samitharathana R. Wadigala
Abstract:
Peace will not be established until self-consciousness is awakened in human beings. In this nuclear age, the establishment of a lasting peace on earth represents the primary condition for the preservation of human civilization and the survival of human beings. Nothing, perhaps, is as important and indispensable as the achievement and maintenance of peace in the modern world today. Peace in today's world implies much more than the mere absence of war and violence. In the interdependent world of today, the United Nations needs to be representative of the modern world and democratic in its functioning, because it came into existence to save the generations from the scourge of war and conflict. Buddhism is a religion of peaceful co-existence and a philosophy of enlightenment. From the perspective of the Buddhist theory of interdependent origination (Paṭiccasamuppāda), violence and conflict are, like everything else in the world, products of causes and conditions. Buddhism is fully compatible with a congenial and peaceful global order; its canonical literature, doctrines, and philosophy are well suited to inter-faith dialogue, harmony, and universal peace. Even today, Buddhism can resurrect universal brotherhood, peaceful co-existence, and harmonious surroundings in the comity of nations. With its increasing vitality in regions around the world, many people today turn to Buddhism for relief and guidance at a time when peace seems a deferred dream more than ever. From a Buddhist point of view, the roots of all unwholesome actions (conflict), i.e., greed, hatred, and delusion, are viewed as the root causes of all human conflicts. Conflict often emanates from attachment to material things: pleasures, property, territory, wealth, economic dominance, or political superiority. Buddhism has particularly rich resources for deployment in dissolving conflict. This paper addresses the Buddhist perspective on the causes of conflict and the ways to resolve conflict in order to realize world peace. The world has enough to satisfy everybody's needs, but not everybody's greed.
Keywords: Buddhism, conflict-violence, peace, self-consciousness
Procedia PDF Downloads 208
4223 Demand Forecasting to Reduce Dead Stock and Loss Sales: A Case Study of the Wholesale Electric Equipment and Part Company
Authors: Korpapa Srisamai, Pawee Siriruk
Abstract:
The purpose of this study is to forecast product demands and develop appropriate and adequate procurement plans to meet customer needs and reduce costs. When stock exceeds customer demand or does not move, the company must pay for extra storage space; moreover, some items, when stored for a long period of time, deteriorate into dead stock. A case study of a wholesale company for electronic equipment and components, which faces uncertain customer demand, is considered: the actual purchase orders of customers do not equal the forecasts the customers provide. In some cases, customers demand more, leaving the company with too little product to meet their needs; other customers demand less than estimated, causing insufficient storage space and dead stock. This study aims to reduce the loss of sales opportunities and the number of goods remaining in the warehouse, using 30 samples of the company's most popular products. The data were collected for the duration of the study, from January to October 2022. The forecasting methods used are the simple moving average, the weighted moving average, and exponential smoothing. The economic order quantity and the reorder point are calculated to meet customer needs and track the results. The research results are very beneficial to the company: it can reduce the loss of sales opportunities by 20%, so that it has enough products to meet customer needs, and it can reduce unused products by up to 10% of dead stock. This enables the company to order products more accurately, increasing profits and storage space.
Keywords: demand forecast, reorder point, lost sale, dead stock
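A minimal sketch of the three forecasting methods plus the economic order quantity and reorder point named in the abstract; the demand history, costs, and lead time are illustrative placeholders, not the company's data:

```python
# Simple moving average, weighted moving average, exponential smoothing,
# EOQ, and reorder point for one product.
import math

demand = [120, 135, 128, 150, 142, 160, 155, 148, 170, 165]  # monthly units

def sma(series, n=3):
    """Simple moving average forecast for the next period."""
    return sum(series[-n:]) / n

def wma(series, weights=(0.5, 0.3, 0.2)):
    """Weighted moving average; heaviest weight on the newest period."""
    recent = series[: -len(weights) - 1 : -1]        # newest first
    return sum(w * d for w, d in zip(weights, recent))

def exp_smooth(series, alpha=0.3):
    """Exponential smoothing: F_{t+1} = alpha*D_t + (1-alpha)*F_t."""
    f = series[0]
    for d in series[1:]:
        f = alpha * d + (1 - alpha) * f
    return f

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: sqrt(2*D*S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    return daily_demand * lead_time_days + safety_stock

print(f"SMA: {sma(demand):.1f}, WMA: {wma(demand):.1f}, "
      f"ES: {exp_smooth(demand):.1f}")
print(f"EOQ: {eoq(12 * 150, order_cost=50, holding_cost=2):.0f} units, "
      f"ROP: {reorder_point(5, 14, safety_stock=20)} units")
```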
Procedia PDF Downloads 121
4222 Interpreting Possibilities: Teaching Without Borders
Authors: Mira Kadric
Abstract:
The proposed paper deals with a newly developed approach to interpreter teaching, combining traditional didactics with a new element. The fundamental principle of the approach is taken from theatre pedagogy (Augusto Boal's Theatre of the Oppressed) and includes a discussion of social power relations. From the point of view of the sociology of education, this implies strengthening students' individual potential for self-determination on a number of levels, especially in view of the present increase in social responsibility. This knowledge constitutes a starting point and basis for the process of self-determined action, which takes place in the context of a creative didactic policy that identifies didactic goals, provides clear sequences of content, specifies interdisciplinary methods, and examines their practical adequacy, and which ultimately serves not only individual translators and interpreters but all parties involved. The goal of the presented didactic model is to promote independent work and problem-solving strategies; this helps to develop creative potential and self-confident behaviour. It also conveys realistic knowledge of professional reality, and thus of the real socio-political and professional parameters involved. As well as providing a discussion of fundamental questions relevant to Translation and Interpreting Studies, this also serves to improve an interdisciplinary didactic approach which simulates interpreting reality and illustrates the processes and strategies that (can) take place in real life. The idea is illustrated in more detail with methods taken from Augusto Boal's Theatre of the Oppressed, including examples from (dialogue) interpreting teaching based on recordings made in a seminar in the summer term of 2014.
Keywords: Augusto Boal, didactic model, interpreting teaching, theatre of the oppressed
Procedia PDF Downloads 432
4221 A New Center of Motion in Cabling Robots
Authors: Alireza Abbasi Moshaii, Farshid Najafi
Abstract:
In this paper, a new model for creating a centre of motion is proposed. The new method uses cables, which makes it very useful in robots because it is light and easy to assemble. The method is particularly suitable for robots that must remain in contact with objects, as described in the following. The accuracy of the idea is proved by an experiment. This system could be used in robots which need a fixed point of contact with an object while performing a circular motion, such as dancing, medical, or repair robots.
Keywords: centre of motion, robotic cables, permanent touching, mechatronics engineering
Procedia PDF Downloads 443
4220 Laboratory Scale Purification of Water from Copper Waste
Authors: Mumtaz Khan, Adeel Shahid, Waqas Khan
Abstract:
The presence of heavy metals in water streams is a great danger to aquatic life and ultimately affects human health. The removal of copper (Cu) from synthetic solution by ispaghula husk, maize fibre, and maize oil cake was studied under batch conditions. Different experimental parameters, such as contact time, initial solution pH, agitation rate, initial Cu concentration, biosorbent concentration, and biosorbent particle size, were studied to quantify Cu biosorption. The rate of adsorption of metal ions was very fast at the beginning and slowed after reaching the saturation point, followed by a slower active metabolic uptake of metal ions into the cells. Up to a certain point (pH = 4, Cu concentration ≈ 640 mg/L, agitation rate ≈ 400 rpm, biosorbent concentration ≈ 0.5 g, 3 g, and 3 g for ispaghula husk, maize fibre, and maize oil cake, respectively), increasing the pH, Cu concentration, agitation rate, and biosorbent concentration increased the biosorption rate; the sorption capacity, however, increased with decreasing particle size. At the optimized experimental parameters, the maximum Cu biosorption by ispaghula husk, maize fibre, and maize oil cake was 86.7%, 59.6%, and 71.3%, respectively. Moreover, the kinetics studies demonstrated that the biosorption of copper on ispaghula husk, maize fibre, and maize oil cake follows pseudo-second-order kinetics. The adsorption results were fitted to both the Langmuir and Freundlich models; the Langmuir model represented the sorption process better than the Freundlich model, with an R² value of ≈ 0.978. Optimization of the physical and environmental parameters revealed ispaghula husk to be a more potent copper biosorbent than maize fibre and maize oil cake. The sorbent is cheap and easily available, so this study can be applied to remove Cu impurities on pilot and industrial scales after certain modifications.
Keywords: biosorption, copper, ispaghula husk, maize fibre, maize oil cake, purification
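A minimal sketch of fitting the Langmuir and Freundlich isotherms to batch sorption data with scipy; the equilibrium data points are synthetic placeholders, not measurements from the study:

```python
# Nonlinear isotherm fits and R^2 comparison, mirroring the model
# comparison described in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    """qe = KF * Ce^(1/n)."""
    return KF * Ce ** (1.0 / n)

Ce = np.array([10, 40, 90, 160, 320, 640], dtype=float)   # mg/L at equilibrium
qe = np.array([8, 24, 39, 48, 55, 58], dtype=float)       # mg/g sorbed

for name, model, p0 in [("Langmuir", langmuir, (60, 0.01)),
                        ("Freundlich", freundlich, (5, 2))]:
    popt, _ = curve_fit(model, Ce, qe, p0=p0)
    ss_res = np.sum((qe - model(Ce, *popt)) ** 2)
    r2 = 1 - ss_res / np.sum((qe - qe.mean()) ** 2)
    print(f"{name}: params = {np.round(popt, 3)}, R^2 = {r2:.3f}")
```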
Procedia PDF Downloads 410
4219 The Evaluation for Interfacial Adhesion between SOFC and Metal Adhesive in the High Temperature Environment
Authors: Sang Koo Jeon, Seung Hoon Nahm, Oh Heon Kwon
Abstract:
The unit cells of a solid oxide fuel cell (SOFC) must be stacked in several layers to obtain high power. Most researchers have been concerned with the performance of the stacked SOFC rather than its structural stability, and are especially interested in how to design it for reduced electrical loss and high efficiency. Consequently, stacked SOFCs able to produce high electrical power, and related parts such as the manifold, gas seal, and bipolar plate, have been developed to optimize the stack design. However, the unit cells of the SOFC are simply layered on the interconnector without adhesion, and hydrogen and oxygen are injected into the interfacial layer at high temperature. Under operating conditions, the interfacial layer can be one of the weak points in the stacked SOFC; an evaluation of its structural safety against failure is therefore essential. In this study, the interfacial adhesion between the SOFC and a metal adhesive was estimated in a high temperature environment. The metal adhesive is used to connect the SOFC unit cell firmly to the interconnector and to provide electrical conductivity between them. A four point bending test was performed to measure the interfacial adhesion. The SOFC unit cell and a SiO₂ wafer were diced and then attached with the metal adhesive; the SiO₂ wafer had a center notch to initiate a crack from the notch tip. A modified stereomicroscope combined with a CCD camera and a length measurement system was used to observe the fracture behavior. Additionally, the interfacial adhesion was evaluated under high temperature conditions, because the metal adhesive is affected by high temperature, and after the specimen had been exposed in a furnace for several hours. Finally, the interfacial adhesion energy was quantitatively determined and compared for each condition.
Keywords: solid oxide fuel cell (SOFC), metal adhesive, adhesion, high temperature
Procedia PDF Downloads 521
4218 Amblyopia and Eccentric Fixation
Authors: Kristine Kalnica-Dorosenko, Aiga Svede
Abstract:
Amblyopia, or 'lazy eye', is impaired or dim vision without an obvious defect or change in the eye. It is often associated with abnormal visual experience, most commonly strabismus, anisometropia or both, and form deprivation. The main task of amblyopia treatment is to ameliorate the etiological factors so as to create a clear retinal image and to ensure the participation of the amblyopic eye in the visual process. The treatment of amblyopia with eccentric fixation is usually associated with therapeutic problems. Eccentric fixation is present in around 44% of all patients with amblyopia and in 30% of patients with strabismic amblyopia. In Latvia, amblyopia is carefully treated in various clinics, but eccentricity diagnosis is relatively rare. The conflict that has developed concerning the relationship between the visual disorder and the degree of eccentric fixation in amblyopia needs to be rethought, because it has an important bearing on the cause and treatment of amblyopia and on the role of eccentric fixation in this case. Visuoscopy is the method most frequently used to determine eccentric fixation. In traditional visuoscopy, a fixation target is projected onto the patient's retina, and the examiner asks the patient to look straight at the center of the target; an optometrist then observes the point on the macula used for fixation. This objective test provides clinicians with direct observation of the fixation point of the eye. It requires patients to fixate the target voluntarily and assumes that the foveal reflex accurately demarcates the center of the foveal pit. Having such a simple method to evaluate fixation makes it possible to evaluate treatment improvement indirectly, as eccentric fixation is always associated with reduced visual acuity. One may therefore expect that if eccentric fixation is found in an amblyopic eye with visuoscopy, then visual acuity should be less than 1.0 (in decimal units). With occlusion or other amblyopia therapy, one would expect both visual acuity and fixation to improve simultaneously, that is, fixation would become more central. Consequently, improvement in the fixation pattern with treatment is an indirect measure of improvement in visual acuity. Evaluation of eccentric fixation may be helpful in identifying amblyopia in children prior to the measurement of visual acuity. This is very important because the earlier amblyopia is diagnosed, the better the chance of improving visual acuity.
Keywords: amblyopia, eccentric fixation, visual acuity, visuoscopy
Procedia PDF Downloads 158
4217 Sensitivity Assessment of Spectral Salinity Indices over Desert Sabkha of Western UAE
Authors: Rubab Ammad, Abdelgadir Abuelgasim
Abstract:
The UAE lies in one of the most arid regions of the world and is thus home to geologic features common to such climatic conditions, including vast open deserts, sand dunes, saline soils, inland Sabkha, and coastal Sabkha. Sabkha are characteristic salt flats formed in arid environments by the deposition and precipitation of salt and silt over a sand surface, where a low-lying water table and evaporation rates exceeding precipitation rates drive salt accumulation. The study area, which comprises western UAE, is heavily concentrated with inland Sabkha. Remote sensing is conventionally used to study the soil salinity of agriculturally degraded lands, but not so broadly for Sabkha. The focus of this study was to identify these highly saline Sabkha areas in remotely sensed data using salinity indices. The existing salinity indices in the literature were designed for agricultural soils and have made little use of the spectral response of the short-wave infrared (SWIR1 and SWIR2) parts of the electromagnetic spectrum. Using Landsat 8 OLI data and field ground truthing, this study formulated indices utilizing the NIR-SWIR parts of the spectrum and compared the results with the existing salinity indices. Most indices show a reasonably good relationship between salinity and the spectral index up to a certain salinity value, after which the reflectance reaches a saturation point; this saturation point varies with the index. The study findings suggest that incorporating the near-infrared and short-wave infrared bands in a salinity index has the potential to maintain a positive relationship between salinity and reflectance up to a higher salinity value than the other indices.
Keywords: Sabkha, salinity index, saline soils, Landsat 8, SWIR1, SWIR2, UAE desert
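A minimal sketch of computing a NIR-SWIR salinity index from Landsat 8 OLI surface reflectance arrays. The exact index formulated in the study is not given in the abstract, so the normalized difference of SWIR1 and NIR shown here is a generic illustration; the band arrays and the threshold are placeholders:

```python
# Normalized-difference index from Landsat 8 OLI bands: band 5 is NIR,
# band 6 is SWIR1.
import numpy as np

rng = np.random.default_rng(0)
nir = rng.uniform(0.1, 0.5, size=(100, 100))     # stand-in band 5 reflectance
swir1 = rng.uniform(0.1, 0.6, size=(100, 100))   # stand-in band 6 reflectance

def nd_index(a, b):
    """Normalized difference (a - b) / (a + b), safe against zeros."""
    return np.where(a + b > 0, (a - b) / (a + b), 0.0)

salinity_index = nd_index(swir1, nir)
saline_mask = salinity_index > 0.2               # threshold from ground truth
print("flagged saline pixels:", int(saline_mask.sum()))
```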
Procedia PDF Downloads 214
4216 The Psychology of Cross-Cultural Communication: A Socio-Linguistics Perspective
Authors: Tangyie Evani, Edmond Biloa, Emmanuel Nforbi, Lem Lilian Atanga, Kom Beatrice
Abstract:
The dynamics of languages in contact necessitates a close study of how their users negotiate meanings from shared values in the process of cross-cultural communication. A transverse analysis of the situation demonstrates the existence of complex efforts to connect cultural knowledge to cross-linguistic competencies within a widening range of communicative exchanges. This paper sets out to examine the psychology of cross-cultural communication in a multilingual setting like Cameroon, where many local and international languages are in close contact. The paper equally analyses the pertinence of existing macro-sociological concepts as fundamental knowledge traits in literal and idiomatic cross-semantic mapping. From this point, the article presents a path model connecting sociolinguistics to the increasing adoption of a widening range of communicative genres piloted by the ongoing globalisation trends and their high-speed information technology machinery. By applying a cross-cultural analysis frame, the paper contributes to a better understanding of the fundamental changes in the nature and goals of cross-cultural knowledge in the pragmatics of communication and cultural acceptability. It emphasises that, in an era of increasing global interchange, a comprehensive and inclusive global culture achieved by bridging gaps in cross-cultural communication would have significant potential to contribute to global social development goals, provided inadequacies in language constructs are adjusted to create avenues that intertwine with sociocultural beliefs, ensuring that meaningful and context-bound sociolinguistic values are observed within the global arena of communication.
Keywords: cross-cultural communication, customary language, literalisms, primary meaning, subclasses, transubstantiation
Procedia PDF Downloads 285
4215 Dynamic and Thermal Characteristics of Three-Dimensional Turbulent Offset Jet
Authors: Ali Assoudi, Sabra Habli, Nejla Mahjoub Saïd, Philippe Bournot, Georges Le Palec
Abstract:
Studying the flow characteristics of a turbulent offset jet is an important topic among researchers across the world because of its various engineering applications. Common examples include injection and carburetor systems, entrainment and mixing processes in gas turbine and boiler combustion chambers, thrust-augmenting ejectors for V/STOL aircraft, HVAC systems, environmental discharges, and film cooling, among many others. An offset jet is formed when a jet discharges into a medium above a horizontal solid wall that is parallel to the axis of the jet exit but offset from it by a certain distance. The structure of a turbulent offset jet can be described by three main regions. Close to the nozzle exit, an offset jet possesses characteristic features similar to those of free jets. Then, the entrainment of fluid between the jet, the offset wall, and the bottom wall creates a low-pressure zone, forcing the jet to deflect towards the wall and eventually attach to it at the impingement point; this is referred to as the Coanda effect. Further downstream, after the reattachment point, the offset jet has the characteristics of a wall jet flow. The offset jet therefore combines characteristics of free, impinging, and wall jets, and it is relatively more complex than these types of flows. The present study examines the dynamic and thermal evolution of a 3D turbulent offset jet with different offset height ratios (the ratio of the distance from the jet exit to the bottom impingement wall to the jet nozzle diameter). To achieve this, a numerical study was conducted to investigate a three-dimensional offset jet flow through the resolution of the governing Navier-Stokes equations by means of the finite volume method and the RSM second-order turbulence closure model. A detailed discussion is provided of the flow and thermal characteristics in the form of streamlines, mean velocity vectors, pressure fields, and Reynolds stresses.
Keywords: offset jet, offset ratio, numerical simulation, RSM
Procedia PDF Downloads 304
4214 Comparison of Cervical Length Using Transvaginal Ultrasonography and Bishop Score to Predict Successful Induction
Authors: Lubena Achmad, Herman Kristanto, Julian Dewantiningrum
Abstract:
Background: The Bishop score is a standard method used to predict the success of induction of labor. The examination tends to be subjective, with high inter- and intra-observer variability, so it is presumed to have a low predictive value for the outcome of labor induction. Cervical length measurement using transvaginal ultrasound is considered more objective for assessing the cervix; moreover, the examination is not a complicated procedure and is less invasive than vaginal touch. Objective: To compare transvaginal ultrasound and the Bishop score in predicting successful induction. Methods: This was a prospective cohort study. One hundred and twenty women with singleton pregnancies undergoing induction of labor at 37-42 weeks who met the inclusion and exclusion criteria were enrolled. Cervical assessment by both transvaginal ultrasound and Bishop score was conducted prior to induction. Successful labor induction was defined as the ability to reach the active phase within 12 hours of induction. To find the best cut-off points for cervical length and Bishop score, receiver operating characteristic (ROC) curves were plotted, and logistic regression analysis was used to determine which factors best predicted induction success. Results: This study showed significant differences in terms of age, premature rupture of the membranes, Bishop score, cervical length, and funneling as predictors of successful induction. Using the ROC curves, the best cut-off points for the prediction of successful induction were found to be 25.45 mm for cervical length and 3 for the Bishop score. Logistic regression showed that only premature rupture of the membranes and cervical length ≤ 25.45 mm significantly predicted the success of labor induction. With premature rupture of the membranes excluded as the indication for induction, a cervical length of less than 25.3 mm was a better predictor of successful induction. Conclusion: Compared to the Bishop score, cervical length measured by transvaginal ultrasound was a better predictor of successful induction.
Keywords: Bishop score, cervical length, induction, successful induction, transvaginal sonography
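A minimal sketch of finding an optimal ROC cut-off (here via Youden's J statistic) for a continuous predictor such as cervical length; the synthetic data are placeholders, not the study's measurements, and scikit-learn is assumed to be available:

```python
# ROC-based cut-off selection for "shorter cervix predicts success".
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
success = rng.integers(0, 2, size=120)           # 1 = successful induction
cervical_mm = np.where(success == 1,
                       rng.normal(22, 5, 120),   # successes: shorter cervix
                       rng.normal(30, 5, 120))

# roc_curve expects "higher score = positive", so negate the length.
fpr, tpr, thresholds = roc_curve(success, -cervical_mm)
best = np.argmax(tpr - fpr)                      # maximize Youden's J
print(f"best cut-off: cervical length <= {-thresholds[best]:.2f} mm "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```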
Procedia PDF Downloads 325
4213 Bioinformatics Approach to Support Genetic Research in Autism in Mali
Authors: M. Kouyate, M. Sangare, S. Samake, S. Keita, H. G. Kim, D. H. Geschwind
Abstract:
Background & Objectives: Human genetic studies can be expensive, even unaffordable, in developing countries, partly due to sequencing costs. Our aim is to pilot the use of bioinformatics tools to guide scientifically valid, locally relevant, and economically sound autism genetic research in Mali. Methods: The NCBI, HGMD, and LSDB databases were used to identify hotspot mutations. The phenotype, transmission pattern, theoretical protein expression in the brain, and the impact of each mutation on the 3D structure of the protein were used to prioritize the selected autism genes. We used the protein database, Modeller, and Clustal W. Results: We found Mef2c (Gly27Ala/Leu38Gln), Pten (Thr131Ile), Prodh (Leu289Met), Nme1 (Ser120Gly), and Dhcr7 (Pro227Thr/Glu224Lys). These mutations were associated with the endonucleases BseRI, NspI, PfrJS2IV, BspGI, BsaBI, and SpoDI, respectively. The Gly27Ala/Leu38Gln mutations affected the 3D structure of the Mef2c protein. Mef2c protein sequences across species showed a high percentage of similarity, with a highly conserved MADS domain. Discussion: The frequencies of the Mef2c, Pten, Prodh, Nme1, and Dhcr7 gene mutations in the Malian population will be very informative. PCR coupled with restriction enzyme digestion can be used to screen for the targeted gene mutations, with Sanger sequencing used for confirmation only. This will considerably cut the sequencing cost of gene-by-gene mutation screening. Knowledge of the 3D structure and of the potential impact of the mutations on the Mef2c protein informed the protein family and the altered function (e.g., Leu38Gln). Conclusion & Future Work: Bioinformatics will positively impact autism research in Mali. Our approach can be applied to other neuropsychiatric disorders.
Keywords: bioinformatics, endonucleases, autism, Sanger sequencing, point mutations
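A minimal sketch of the PCR-RFLP screening logic described above: if a mutation creates or destroys a restriction site, the digest pattern reveals the genotype without sequencing. The amplicon sequences and the recognition site shown are hypothetical placeholders; real recognition sites must be taken from a resource such as REBASE, and the mutation-enzyme pairings from the study itself:

```python
# Check whether a restriction recognition site is present in an amplicon;
# a point mutation that destroys the site yields an uncut fragment.
def find_sites(amplicon, recognition):
    """Return 0-based positions where the recognition site occurs."""
    return [i for i in range(len(amplicon) - len(recognition) + 1)
            if amplicon[i:i + len(recognition)] == recognition]

site = "GATATC"                         # hypothetical 6-bp recognition site
wild_type = "ACCTGGATATCGGTCA"          # site present: enzyme cuts
mutant = "ACCTGGATGTCGGTCA"             # point mutation destroys the site

for label, seq in [("wild type", wild_type), ("mutant", mutant)]:
    cuts = find_sites(seq, site)
    verdict = "cut (site present)" if cuts else "uncut (site lost)"
    print(f"{label}: {verdict} at positions {cuts}")
```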
Procedia PDF Downloads 83
4212 Study on Co-Relation of Prostate Specific Antigen with Metastatic Bone Disease in Prostate Cancer on Skeletal Scintigraphy
Authors: Muhammad Waleed Asfandyar, Akhtar Ahmed, Syed Adib-ul-Hasan Rizvi
Abstract:
Objective: To evaluate the ability of the serum concentration of prostate specific antigen (PSA), at two candidate cut-off points, to predict skeletal metastasis on bone scintigraphy in men with prostate cancer. Settings: This study was carried out in the Department of Nuclear Medicine at the Sindh Institute of Urology and Transplantation (SIUT), Karachi, Pakistan. Materials and Methods: From August 2013 to November 2013, forty-two (42) consecutive patients with prostate cancer who underwent technetium-99m methylene diphosphonate (Tc-99m MDP) whole-body bone scintigraphy were prospectively analyzed. The information was collected from the scintigraphic database of the Nuclear Medicine Department. Patients who did not have a serum PSA concentration available within 1 month before or after the Tc-99m MDP whole-body bone scintigraphy were excluded. Whole-body bone scintigraphy (from the toes to the top of the head) was performed using a moving gamma camera technique (anterior and posterior views) 2-4 hours after intravenous injection of 20 mCi of Tc-99m MDP. In addition, all patients necessarily had a pathology report available. Bone metastases were determined from the bone scan studies, with no further correlation with histopathology or other imaging modalities. To preserve patient confidentiality, direct patient identifiers were not collected. In all patients, PSA values and skeletal scintigraphy were evaluated. Results: The mean age, mean PSA, and incidence of bone metastasis on bone scintigraphy were 68.35 years, 370.51 ng/mL, and 19/42 (45.23%), respectively. According to PSA levels, patients presenting a negative bone scan were divided into six groups: < 10 ng/mL (10/42), 10-20 ng/mL (5/42), 20-50 ng/mL (2/42), 50-100 ng/mL (3/42), 100-500 ng/mL (3/42), and more than 500 ng/mL (0/42). The incidence of a positive bone scan for bone metastasis in each group was 1 patient (5.26%), 0%, 3 patients (15.78%), 1 patient (5.26%), 4 patients (21.05%), and 10 patients (52.63%), respectively. Of the 42 patients, 19 (45.23%) presented a positive scintigraphic examination for bone metastasis. One patient with bone metastasis on bone scintigraphy had a PSA level of less than 10 ng/mL, and only 1 patient (5.26%) with bone metastasis had a PSA concentration of less than 20 ng/mL. Thus, when the cut-off adopted for the serum PSA concentration was 10 ng/mL, the negative predictive value for bone metastasis was 95% with a sensitivity of 94.74%, while the positive predictive value and specificity were 56.53% and 43.48%, respectively. When the cut-off was 20 ng/mL, the positive predictive value and specificity were 78.27% and 65.22%, respectively, whereas the negative predictive value and sensitivity stood at 100% and 95%. Conclusion: The results of our study allow us to conclude that a serum PSA concentration higher than 20 ng/mL was a more accurate cut-off than one higher than 10 ng/mL for predicting metastasis on radionuclide bone scintigraphy. In this way, unnecessary cost can be avoided, since a considerable proportion of prostate adenocarcinomas present serum PSA levels below 20 ng/mL, and for these cases radionuclide bone scintigraphy may be unnecessary.
Keywords: bone scan, cut off value, prostate specific antigen value, scintigraphy
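A minimal sketch of the diagnostic-accuracy arithmetic used for each PSA cut-off; the 2x2 counts below are illustrative placeholders, so plug in the table for a given cut-off to reproduce the reported sensitivity, specificity, PPV, and NPV:

```python
# Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative table for a "PSA > 20 ng/mL predicts metastasis" rule.
m = diagnostic_metrics(tp=18, fp=8, tn=15, fn=1)
for name, value in m.items():
    print(f"{name}: {value:.2%}")
```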
Procedia PDF Downloads 3194211 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception
Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom
Abstract:
Global vision, whether provided by overhead fixed cameras, on-board aerial vehicle cameras, or satellite images, can always provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method of path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations, and unnecessarily lengthy paths around obstacles. The method proposes an exponential angle deviation field around each obstacle that affects the orientation of a nearby robot. As the robot moves toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the robot's orientation. Exponential field parameters are chosen based on the Lyapunov stability criterion to guarantee robot convergence to the destination. The proposed method uses the obstacles' shape and location, extracted from the global vision system, through a collision prediction mechanism to decide whether to activate or deactivate an obstacle's field. In addition, a search mechanism is developed for the case in which the robot or goal point is trapped among obstacles, to find a suitable exit or entrance. The proposed algorithm is validated both in simulation and through experiments. The algorithm shows effectiveness in obstacle avoidance and destination convergence, overcoming common path-planning problems found in classical methods. Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots
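The abstract does not give the field equation, but the behaviour it describes, a heading deviation that decays exponentially with obstacle distance and whose sign depends on the obstacle's side relative to the goal direction, can be sketched as follows. The field form k * exp(-lam * d) and both parameters are assumptions for illustration, not values from the paper.

```python
import numpy as np

def steer(robot, goal, obstacles, k=1.0, lam=0.5):
    """Heading toward the goal, deviated by an exponential field per obstacle.

    A sketch of the idea described above; the form k * exp(-lam * d) and the
    parameter values are assumptions, not the paper's exact formulation.
    """
    to_goal = goal - robot
    theta = np.arctan2(to_goal[1], to_goal[0])  # nominal heading to the goal
    deviation = 0.0
    for obs in obstacles:
        to_obs = obs - robot
        d = np.linalg.norm(to_obs)
        # sign of the 2D cross product classifies the obstacle as left (+) or right (-)
        side = np.sign(to_goal[0] * to_obs[1] - to_goal[1] * to_obs[0])
        # a left-side obstacle steers the robot right, and vice versa;
        # the effect decays exponentially with distance
        deviation -= side * k * np.exp(-lam * d)
    return theta + deviation

# Example: one obstacle to the left of the straight-line path deflects the heading right
print(steer(np.array([0.0, 0.0]), np.array([10.0, 0.0]), [np.array([4.0, 1.0])]))
```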
Procedia PDF Downloads 1944210 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Nevertheless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the BindsNET framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use spike-timing-dependent plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset, and subsequently the five input neurons are fired in the reverse order. As the regular LIF neuron linearly integrates its inputs at the soma, its membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computation improves LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strengths with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order. Keywords: dendritic computation, spiking neural networks, point neuron model
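A minimal sketch of the mechanism described above: each pair of synapses carries a relation variable, and a synapse's effective weight is scaled by the recent activity (trace) of its neighbours, so input order changes the membrane response. All constants and the exact modulation rule are assumptions, and this is plain NumPy rather than the BindsNET implementation used in the study.

```python
import numpy as np

class DendriticLIF:
    """LIF neuron whose synapses modulate each other: a sketch of the idea
    above, with assumed constants and an assumed modulation rule."""

    def __init__(self, n_syn, tau=20.0, v_thresh=1.0, trace_decay=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.1, 0.5, n_syn)                # synaptic weights
        self.rel = rng.uniform(-0.2, 0.2, (n_syn, n_syn))    # pairwise relation variables
        np.fill_diagonal(self.rel, 0.0)                      # no self-relation
        self.trace = np.zeros(n_syn)                         # recent activity per synapse
        self.v, self.tau = 0.0, tau
        self.v_thresh, self.trace_decay = v_thresh, trace_decay

    def step(self, spikes):
        # effective weight = base weight scaled by neighbours' recent activity
        mod = 1.0 + self.rel @ self.trace
        self.v += np.sum(self.w * mod * spikes)
        self.trace = self.trace_decay * self.trace + spikes  # update synaptic traces
        self.v *= np.exp(-1.0 / self.tau)                    # membrane leak
        if self.v >= self.v_thresh:                          # spike and reset
            self.v = 0.0
            return 1
        return 0
```

Driving such a neuron with the same five input spikes in forward and then reverse order yields different membrane trajectories, whereas with all relation variables set to zero it reduces to a plain LIF neuron that responds identically to both orderings.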
Procedia PDF Downloads 1334209 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association
Authors: Jacky Liu
Abstract:
This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies on sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, it offers an annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets. Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation
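As a rough illustration of the point spread approach, the sketch below trains an XGBoost regressor on engineered team features and bets only when the model's predicted spread diverges from the bookmaker line by a margin. The file name, feature names, and the 3-point threshold are hypothetical stand-ins, not the paper's actual pipeline.

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

# Hypothetical engineered features per game; the paper's exact feature set
# and data file are not given in the abstract.
df = pd.read_csv("nba_games.csv")  # assumed columns shown below
features = ["home_off_rating_10g", "home_def_rating_10g",
            "away_off_rating_10g", "away_def_rating_10g", "rest_days_diff"]

# Preserve chronological order when splitting, since games form a time series.
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["point_spread"], test_size=0.2, shuffle=False)

model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=4)
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Bet only when the model disagrees with the bookmaker line by a margin;
# the 3-point threshold is an illustrative choice, not the study's value.
edge = pred - df.loc[X_test.index, "bookmaker_line"].to_numpy()
bets = np.abs(edge) > 3.0
print(f"Placing {bets.sum()} bets out of {len(bets)} test games")
```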
Procedia PDF Downloads 102