Search results for: point absorber

4221 Software Architecture Implications on Development Productivity: A Case of Malawi Point of Care Electronic Medical Records

Authors: Emmanuel Mkambankhani, Tiwonge Manda

Abstract:

Software platform architecture includes system components, their relationships and design, as well as evolution principles. Software architecture and documentation affect a platform's customizability and openness to external innovators, thus affecting developer productivity. The Malawi Point of Care (POC) Electronic Medical Records System (EMRS) follows some architectural design standards, but it lacks third-party innovators and is difficult to customize compared to CommCare and the District Health Information System 2 (DHIS2). Improving the software architecture and documentation for the Malawi POC will increase productivity and third-party contributions. A conceptual framework based on Generativity and the Boundary Resource Model (BRM) was used to compare the three platforms. Interviews, observations, and document analysis were used to collect primary and secondary data. Themes were identified by analyzing the qualitative and quantitative data, which led to the following results: configurable, flexible, and cross-platform software platforms, together with the availability of interfaces (boundary resources) that let internal and external developers interact with the platform's core functionality, boost developer productivity. Furthermore, documentation increases developer productivity, while its absence inhibits the use of resources. The study suggests that the architecture and openness of the Malawi POC EMR software platform can be improved by standardizing web application programming interfaces (APIs) and providing user-configurable interfaces. In addition, increasing the availability of documentation and training will improve the use of boundary resources, thus improving internal and third-party development productivity.

Keywords: health systems, configurable platforms, software architecture, software documentation, software development productivity

Procedia PDF Downloads 84
4220 Discriminant Shooting-Related Statistics between Winners and Losers 2023 FIBA U19 Basketball World Cup

Authors: Navid Ebrahmi Madiseh, Sina Esfandiarpour-Broujeni, Rahil Razeghi

Abstract:

Introduction: Quantitative analysis of game-related statistical parameters is widely used to evaluate basketball performance at both individual and team levels. Non-free throw shooting plays a crucial role as the primary scoring method, holding significant importance in the game's technical aspect. The predictive value of game-related statistics in relation to various contextual and situational variables has been explored, and many similarities and differences have been found between different age groups and levels of competition. For instance, in the World Basketball Championships after the 2010 rule change, 2-point field goals distinguished winners from losers in women's games but not in men's games, and the impact of successful 3-point field goals on women's games was minimal. The study aimed to identify and compare discriminant shooting-related statistics between winning and losing teams in the men's and women's FIBA-U19-Basketball-World-Cup-2023 tournaments. Method: Data from 112 observations (2 per game) of 16 teams (for each gender) in the FIBA-U19-Basketball-World-Cup-2023 were selected as samples. The data were obtained from the official FIBA website using Python. Specific information was extracted, organized into a DataFrame, and consisted of twelve variables, including shooting percentages, attempts, and scoring ratios for 3-pointers, mid-range shots, paint shots, and free throws:

Made% = (successful attempts of a scoring type) / (total attempts of that scoring type) (1)
Free-throw-pts% (free throw score ratio) = (free throw score / total score) × 100 (2)
Mid-pts% (mid-range score ratio) = (mid-range score / total score) × 100 (3)
Paint-pts% (paint score ratio) = (paint score / total score) × 100 (4)
3p-pts% (three-point score ratio) = (three-point score / total score) × 100 (5)

Independent t-tests were used to examine significant differences in shooting-related statistical parameters between winning and losing teams for both genders. Statistical significance was set at p < 0.05. All statistical analyses were completed with SPSS, Version 18. Results: The results showed that 3p-made%, mid-pts%, paint-made%, paint-pts%, mid-attempts, and paint-attempts were significantly different between winners and losers in men (t=-3.465, P<0.05; t=3.681, P<0.05; t=-5.884, P<0.05; t=-3.007, P<0.05; t=2.549, P<0.05; t=-3.921, P<0.05). For women, significant differences between winners and losers were found for 3p-made%, 3p-pts%, paint-made%, and paint-attempts (t=-6.429, P<0.05; t=-1.993, P<0.05; t=-1.993, P<0.05; t=-4.115, P<0.05; t=2.451, P<0.05). Discussion: The research aimed to compare shooting-related statistics between winners and losers in men's and women's teams at the FIBA-U19-Basketball-World-Cup-2023. Results indicated that men's winners excelled in 3p-made%, paint-made%, paint-pts%, paint-attempts, and mid-attempts, consistent with previous studies. This study found that losers in men's teams had higher mid-pts% than winners, which was inconsistent with previous findings. It has been indicated that winners tend to prioritize statistically efficient shots while forcing the opponent to take mid-range shots. In women's games, significant differences in 3p-made%, 3p-pts%, paint-made%, and paint-attempts were observed, indicating that winners relied on riskier outside scoring strategies. Overall, winners exhibited higher accuracy in paint and 3P shooting than losers, but they also relied more on outside offensive strategies. Additionally, winners acquired a higher ratio of their points from 3P shots, which demonstrates their confidence in their skills and willingness to take risks at this competitive level.
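A minimal sketch of how such shooting ratios and group comparisons could be computed from box-score records with Python (pandas and SciPy). The column names and example values are illustrative assumptions, not the study's actual dataset.

```python
import pandas as pd
from scipy import stats

# Hypothetical box-score records: one row per team per game (column names assumed).
df = pd.DataFrame({
    "outcome":      ["win", "loss", "win", "loss"],
    "paint_made":   [22, 16, 25, 18],
    "paint_att":    [38, 35, 41, 36],
    "total_points": [78, 65, 84, 70],
    "three_points": [24, 18, 27, 15],
})

# Shooting percentage and scoring-ratio variables in the spirit of Eqs. (1)-(5).
df["paint_made_pct"] = df["paint_made"] / df["paint_att"]
df["three_pts_pct"] = df["three_points"] / df["total_points"] * 100

# Independent t-test between winners and losers for one variable (p < 0.05 criterion).
winners = df.loc[df["outcome"] == "win", "paint_made_pct"]
losers = df.loc[df["outcome"] == "loss", "paint_made_pct"]
t_stat, p_value = stats.ttest_ind(winners, losers)
print(f"paint made%: t = {t_stat:.3f}, p = {p_value:.3f}")
```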

Keywords: gender, losers, shoot-statistic, U19, winners

Procedia PDF Downloads 95
4219 Construction and Optimization of Green Infrastructure Network in Mountainous Counties Based on Morphological Spatial Pattern Analysis and Minimum Cumulative Resistance Models: A Case Study of Shapingba District, Chongqing

Authors: Yuning Guan

Abstract:

Against the background of rapid urbanization, mountainous counties need to break through mountain barriers for urban expansion due to their undulating topography, resulting in ecological problems such as landscape fragmentation and reduced biodiversity. Green infrastructure networks are constructed to alleviate the contradiction between urban expansion and ecological protection, promoting the healthy and sustainable development of urban ecosystems. This study applies the morphological spatial pattern analysis (MSPA) model, the minimum cumulative resistance (MCR) model and Linkage Mapper Tools to identify eco-sources and eco-corridors in the Shapingba District of Chongqing and, combined with landscape connectivity assessment and circuit theory, delineates importance levels and extracts ecological pinch-point areas on the corridors. The results show that: (1) 20 ecological sources are identified, with a total area of 126.47 km², accounting for 31.88% of the study area and showing a pattern of ‘one core, three corridors, multi-point distribution’. (2) 37 ecological corridors are formed in the area, with a total length of 62.52 km, in a ‘more in the west, less in the east’ pattern. (3) 42 ecological pinch points are extracted, accounting for 25.85% of the length of the corridors, which are mainly distributed in the eastern new area. Accordingly, this study proposes optimization strategies for the sub-area protection of ecological sources, the graded construction of ecological corridors, and the precise restoration of ecological pinch points.
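As a rough illustration of the minimum-cumulative-resistance step, the sketch below derives a least-cost (lowest cumulative resistance) path between two ecological source cells over a resistance raster using scikit-image. The resistance values and source locations are made-up assumptions; the actual study used MSPA, MCR and Linkage Mapper workflows on real land-cover data.

```python
import numpy as np
from skimage import graph

# Hypothetical resistance surface (higher = harder for species movement).
resistance = np.array([
    [1, 1, 5, 5, 1],
    [1, 2, 8, 5, 1],
    [1, 2, 8, 8, 1],
    [1, 1, 1, 1, 1],
], dtype=float)

source_a = (0, 0)   # assumed eco-source cells
source_b = (0, 4)

# Minimum cumulative resistance from source_a to every cell, then trace the
# least-cost corridor ending at source_b.
mcp = graph.MCP_Geometric(resistance)
cum_cost, _ = mcp.find_costs([source_a])
corridor = mcp.traceback(source_b)

print("cumulative resistance to source_b:", cum_cost[source_b])
print("least-cost corridor cells:", corridor)
```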

Keywords: green infrastructure network, morphological spatial pattern, minimal cumulative resistance, mountainous counties, circuit theory, shapingba district

Procedia PDF Downloads 40
4218 Analysis of Senior Secondary II Students' Performance/Approaches Exhibited in Solving Circle Geometry

Authors: Mukhtari Hussaini Muhammad, Abba Adamu

Abstract:

The paper will examine the approaches and solutions offered by Senior Secondary School II students (Demonstration Secondary School, Azare, Bauchi State, Northern Nigeria – a predominantly Hausa/Fulani area) toward solving exercises related to the circle theorem: the angle that an arc of a circle subtends at the center is twice that which it subtends at any point on the remaining part of the circumference. The students will be divided into 2 groups by giving them the numbers 1, 2; 1, 2; 1, 2, so that all 1s form group I and all 2s form group II. Group I will be considered the control group, in which the traditional method will be applied during instruction: the researcher will revise the concept of the circle, state the theorem, prove the theorem and then solve examples. Group II is the experimental group, in which the concept of the circle will be revised with the students, and the students will then be asked to draw different circles, mark arcs, draw the angle at the center and the angle at the circumference, and then measure the angles constructed. The students will be asked to explain what they can infer/deduce from the angles measured and, lastly, examples will be solved. On the next contact day, both groups will be subjected to solving classroom exercises related to the theorem that the angle an arc of a circle subtends at the center is twice that which it subtends at any point on the remaining part of the circumference. The solutions to the exercises will be marked and the scores compared/analysed using relevant statistical tools. It is expected that group II will perform better because the method/technique followed during instruction is more learner-centered. Exploiting the talents of individual learners by listening to their views and asking them how they arrived at a solution will really improve learning and understanding.
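For reference, a standard proof sketch of the circle theorem used in these exercises (for the case where the center lies inside the inscribed angle) can be written as follows.

```latex
% Central angle theorem: the angle subtended at the center is twice the
% angle subtended at the circumference (case with O inside angle APB).
\begin{align*}
&\text{Let } O \text{ be the center, } A, B \text{ points on the circle, and } P
  \text{ a point on the major arc } AB.\\
&\text{Triangle } OPA \text{ is isosceles } (OA = OP = r)
  \;\Rightarrow\; \angle OPA = \angle OAP = \alpha .\\
&\text{Extend } PO \text{ to a point } X:\ \text{the exterior angle gives } \angle AOX = 2\alpha .\\
&\text{Similarly, } \angle BOX = 2\beta \text{ with } \beta = \angle OPB .\\
&\therefore\; \angle AOB = \angle AOX + \angle BOX = 2(\alpha + \beta) = 2\,\angle APB .
\end{align*}
```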

Keywords: circle theorem, control group, experimental group, traditional method

Procedia PDF Downloads 188
4217 ChaQra: A Cellular Unit of the Indian Quantum Network

Authors: Shashank Gupta, Iteash Agarwal, Vijayalaxmi Mogiligidda, Rajesh Kumar Krishnan, Sruthi Chennuri, Deepika Aggarwal, Anwesha Hoodati, Sheroy Cooper, Ranjan, Mohammad Bilal Sheik, Bhavya K. M., Manasa Hegde, M. Naveen Krishna, Amit Kumar Chauhan, Mallikarjun Korrapati, Sumit Singh, J. B. Singh, Sunil Sud, Sunil Gupta, Sidhartha Pant, Sankar, Neha Agrawal, Ashish Ranjan, Piyush Mohapatra, Roopak T., Arsh Ahmad, Nanjunda M., Dilip Singh

Abstract:

Major research interests in quantum key distribution (QKD) are primarily focused on increasing (1) the point-to-point transmission distance (1000 km), (2) the secure key rate (Mbps), and (3) the security of the quantum layer (device independence). It is great to push the boundaries on these fronts, but these isolated approaches are neither scalable nor cost-effective due to the requirements of specialised hardware and different infrastructure. Current and future QKD networks require addressing different sets of challenges apart from distance, key rate, and quantum security. In this regard, we present ChaQra, a sub-quantum network with the following core features: 1) crypto agility (integration into the already deployed telecommunication fibres), 2) software-defined networking (an SDN paradigm for routing between different nodes), 3) reliability (addressing denial of service with hybrid quantum-safe cryptography), 4) upgradability (module upgrades based on scientific and technological advancements), and 5) beyond QKD (using the QKD network for distributed computing, multi-party computation, etc.). Our results demonstrate a clear path to create and accelerate a quantum-secure Indian subcontinent under the national quantum mission.

Keywords: quantum network, quantum key distribution, quantum security, quantum information

Procedia PDF Downloads 51
4216 Thermal Image Segmentation Method for Stratification of Freezing Temperatures

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

The study uses an image analysis technique employing thermal imaging to measure the percentage of areas with various temperatures on a freezing surface. An image segmentation method using threshold values is applied to a sequence of images recording the freezing process. The phenomenon is transient, and temperatures vary quickly to reach the freezing point and complete the freezing process. Freezing salt water is subject to salt rejection, which makes the freezing point dynamic and dependent on the salinity at the phase interface. For a specific area of freezing, nucleation starts from one side and ends at the other side, which causes a dynamic and transient temperature in that area. Thermal cameras are able to reveal differences in temperature due to their sensitivity to infrared radiance. Using an experimental setup, a video is recorded by a thermal camera to monitor radiance and temperatures during the freezing process. Image processing techniques are applied to all frames to detect and classify temperatures on the surface. An image segmentation method is used to find contours with the same temperature on the icing surface. Each segment is obtained using the temperature range that appears in the image and the corresponding pixel values. Using the contours extracted from the image and the camera parameters, stratified areas with different temperatures are calculated. To observe temperature contours on the icing surface using the thermal camera, a salt water sample is dropped on a cold surface at a temperature of -20°C. A thermal video is recorded for 2 minutes to observe the temperature field. Examining the results obtained by the method against the experimental observations verifies the accuracy and applicability of the method.
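A minimal sketch of the threshold-based segmentation step, assuming each thermal frame has already been converted to a per-pixel temperature array. The temperature bands, array values and pixel-to-area conversion are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def stratify_by_temperature(temp_frame, bands, pixel_area_mm2=1.0):
    """Segment a thermal frame (°C per pixel) into temperature bands and
    return the area and percentage of the frame falling in each band."""
    total_pixels = temp_frame.size
    results = {}
    for t_low, t_high in bands:
        mask = (temp_frame >= t_low) & (temp_frame < t_high)   # threshold segmentation
        n = int(mask.sum())
        results[(t_low, t_high)] = {
            "area_mm2": n * pixel_area_mm2,
            "percent": 100.0 * n / total_pixels,
        }
    return results

# Example: a synthetic 4x4 "frame" and three bands around the freezing range.
frame = np.array([[-20, -18, -5, -2],
                  [-19, -10, -4, -1],
                  [-15,  -8, -3,  0],
                  [-12,  -6, -2,  1]], dtype=float)
bands = [(-25, -15), (-15, -5), (-5, 5)]
for band, info in stratify_by_temperature(frame, bands).items():
    print(band, info)
```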

Keywords: ice contour boundary, image processing, image segmentation, salt ice, thermal image

Procedia PDF Downloads 314
4215 The Capabilities of New Communication Devices in Development of Informing: Case Study Mobile Functions in Iran

Authors: Mohsen Shakerinejad

Abstract:

Due to the growing momentum of technology, the present age is called the age of communication and information. With the astounding progress of communication and information tools, the current world is likened to a 'global village', in which a message can be sent from one point of the world to another in less than a minute. However, one of the new sociologists, Alain Touraine, in describing the destructive effects of the new changes arising from the development of information appliances, refers to 'new fields for undemocratic social control and the incidence of acute social and political tensions and unrest'. Yet, in this era in which, with the advancement of industry, people's lives have become industrial too, transferring data quickly and accurately breathes new life into the body of society, and, according to the features of each society and the progress of science and technology, various tools should be used. One of these communication tools is the mobile phone. The cellular phone, as a communication and telecommunication revolution of recent years, has had a great influence on the individual and collective life of societies. This powerful communication tool has had an undeniable effect on all aspects of life, including social, economic, cultural, scientific, etc., so that ignoring it in the design, implementation and enforcement of any system is not wise. Nowadays, knowledge and information are among the most important aspects of human life. Therefore, in this article, an attempt has been made to introduce the potential of mobile phones in receiving and transmitting news and information. Among the numerous capabilities of current mobile phones, features such as sending text, photography, sound recording, filming, and Internet connectivity indicate the potential of this medium of communication in the process of sending and receiving information, so that nowadays mobile journalism, as an important component of citizen journalism, has a unique role in information dissemination.

Keywords: mobile, informing, receiving information, mobile journalism, citizen journalism

Procedia PDF Downloads 406
4214 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling

Authors: M. Almutairi, S. Hadjiloucas

Abstract:

The harmonic distortion of voltage is important in relation to power quality due to the interaction of the large diffusion of non-linear and time-varying single-phase and three-phase loads with power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency, simplicity, and are economical. Additionally, their different possible frequency response characteristics can be used to achieve the required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks, in order to economically limit harmonic violations at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique works to minimize the voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining the power factor within a specified range. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique will be discussed using numerical examples taken from previous publications.
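A sketch of how a single tuned filter might be sized from the desired reactive power compensation and tuning order, using the standard single tuned filter relations. The system voltage, reactive power, tuning order and quality factor below are illustrative assumptions rather than values from the paper.

```python
import math

# Assumed design inputs (illustrative only).
V_ll = 11e3        # line-to-line voltage, V
Q_c = 1.5e6        # reactive power to be supplied at the fundamental, var
f1 = 50.0          # fundamental frequency, Hz
h = 4.85           # tuning order (slightly below the 5th harmonic, common practice)
Q_factor = 40.0    # filter quality factor

w1 = 2 * math.pi * f1
X_eff = V_ll**2 / Q_c                 # net fundamental reactance of the filter branch
X_c = X_eff * h**2 / (h**2 - 1)       # capacitive reactance at the fundamental
X_l = X_c / h**2                      # inductive reactance at the fundamental
C = 1 / (w1 * X_c)                    # capacitance, F
L = X_l / w1                          # inductance, H
X_0 = math.sqrt(X_l * X_c)            # characteristic reactance at the tuned frequency
R = X_0 / Q_factor                    # series resistance implied by the quality factor

print(f"C = {C*1e6:.1f} uF, L = {L*1e3:.2f} mH, R = {R:.3f} ohm")
```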

Keywords: harmonics, passive filter, power factor, power quality

Procedia PDF Downloads 304
4213 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator based on the ID-POS database for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and other nearby competitor supermarkets significantly affect the customer value of customers of a given supermarket. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of one supermarket chain. During the modeling process, three primary problems arose: the incomparability of customer values, multicollinearity between customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that includes all three methods is the most suitable for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
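A minimal sketch of the Huff gravity model step, computing the probability that a customer patronizes each nearby supermarket from store attractiveness and home-to-store distance. Store sizes, distances and the exponents alpha and beta are illustrative assumptions.

```python
import numpy as np

def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """Huff's gravity model: P_ij = (S_j^alpha / D_ij^beta) / sum_k (S_k^alpha / D_ik^beta)."""
    utility = (attractiveness ** alpha) / (distances ** beta)
    return utility / utility.sum()

# One customer, three competing supermarkets (floor area as attractiveness, km as distance).
store_size = np.array([2500.0, 1200.0, 4000.0])   # assumed attractiveness proxy
dist_km = np.array([0.8, 1.5, 2.4])               # assumed home-to-store distances

probs = huff_probabilities(store_size, dist_km)
print("patronage probabilities:", np.round(probs, 3))
```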

Keywords: customer value, Huff's gravity model, POS, retailer

Procedia PDF Downloads 118
4212 Imaging 255nm Tungsten Thin Film Adhesion with Picosecond Ultrasonics

Authors: A. Abbas, X. Tridon, J. Michelon

Abstract:

In the electronics and photovoltaic industries, components are made from wafers, which are stacks of thin film layers from a few nanometers to several micrometers in thickness. Early evaluation of the bonding quality between the different layers of a wafer is one of the challenges of these industries to avoid dysfunction of their final products. Traditional pump-probe experiments, which were developed in the 1970s, give a partial solution to this problem, but with a non-negligible drawback. On the one hand, these setups can generate and detect ultra-high ultrasound frequencies which can be used to evaluate the adhesion quality of wafer layers. But, on the other hand, because of the quite long acquisition time they need to perform one measurement, these setups remain limited to point measurements for evaluating global sample quality. This last point can lead to a misinterpretation of the sample quality parameters, especially in the case of inhomogeneous samples. Asynchronous Optical Sampling (ASOPS) systems can perform sample characterization with picosecond acoustics up to 10⁶ times faster than traditional pump-probe setups. This allows picosecond ultrasonics to unlock acoustic imaging at the nanometric scale to detect inhomogeneities in sample mechanical properties. This will be illustrated by presenting an image of the measured acoustic reflection coefficients obtained by mapping, with an ASOPS setup, a 255 nm thin-film tungsten layer deposited on a silicon substrate. The interpretation of the reflection coefficient in terms of bonding/adhesion quality will also be presented. The origin of zones which exhibit good and bad bonding quality will be discussed.
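A small worked example of the acoustic reflection coefficient probed at a well-bonded film/substrate interface, using nominal literature values for the densities and longitudinal sound velocities of tungsten and silicon. The numbers are rough assumptions for illustration only; a degraded (poorly bonded) interface would deviate from this ideal value.

```python
# Acoustic impedance Z = rho * v; reflection coefficient r = (Z2 - Z1) / (Z2 + Z1).
rho_w, v_w = 19300.0, 5200.0     # tungsten: density (kg/m^3), longitudinal velocity (m/s), approx.
rho_si, v_si = 2330.0, 8430.0    # silicon: density (kg/m^3), longitudinal velocity (m/s), approx.

Z_w = rho_w * v_w
Z_si = rho_si * v_si

# Acoustic pulse travelling in the tungsten film, hitting the film/substrate interface.
r = (Z_si - Z_w) / (Z_si + Z_w)
print(f"Z_W = {Z_w/1e6:.1f} MRayl, Z_Si = {Z_si/1e6:.1f} MRayl, r = {r:.2f}")
```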

Keywords: adhesion, picosecond ultrasonics, pump-probe, thin film

Procedia PDF Downloads 156
4211 Detection of the Effectiveness of Training Courses and Their Limitations Using CIPP Model (Case Study: Isfahan Oil Refinery)

Authors: Neda Zamani

Abstract:

The present study aimed to investigate the effectiveness of training courses and their limitations using the CIPP model. The investigation was carried out on the Isfahan Refinery as a case study. In terms of purpose, the present paper is applied research, and in terms of data gathering it is descriptive, field-survey research. The population of the study included participants in training courses, their supervisors and experts of the training department. Probability-proportional-to-size (PPS) sampling was used. The sample included 195 training course participants, 30 supervisors and 11 individuals from the training experts' group. To collect data, a questionnaire designed by the researcher and a semi-structured interview were used. The content validity of the instruments was confirmed by training management experts, and reliability was established with a Cronbach's alpha of 0.92. To analyze the data, descriptive statistics (tables, frequency, frequency percentage and mean) were applied, as well as inferential statistics (the Mann-Whitney and Wilcoxon tests, and the Kruskal-Wallis test to determine the significance of differences between the groups' opinions). Results of the study indicated that all groups, i.e., participants, supervisors and training experts, absolutely believe in the importance of training courses; however, participants in training courses regard content, teacher, atmosphere and facilities, training process, managing process and product as being at a relatively appropriate level. The supervisors also regard output as being at a relatively appropriate level, but training experts regard content, teacher and managing processes as being at an appropriate, higher-than-average level.
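A minimal sketch of the non-parametric group comparison described above, using SciPy. The rating vectors are fabricated placeholder numbers, not the study's survey data.

```python
from scipy import stats

# Hypothetical 5-point ratings of one CIPP dimension by the three groups.
participants = [3, 4, 3, 2, 4, 3, 3]
supervisors = [4, 4, 5, 3, 4]
experts = [5, 4, 5, 5]

# Kruskal-Wallis test for differences among the three groups' opinions.
h_stat, p_kw = stats.kruskal(participants, supervisors, experts)

# Mann-Whitney U test for one pairwise comparison.
u_stat, p_mw = stats.mannwhitneyu(participants, experts, alternative="two-sided")

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
print(f"Mann-Whitney (participants vs experts): U = {u_stat:.1f}, p = {p_mw:.3f}")
```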

Keywords: training courses, limitations of training effectiveness, CIPP model, Isfahan oil refinery company

Procedia PDF Downloads 68
4210 An Experimental Approach to the Influence of Tipping Points and Scientific Uncertainties in the Success of International Fisheries Management

Authors: Jules Selles

Abstract:

The Atlantic and Mediterranean bluefin tuna fishery has been considered the archetype of an overfished and mismanaged fishery. This crisis has demonstrated the role of public awareness and the importance of the interactions between science and management regarding scientific uncertainties. This work aims at investigating the policy-making process associated with a regional fisheries management organization. We propose a contextualized, computer-based experimental approach in order to explore the effects of key factors on the cooperation process in a complex straddling-stock management setting. Namely, we analyze the effects of the introduction of a socio-economic tipping point and of the uncertainty surrounding the estimation of the resource level. Our approach is based on a Gordon-Schaefer bio-economic model which explicitly represents the decision-making process. Each participant plays the role of a stakeholder of ICCAT, represents a coalition of fishing nations involved in the fishery, and decides unilaterally on a harvest policy for the coming year. The context of the experiment induces the incentives for exploitation and collaboration to achieve common sustainable harvest plans at the scale of the Atlantic bluefin tuna stock. Our rigorous framework allows testing how stakeholders who plan the exploitation of a fish stock (a common pool resource) respond to two kinds of effects: i) the inclusion of a drastic shift in the management constraints (beyond a socio-economic tipping point) and ii) an increasing uncertainty in the scientific estimation of the resource level.
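A minimal sketch of the Gordon-Schaefer bio-economic dynamics underlying such an experiment: logistic stock growth, a harvest proportional to effort and biomass, and the resulting rent. All parameter values are illustrative assumptions, not calibrated to Atlantic bluefin tuna.

```python
# Gordon-Schaefer model: B_{t+1} = B_t + r*B_t*(1 - B_t/K) - H_t, with H_t = q*E_t*B_t
# and profit_t = p*H_t - c*E_t. Parameters below are placeholders for illustration.
r, K, q = 0.3, 100_000.0, 0.001     # intrinsic growth, carrying capacity, catchability
p, c = 8.0, 50.0                    # price per unit harvest, cost per unit effort

def simulate(effort_per_year, B0=60_000.0, years=20):
    B, history = B0, []
    for E in effort_per_year[:years]:
        H = q * E * B                                   # annual harvest
        profit = p * H - c * E                          # annual rent
        B = max(B + r * B * (1.0 - B / K) - H, 0.0)     # stock update
        history.append((round(B), round(H), round(profit)))
    return history

# Example: a constant effort level decided unilaterally by one "coalition" each year.
for biomass, harvest, profit in simulate([150.0] * 20)[:5]:
    print(f"biomass={biomass}, harvest={harvest}, profit={profit}")
```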

Keywords: economic experiment, fisheries management, game theory, policy making, Atlantic Bluefin tuna

Procedia PDF Downloads 251
4209 Mobile Network Users Amidst Ultra-Dense Networks in 5G Using an Improved Coordinated Multipoint (CoMP) Technology

Authors: Johnson O. Adeogo, Ayodele S. Oluwole, O. Akinsanmi, Olawale J. Olaluyi

Abstract:

In 5G networks, supporting very high traffic density, most especially in densely populated areas, is one of the key requirements. Reducing radiation becomes one of the major concerns for securing the future health of mobile network users in ultra-dense network areas, and an improved coordinated multipoint technology is used here for that purpose. Coordinated Multi-Point (CoMP) is based on transmission and/or reception at multiple separated points, with improved coordination among them to actively manage the interference for the users. Small cells have two major objectives: first, they provide good coverage and/or performance, so that network users can maintain a good quality signal by connecting directly to the cell; second, they enable CoMP, which involves the use of multiple base stations (MBS) cooperating by transmitting and/or receiving at the same time in order to reduce the possibility of an increase in electromagnetic radiation. Therefore, the influence of a screen guard with a rubber condom on mobile transceivers, as one major piece of equipment emitting electromagnetic radiation, was investigated for mobile network users amidst ultra-dense 5G networks. The results were compared with the same mobile transceivers without screen guards and rubber condoms under the same network conditions. A 5 cm distance from the mobile transceivers was set with the help of a ruler, and the intensity of Radio Frequency (RF) radiation was measured using an RF meter. The results show that the intensity of radiation from the various mobile transceivers without screen guards and condoms was higher than that from the mobile transceivers with screen guards and condoms when a call conversation was active at both ends.

Keywords: ultra-dense networks, mobile network users, 5G, coordinated multi-point

Procedia PDF Downloads 94
4208 Classification on Statistical Distributions of a Complex N-Body System

Authors: David C. Ni

Abstract:

Contemporary models for N-body systems are based on temporal, two-body, and mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local-neighborhood lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models are still mainly for specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane – the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets show the form σ + i[-t, t], where σ and t are real numbers, and the interval [-t, t] shows various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which manifest the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.

Keywords: blaschke, lorentz transformation, complex variables, continuous, discrete, canonical, classification

Procedia PDF Downloads 308
4207 Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers

Authors: Liang Ming Zhong, Yu Zhong, Wen Zhong, Fei Fei Yin

Abstract:

Trinary affinity is a description of existence: every object exists as it is known and spoken of, in a system of 2 differences (denoted dif₁, dif₂) and 1 similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). They are mathematically verified and illustrated in this paper by the arrangement of all integers onto 3 columns, where each number exists as a difference in relation to another number as another difference, and the 2 difs are arbitrated by a third number as the Sim, resulting in a trinary affinity or trinity of 3 numbers, of which one is the known, the other the 'to be known', and the third the zero (0) from which both the kn and tkn are measured and specified. Consequently, any number is horizontally specified either as 3n, or as '3n – 1' or '3n + 1', and vertically as 'Cn + c', so that any number seems to occur at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called in this paper the realistic zero (denoted 0r, as contrasted to the mathematic zero, 0m), which corresponds to the constant c, and the nature of which separates the composite and prime numbers, and ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can possibly occur at its position, by dividing it by factors provided by the composite number formulas. The paper consists of 3 parts: 1) a brief explanation of the trinary affinity of things, 2) the 8 formulas that represent ALL the primes, and 3) families of composite numbers, each represented by a formula. A composite number family is described as 3n + f₁‧f₂. Since there are infinitely many composite number families, to verify the primality of a large probable prime, we have to divide it by several or many f₁ values from a range of composite number formulas, a procedure that is as laborious as it is the surest way of verifying a large number's primality. (So, it is possible to substitute planned division for trial division.)

Keywords: trinary affinity, difference, similarity, realistic zero

Procedia PDF Downloads 207
4206 A Handheld Light Meter Device for Methamphetamine Detection in Oral Fluid

Authors: Anindita Sen

Abstract:

Oral fluid is a promising diagnostic matrix for drugs of abuse compared to urine and serum. Detection of methamphetamine in oral fluid would pave the way for the easy evaluation of impairment in drivers during roadside drug testing, as well as ensure safe working environments by facilitating the evaluation of impairment in employees at workplaces. A membrane-based, point-of-care (POC) friendly pre-treatment technique has been developed which aided the elimination of interferences caused by salivary proteins and facilitated the demonstration of methamphetamine detection in saliva using a gold nanoparticle based colorimetric aptasensor platform. It was found that the colorimetric response in saliva was always suppressed owing to matrix effects. By navigating these challenging interference issues, we were able to successfully detect methamphetamine at nanomolar levels in saliva, offering immense promise for the translation of these platforms into on-site diagnostic systems. This subsequently motivated the development of a handheld, portable light meter device that can reliably transduce the aptasensor's colorimetric response into absorbance, facilitating quantitative detection of analyte concentrations on-site. This is crucial due to the prevalent unreliability and sensitivity problems of conventional drug testing kits. The fabricated light meter device response was validated against a standard UV-Vis spectrometer to confirm reliability. The portable and cost-effective handheld detector device features sensitivity comparable to the well-established UV-Vis benchtop instrument, and the easy-to-use device could potentially serve as a prototype for a commercial device in the future.
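A minimal sketch of the light-to-absorbance conversion such a handheld reader performs, based on the Beer-Lambert relation A = log10(I0/I). The photodiode readings and calibration slope are illustrative assumptions, not the device's firmware.

```python
import math

def absorbance(sample_intensity, blank_intensity):
    """Beer-Lambert: A = log10(I0 / I), with I0 the blank (reference) intensity."""
    return math.log10(blank_intensity / sample_intensity)

def concentration_from_absorbance(a, slope, intercept=0.0):
    """Map absorbance to analyte concentration via a linear calibration curve (assumed)."""
    return (a - intercept) / slope

blank_reading = 1024.0      # assumed photodiode reading with the blank
sample_reading = 612.0      # assumed reading with the aptasensor assay

a = absorbance(sample_reading, blank_reading)
print(f"absorbance = {a:.3f}")
print(f"estimated concentration = {concentration_from_absorbance(a, slope=0.004):.1f} nM")
```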

Keywords: aptasensors, colorimetric gold nanoparticle assay, point-of-care, oral fluid

Procedia PDF Downloads 49
4205 Ferromagnetic Potts Models with Multi Site Interaction

Authors: Nir Schreiber, Reuven Cohen, Simi Haber

Abstract:

The Potts model has been widely explored in the literature for the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside in the corners of an elementary square. Each spin can take an integer value 1, 2, ..., q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second- and first-order phase transitions, respectively. The nature of the transition in the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first-order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zeroth-order) bound on the transition point. It is claimed that this bound should apply to other lattices as well. Next, taking into account higher-order site contributions, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a very ambiguous pseudo-critical finite-size behavior.
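A minimal, unoptimized sketch of a Wang-Landau sampler for a four-site (plaquette) Potts model on a small periodic square lattice. The lattice size, q value, sweep length and flatness criterion are illustrative assumptions, not the authors' implementation, and the plaquette count is recomputed from scratch at every step purely for clarity.

```python
import numpy as np

L, q = 8, 5                      # lattice size and number of Potts states (assumed)
rng = np.random.default_rng(0)
spins = rng.integers(1, q + 1, size=(L, L))

def mono_plaquettes(s):
    """Number of elementary squares whose four corner spins are all equal."""
    right = np.roll(s, -1, axis=1)
    down = np.roll(s, -1, axis=0)
    diag = np.roll(right, -1, axis=0)
    return int(np.sum((s == right) & (s == down) & (s == diag)))

n_plaq = L * L                   # energy is set by the number of monochromatic plaquettes
log_g = np.zeros(n_plaq + 1)     # log density of states, indexed by that count
hist = np.zeros(n_plaq + 1)
log_f = 1.0                      # modification factor ln(f), halved when the histogram is flat
m = mono_plaquettes(spins)

while log_f > 1e-4:
    for _ in range(10_000):
        i, j = rng.integers(L, size=2)
        old = spins[i, j]
        spins[i, j] = rng.integers(1, q + 1)
        m_new = mono_plaquettes(spins)
        # Wang-Landau acceptance: favour rarely visited energy levels.
        if np.log(rng.random()) < log_g[m] - log_g[m_new]:
            m = m_new
        else:
            spins[i, j] = old
        log_g[m] += log_f
        hist[m] += 1
    if hist[hist > 0].min() > 0.8 * hist[hist > 0].mean():   # crude flatness check
        hist[:] = 0
        log_f /= 2.0

print("estimated log g(E) up to a constant:", log_g - log_g.min())
```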

Keywords: entropic sampling, lattice animals, phase transitions, Potts model

Procedia PDF Downloads 157
4204 An Energy and Economic Comparison of Solar Thermal Collectors for Domestic Hot Water Applications

Authors: F. Ghani, T. S. O’Donovan

Abstract:

Today, the global solar thermal market is dominated by two collector types: the flat plate and the evacuated tube collector. With regard to the number of installations worldwide, the evacuated tube collector is the dominant variant, primarily due to the Chinese market, but the flat plate collector dominates both the Australian and European markets. The market share of the evacuated tube collector is, however, growing in Australia due to a common belief that this collector type is ‘more efficient’ and, therefore, the better choice for hot water applications. In this study, we investigate this issue further to assess the validity of this statement. This was achieved by methodically comparing the performance and economics of several solar thermal systems comprising a low-performance flat plate collector, a high-performance flat plate collector, and an evacuated tube collector, each coupled with a storage tank and pump. All systems were simulated using the commercial software package Polysun for four climate zones in Australia, to take different weather profiles into account, and were subjected to a thermal load equivalent to a household of four people. Our study revealed that the energy savings and payback periods varied significantly for systems operating under specific environmental conditions. Solar fractions ranged between 58 and 100 per cent, while payback periods ranged between 3.8 and 10.1 years. Although the evacuated tube collector was found to operate with a marginally higher thermal efficiency than the selective surface flat plate collector, due to reduced ambient heat loss, the high-performance flat plate collector outperformed the evacuated tube collector on thermal yield. This result was obtained because the flat plate collector possesses a significantly higher absorber to gross collector area ratio than the evacuated tube collector. Furthermore, it was found that for Australian regions with a high average solar radiation intensity and ambient temperature, the lower performance collector is the preferred choice due to favorable economics and a reduced stagnation temperature. Our study has provided additional insight into the thermal performance and economics of the two prevalent solar thermal collectors currently available. A computational investigation has been carried out specifically for the Australian climate due to its geographic size and significant variation in weather. For domestic hot water applications, where fluid temperatures between 50 and 60 degrees Celsius are sought, the flat plate collector is both technically and economically favorable over the evacuated tube collector. This research will be useful to system design engineers, solar thermal manufacturers, and those involved in policy to encourage the implementation of solar thermal systems in the hot water market.
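A brief sketch of the standard quasi-steady collector efficiency model that underlies such comparisons, eta = eta0 - a1*dT/G - a2*dT^2/G, evaluated for two hypothetical collectors. The efficiency coefficients and operating point are illustrative assumptions, not the Polysun inputs used in the study.

```python
def collector_efficiency(eta0, a1, a2, t_mean, t_ambient, irradiance):
    """Quasi-steady efficiency model: eta = eta0 - a1*dT/G - a2*dT^2/G."""
    d_t = t_mean - t_ambient
    return eta0 - a1 * d_t / irradiance - a2 * d_t**2 / irradiance

# Hypothetical coefficients (per gross area) for a flat plate and an evacuated tube collector.
flat_plate = dict(eta0=0.78, a1=3.8, a2=0.012)
evac_tube = dict(eta0=0.72, a1=1.5, a2=0.005)

# Domestic hot water operating point: ~55 degC mean fluid, 25 degC ambient, 900 W/m2.
for name, params in [("flat plate", flat_plate), ("evacuated tube", evac_tube)]:
    eta = collector_efficiency(**params, t_mean=55.0, t_ambient=25.0, irradiance=900.0)
    print(f"{name}: efficiency = {eta:.2f}")
```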

Keywords: solar thermal, energy analysis, flat plate, evacuated tube, collector performance

Procedia PDF Downloads 207
4203 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

Authors: Bassam Istanbouli

Abstract:

With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive, slow and subject to many combinatorial effects. Those combinatorial effects impact the whole organizational structure from a management, financial, documentation and logistics perspective, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these combinatorial effects can be kept minimal, especially at the time of launching an organization's global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement existing ERP software for its business needs, and if its business processes are normalized and modular, then most probably this will yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of the business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring and/or implementing a software system for an organization using two methods: the Software Development Lifecycle (SDLC) method and the Accelerated SAP (ASAP) implementation method. Both methods start with the customer requirements, then blueprint the business processes, and finally map those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will result in normalized software.

Keywords: blueprint, ERP, modular, normalized

Procedia PDF Downloads 135
4202 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census

Authors: Jaroslav Kraus

Abstract:

Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011. This is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Single-person households and one-family households together account for more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households is a result of the long-term decrease in fertility and increase in divorce, but also of the greater possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions, such as the capital Prague and some others, with a changing pattern. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches will be used for the analysis: (i) measures of geographic distribution, (ii) mapping clusters to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features, and (iii) an analysing-patterns approach as a starting point for more in-depth analyses (geospatial regression) in the future. For an analysis of this type of data, the numbers of households by type are treated as distinct objects. All events in a meaningfully delimited study region (e.g. municipalities) will be included in the analysis. Commonly produced measures of central tendency and spread will include identification of the location of the center of the point set (at NUTS3 level) and identification of the median center; standard distance, weighted standard distance and standard deviational ellipses will also be used. Identifying that clustering exists in census household datasets does not provide a detailed picture of the nature and pattern of that clustering, but it is helpful to apply simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households is determined, a particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran's I, which will be applied to municipal units for which the numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and will be applied to develop localized variants of almost any standard summary statistic. Local Moran's I will give an indication of household data homogeneity and diversity at the municipal level.
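A minimal sketch of the global Moran's I calculation referenced above, for a household-share attribute measured over municipalities with a binary contiguity weights matrix. The toy values and weights are placeholder assumptions.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I: I = (n / S0) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = x.size
    z = x - x.mean()
    s0 = w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Toy example: share of single-person households in 4 neighbouring municipalities,
# with a symmetric binary contiguity matrix (1 = adjacent, 0 = not adjacent).
shares = [0.42, 0.40, 0.28, 0.25]
weights = [[0, 1, 1, 0],
           [1, 0, 1, 0],
           [1, 1, 0, 1],
           [0, 0, 1, 0]]
print(f"Moran's I = {morans_i(shares, weights):.3f}")
```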

Keywords: census, geo-demography, households, the Czech Republic

Procedia PDF Downloads 95
4201 World Peace and Conflict Resolution: A Solution from a Buddhist Point of View

Authors: Samitharathana R. Wadigala

Abstract:

Peace will not be established until self-consciousness is revealed in human beings. In this nuclear age, the establishment of a lasting peace on earth represents the primary condition for the preservation of human civilization and the survival of human beings. Nothing, perhaps, is so important and indispensable as the achievement and maintenance of peace in the modern world today. Peace in today's world implies much more than the mere absence of war and violence. In the interdependent world of today, the United Nations needs to be representative of the modern world and democratic in its functioning, because it came into existence to save succeeding generations from the scourge of war and conflict. Buddhism is the religion of peaceful co-existence and the philosophy of enlightenment. Violence and conflict, from the perspective of the Buddhist theory of interdependent origination (Paṭiccasamuppāda), are, like everything else in the world, a product of causes and conditions. Buddhism is fully compatible with a congenial and peaceful global order. The canonical literature, doctrines, and philosophy of Buddhism are among the best suited for inter-faith dialogue, harmony, and universal peace. Even today, Buddhism can resurrect universal brotherhood, peaceful co-existence and harmonious surroundings in the comity of nations. With its increasing vitality in regions around the world, many people today turn to Buddhism for relief and guidance at a time when peace seems to be a deferred dream more than ever. From a Buddhist point of view, the roots of all unwholesome actions, i.e., greed, hatred and delusion, are viewed as the root cause of all human conflict. Conflict often emanates from attachment to material things: pleasures, property, territory, wealth, economic dominance or political superiority. Buddhism has some particularly rich resources for deployment in dissolving conflict. This paper addresses the Buddhist perspective on the causes of conflict and the ways to resolve conflict in order to realize world peace. The world has enough to satisfy everybody's needs but not everybody's greed.

Keywords: Buddhism, conflict-violence, peace, self-consciousness

Procedia PDF Downloads 207
4200 Demand Forecasting to Reduce Dead Stock and Loss Sales: A Case Study of the Wholesale Electric Equipment and Part Company

Authors: Korpapa Srisamai, Pawee Siriruk

Abstract:

The purpose of this study is to forecast product demand and develop appropriate and adequate procurement plans to meet customer needs and reduce costs. When stock exceeds customer demand or does not move, the company is forced to provide additional storage space; moreover, some items, when stored for a long period of time, deteriorate into dead stock. A case study of a wholesale company for electronic equipment and components, which faces uncertain customer demand, is considered. The actual purchasing orders of customers are not equal to the forecasts provided by the customers. In some cases, customers have higher product demands, resulting in insufficient product to meet the customer's needs; however, some customers have lower demand for products than estimated, causing insufficient storage space and dead stock. This study aims to reduce the loss of sales opportunities and the number of goods remaining in the warehouse, using 30 of the company's most popular products as samples. The data were collected during the study period from January to October 2022. The forecasting methods used are the simple moving average, weighted moving average, and exponential smoothing methods. The economic order quantity and reorder point are then calculated to meet customer needs and track results. The research results are very beneficial to the company: it can reduce lost sales opportunities by 20%, so that it has enough products to meet customer needs, and can reduce unused products (dead stock) by up to 10%. This enables the company to order products more accurately, increasing profits and storage space.
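A minimal sketch of the forecasting and replenishment calculations described above (simple moving average, exponential smoothing, economic order quantity and reorder point). The demand series, costs and lead time are fabricated placeholders, not company data.

```python
import math

demand = [120, 135, 128, 150, 142, 160, 155, 170]   # assumed monthly demand for one product

def simple_moving_average(series, window=3):
    return sum(series[-window:]) / window

def exponential_smoothing(series, alpha=0.3):
    forecast = series[0]
    for observation in series[1:]:
        forecast = alpha * observation + (1 - alpha) * forecast
    return forecast

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    """Reorder point: demand over the lead time plus safety stock."""
    return daily_demand * lead_time_days + safety_stock

print("SMA forecast:", round(simple_moving_average(demand), 1))
print("Exp. smoothing forecast:", round(exponential_smoothing(demand), 1))
print("EOQ:", round(eoq(annual_demand=1800, order_cost=40, holding_cost=2.5), 1))
print("Reorder point:", reorder_point(daily_demand=6, lead_time_days=10, safety_stock=20))
```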

Keywords: demand forecast, reorder point, lost sale, dead stock

Procedia PDF Downloads 113
4199 Interpreting Possibilities: Teaching Without Borders

Authors: Mira Kadric

Abstract:

The proposed paper deals with a newly developed approach to the teaching of interpreting, combining traditional didactics with a new element. The fundamental principle of the approach is taken from theatre pedagogy (Augusto Boal's Theatre of the Oppressed) and includes a discussion of social power relations. From the point of view of the sociology of education, this implies strengthening students' individual potential for self-determination on a number of levels, especially in view of the present increase in social responsibility. This knowledge constitutes a starting point and basis for the process of self-determined action. This takes place in the context of a creative didactic policy which identifies didactic goals, provides clear sequences of content, specifies interdisciplinary methods and examines their practical adequacy, and ultimately serves not only individual translators and interpreters, but all parties involved. As well as providing a discussion of fundamental questions relevant to Translation and Interpreting Studies, this also serves to improve this interdisciplinary didactic approach, which simulates interpreting reality and illustrates processes and strategies which (can) take place in real life. The idea is illustrated in more detail with methods taken from the Theatre of the Oppressed created by Augusto Boal, including examples from (dialogue) interpreting teaching based on documentation of recordings made in a seminar in the summer term of 2014.

Keywords: augusto boal, didactic model, interpreting teaching, theatre of the oppressed

Procedia PDF Downloads 422
4198 A New Center of Motion in Cabling Robots

Authors: Alireza Abbasi Moshaii, Farshid Najafi

Abstract:

In this paper, a new model for creating a centre of motion is proposed. The new method uses cables, so it is very useful in robots because it is light and has an easy assembly process. It is particularly well suited to robots which need to remain in touch with objects, as described in the following. The accuracy of the idea is proved by an experiment. This system could be used in robots which need a fixed point of contact with an object while making a circular motion, such as dancer, physician or repair robots.

Keywords: centre of motion, robotic cables, permanent touching, mechatronics engineering

Procedia PDF Downloads 436
4197 Laboratory Scale Purification of Water from Copper Waste

Authors: Mumtaz Khan, Adeel Shahid, Waqas Khan

Abstract:

The presence of heavy metals in water streams is a great danger to aquatic life and ultimately affects human health. The removal of copper (Cu) from synthetic solution by ispaghula husk, maize fibre, and maize oil cake under batch conditions was studied. Different experimental parameters such as contact time, initial solution pH, agitation rate, initial Cu concentration, biosorbent concentration, and biosorbent particle size have been studied to quantify the Cu biosorption. The rate of adsorption of metal ions was very fast at the beginning and became slow after reaching the saturation point, followed by a slower active metabolic uptake of metal ions into the cells. Up to a certain point (pH = 4, Cu concentration ≈ 640 mg/l, agitation rate ≈ 400 rpm, biosorbent concentration ≈ 0.5 g, 3 g, and 3 g for ispaghula husk, maize fibre and maize oil cake, respectively), increasing the pH, Cu concentration, agitation rate, and biosorbent concentration increased the biosorption rate; the sorption capacity, however, increased with decreasing particle size. At the optimized experimental parameters, the maximum Cu biosorption by ispaghula husk, maize fibre and maize oil cake was 86.7%, 59.6% and 71.3%, respectively. Moreover, the results of the kinetics studies demonstrated that the biosorption of copper on ispaghula husk, maize fibre, and maize oil cake followed pseudo-second order kinetics. The adsorption results were fitted to both the Langmuir and Freundlich models. The Langmuir model represented the sorption process better than the Freundlich model, with an R² value of ~0.978. Optimization of the physical and environmental parameters revealed ispaghula husk to be a more potent copper biosorbent than maize fibre and maize oil cake. The sorbent is cheap and easily available, so this study can be applied to removing Cu impurities at pilot and industrial scale after certain modifications.
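A brief sketch of how the Langmuir and Freundlich isotherms mentioned above can be fitted to equilibrium data with SciPy. The equilibrium concentrations and uptakes below are synthetic placeholder values, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, K_L):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * K_L * Ce / (1.0 + K_L * Ce)

def freundlich(Ce, K_F, n):
    """Freundlich isotherm: q_e = K_F * C_e^(1/n)."""
    return K_F * Ce ** (1.0 / n)

# Synthetic equilibrium data: Cu concentration (mg/l) vs uptake (mg/g).
Ce = np.array([10, 40, 80, 160, 320, 640], dtype=float)
qe = np.array([4.8, 14.2, 21.5, 27.0, 30.5, 32.0])

(q_max, K_L), _ = curve_fit(langmuir, Ce, qe, p0=[30, 0.01])
(K_F, n), _ = curve_fit(freundlich, Ce, qe, p0=[2, 2])

for name, model, params in [("Langmuir", langmuir, (q_max, K_L)),
                            ("Freundlich", freundlich, (K_F, n))]:
    residuals = qe - model(Ce, *params)
    r2 = 1 - residuals @ residuals / ((qe - qe.mean()) @ (qe - qe.mean()))
    print(f"{name}: params = {np.round(params, 4)}, R^2 = {r2:.3f}")
```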

Keywords: biosorption, copper, ispaghula husk, maize fibre, maize oil cake, purification

Procedia PDF Downloads 406
4196 The Evaluation for Interfacial Adhesion between SOFC and Metal Adhesive in the High Temperature Environment

Authors: Sang Koo Jeon, Seung Hoon Nahm, Oh Heon Kwon

Abstract:

Unit cells of a solid oxide fuel cell (SOFC) must be stacked in several layers to obtain high power. Most researchers have been concerned with the performance of the stacked SOFC rather than its structural stability, and have been especially interested in how to design the stack to reduce electrical loss and improve efficiency. Consequently, stacked SOFCs able to produce high electrical power, and related parts such as the manifold, gas seal and bipolar plate, were developed to optimize the stack design. However, the SOFC unit cell was simply layered on the interconnector without adhesion, and hydrogen and oxygen were injected at the interfacial layer at high temperature. Under operating conditions, the interfacial layer can be one of the weak points of the stacked SOFC. Therefore, an evaluation of the structural safety against failure is essential. In this study, the interfacial adhesion between the SOFC and a metal adhesive was evaluated in a high temperature environment. The metal adhesive was used to strongly connect the SOFC unit cell with the interconnector and to provide electrical conductivity between them. The four point bending test was performed to measure the interfacial adhesion. The SOFC unit cell and a SiO2 wafer were diced and then attached with the metal adhesive. The SiO2 wafer had a center notch to initiate a crack from the notch tip. A modified stereomicroscope, combined with a CCD camera and a length-measurement system, was used to observe the fracture behavior. Additionally, the interfacial adhesion was evaluated under high temperature conditions because the metal adhesive is affected by high temperature. The specimen was also exposed in a furnace for several hours, and then the interfacial adhesion was evaluated. Finally, the interfacial adhesion energy was quantitatively determined and compared for each condition.

Keywords: solid oxide fuel cell (SOFC), metal adhesive, adhesion, high temperature

Procedia PDF Downloads 519
4195 Amblyopia and Eccentric Fixation

Authors: Kristine Kalnica-Dorosenko, Aiga Svede

Abstract:

Amblyopia, or 'lazy eye', is impaired or dim vision without an obvious defect or change in the eye. It is often associated with abnormal visual experience, most commonly strabismus, anisometropia or both, and form deprivation. The main task of amblyopia treatment is to ameliorate the etiological factors to create a clear retinal image and to ensure the participation of the amblyopic eye in the visual process. The treatment of amblyopia and eccentric fixation is usually associated with problems in therapy. Eccentric fixation is present in around 44% of all patients with amblyopia and in 30% of patients with strabismic amblyopia. In Latvia, amblyopia is carefully treated in various clinics, but eccentric fixation is diagnosed relatively rarely. The conflict that has developed regarding the relationship between the visual disorder and the degree of eccentric fixation in amblyopia should be reconsidered, because it has an important bearing on the cause and treatment of amblyopia and on the role of eccentric fixation in this case. Visuoscopy is the most frequently used method for determining eccentric fixation. In traditional visuoscopy, a fixation target is projected onto the patient's retina, and the examiner asks the patient to look directly at the center of the target. An optometrist then observes the point on the macula used for fixation. This objective test provides clinicians with direct observation of the fixation point of the eye. It requires patients to voluntarily fixate the target and assumes the foveal reflex accurately demarcates the center of the foveal pit. In the end, with a very simple method of evaluating fixation, it is possible to indirectly evaluate treatment improvement, as eccentric fixation is always associated with reduced visual acuity. One may therefore expect that if eccentric fixation in an amblyopic eye is found with visuoscopy, then visual acuity should be less than 1.0 (in decimal units). With occlusion or another amblyopia therapy, one would expect both visual acuity and fixation to improve simultaneously, that is, fixation would become more central. Consequently, an improvement in fixation pattern with treatment is an indirect measure of an improvement in visual acuity. Evaluation of eccentric fixation may be helpful in identifying amblyopia in children prior to the measurement of visual acuity. This is very important because the earlier amblyopia is diagnosed, the better the chance of improving visual acuity.

Keywords: amblyopia, eccentric fixation, visual acuity, visuoscopy

Procedia PDF Downloads 155
4194 Sensitivity Assessment of Spectral Salinity Indices over Desert Sabkha of Western UAE

Authors: Rubab Ammad, Abdelgadir Abuelgasim

Abstract:

The UAE lies in one of the most arid regions of the world and is thus home to geologic features common to such climatic conditions, including vast open deserts, sand dunes, saline soils, inland Sabkha and coastal Sabkha. Sabkha are characteristic salt flats formed in arid environments by the deposition and precipitation of salt and silt over the sand surface, owing to a shallow water table and rates of evaporation exceeding rates of precipitation. The study area, which comprises western UAE, has a heavy concentration of inland Sabkha. Remote sensing is conventionally used to study the soil salinity of agriculturally degraded lands, but it is applied far less broadly to Sabkha. The focus of this study was to identify these highly saline Sabkha areas in remotely sensed data using salinity indices. The existing salinity indices in the literature were designed for agricultural soils and have not frequently used the spectral response of the short-wave infrared (SWIR1 and SWIR2) parts of the electromagnetic spectrum. Using Landsat 8 OLI data and field ground truthing, this study formulated indices utilizing the NIR-SWIR parts of the spectrum and compared the results with existing salinity indices. Most indices show a reasonably good relationship between salinity and the spectral index up to a certain salinity value, after which the reflectance reaches a saturation point; this saturation point varies with the index. However, the study findings suggest that incorporating near-infrared and short-wave infrared bands in a salinity index has the potential to maintain a positive relationship between salinity and reflectance up to a higher salinity value than the other indices.
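
For illustration only, the Python sketch below computes generic normalized-difference combinations of the Landsat 8 OLI NIR (band 5), SWIR1 (band 6) and SWIR2 (band 7) reflectances; the exact NIR-SWIR index formulated in the study is not given in the abstract, and the input arrays stand in for real surface reflectance rasters.

# Illustrative sketch: generic NIR-SWIR salinity-style indices from
# Landsat 8 OLI reflectance arrays. The index forms and data are
# placeholders, not the indices proposed in the study.
import numpy as np

def normalized_difference(a, b, eps=1e-6):
    """Generic (a - b) / (a + b) ratio, guarded against division by zero."""
    return (a - b) / (a + b + eps)

# nir, swir1, swir2 would normally be read from the Landsat 8 surface
# reflectance product; random arrays stand in for real rasters here.
rng = np.random.default_rng(0)
nir, swir1, swir2 = (rng.uniform(0.05, 0.6, (100, 100)) for _ in range(3))

index_swir1_nir = normalized_difference(swir1, nir)   # SWIR1 vs NIR contrast
index_swir2_nir = normalized_difference(swir2, nir)   # SWIR2 vs NIR contrast

print(index_swir1_nir.mean(), index_swir2_nir.mean())

In practice, such index maps would be regressed against the field-measured soil salinity at the ground-truth sites to assess where each index saturates.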

Keywords: Sabkha, salinity index, saline soils, Landsat 8, SWIR1, SWIR2, UAE desert

Procedia PDF Downloads 207
4193 The Psychology of Cross-Cultural Communication: A Socio-Linguistics Perspective

Authors: Tangyie Evani, Edmond Biloa, Emmanuel Nforbi, Lem Lilian Atanga, Kom Beatrice

Abstract:

The dynamics of languages in contact necessitates a close study of how their users negotiate meaning from shared values in the process of cross-cultural communication. A transverse analysis of the situation demonstrates the existence of complex efforts to connect cultural knowledge to cross-linguistic competencies within a widening range of communicative exchanges. This paper sets out to examine the psychology of cross-cultural communication in a multilingual setting like Cameroon, where many local and international languages are in close contact. The paper equally analyses the pertinence of existing macro-sociological concepts as fundamental knowledge traits in literal and idiomatic cross-semantic mapping. From this point, the article presents a path model connecting sociolinguistics to the increasing adoption of a widening range of communicative genres driven by ongoing globalisation trends and their high-speed information technology machinery. By applying a cross-cultural analysis frame, the paper contributes to a better understanding of the fundamental changes in the nature and goals of cross-cultural knowledge in the pragmatics of communication and cultural acceptability. It emphasises that, in an era of increasing global interchange, a comprehensive and inclusive global culture built by bridging gaps in cross-cultural communication would have significant potential to contribute to achieving global social development goals, provided that inadequacies in language constructs are adjusted to create avenues that intertwine with sociocultural beliefs, ensuring that meaningful and context-bound sociolinguistic values are observed within the global arena of communication.

Keywords: cross-cultural communication, customary language, literalisms, primary meaning, subclasses, transubstantiation

Procedia PDF Downloads 279
4192 Dynamic and Thermal Characteristics of Three-Dimensional Turbulent Offset Jet

Authors: Ali Assoudi, Sabra Habli, Nejla Mahjoub Saïd, Philippe Bournot, Georges Le Palec

Abstract:

Studying the flow characteristics of a turbulent offset jet is an important topic among researchers across the world because of its various engineering applications. Some common examples include injection and carburetor systems, entrainment and mixing processes in gas turbine and boiler combustion chambers, thrust-augmenting ejectors for V/STOL aircraft and HVAC systems, environmental discharges, film cooling, and many others. An offset jet is formed when a jet discharges into a medium above a horizontal solid wall that is parallel to the axis of the jet exit but offset from it by a certain distance. The structure of a turbulent offset jet can be described by three main regions. Close to the nozzle exit, an offset jet possesses characteristic features similar to those of free jets. Then, the entrainment of fluid between the jet, the offset wall and the bottom wall creates a low-pressure zone, forcing the jet to deflect towards the wall and eventually attach to it at the impingement point. This is referred to as the Coanda effect. Further downstream, after the reattachment point, the offset jet has the characteristics of a wall jet flow. Therefore, the offset jet combines the characteristics of free, impinging and wall jets, and it is relatively more complex than these types of flows. The present study examines the dynamic and thermal evolution of a 3D turbulent offset jet with different offset height ratios (the ratio of the distance from the jet exit to the impingement bottom wall to the jet nozzle diameter). To achieve this, a numerical study was conducted to investigate the three-dimensional offset jet flow through the resolution of the governing Navier–Stokes equations by means of the finite volume method and the RSM second-order turbulence closure model. A detailed discussion is provided of the flow and thermal characteristics in the form of streamlines, mean velocity vectors, pressure fields and Reynolds stresses.
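
As a minimal post-processing illustration (not part of the study's reported methodology), the Python sketch below locates the reattachment point of an offset jet from the streamwise wall shear stress along the bottom wall, assuming reattachment occurs where the shear stress changes sign from negative (recirculation) to positive (wall-jet region); the shear stress profile here is synthetic and would normally come from the CFD solution.

# Hedged sketch: reattachment point from the sign change of the streamwise
# wall shear stress along the bottom wall. The profile is synthetic.
import numpy as np

x = np.linspace(0.0, 30.0, 301)               # streamwise distance / nozzle diameter
tau_w = 0.002 * np.tanh((x - 8.5) / 2.0)      # synthetic wall shear stress profile (Pa)

# First index where tau_w crosses from negative to positive
idx = np.where((tau_w[:-1] < 0.0) & (tau_w[1:] >= 0.0))[0]
if idx.size:
    i = idx[0]
    # Linear interpolation for the zero crossing
    x_reattach = x[i] - tau_w[i] * (x[i + 1] - x[i]) / (tau_w[i + 1] - tau_w[i])
    print(f"Reattachment at x/D ≈ {x_reattach:.2f}")

The reattachment length extracted this way is a common quantity for comparing offset jets at different offset height ratios.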

Keywords: offset jet, offset ratio, numerical simulation, RSM

Procedia PDF Downloads 302