Search results for: generating sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2200

1960 Policy Recommendations for Reducing CO2 Emissions in Kenya's Electricity Generation, 2015-2030

Authors: Paul Kipchumba

Abstract:

Kenya is an East African country lying on the Equator. It had a population of 46 million in 2015, with an annual growth rate of 2.7%, implying a population of at least 65 million by 2030. Kenya's GDP in 2015 was about 63 billion USD, with a per capita GDP of about 1400 USD. The rural population is 74%, whereas the urban population is 26%. Kenya grapples not only with access to energy but also with energy security. There is a direct correlation between economic growth, population growth, and energy consumption. Kenya's electricity mix is at least 74.5% renewable, with hydro power and geothermal forming the bulk of it; by source, energy consumption is 68% wood fuel, 22% petroleum, 9% electricity, and 1% coal and other sources. Wood fuel is used by the majority of the rural and poor urban population, and electricity is used mostly for lighting. As of March 2015, Kenya had an installed electricity capacity of 2295 MW, giving a per capita electricity consumption of 0.0499 kW. The overall retail cost of electricity in 2015 was 0.009915 USD/kWh (KES 19.85/kWh) for installed capacity over 10 MW. The actual demand for electricity in 2015 was 3400 MW, and the projected demand in 2030 is 18000 MW. Kenya is pursuing Vision 2030, which aims at making it a prosperous middle-income economy and targets 23 GW of generated electricity. However, both cost and non-cost factors affect the generation and consumption of electricity in Kenya, and the country currently prioritizes economic growth over CO2 emissions. The cost of those emissions is most likely to be paid in the future, through the costs of carbon emissions themselves and through penalties imposed on local generating companies for disregard of international law on CO2 emissions and climate change. The study methodology was a simulated application of a carbon tax on all carbon-emitting sources of electricity generation. A tax of only USD 30/tCO2 on all emitting sources of electricity generation would be sufficient to make solar the sole source of electricity generation in Kenya. The country has exceptionally evenly distributed global horizontal irradiation. Solar potential, after accounting for technology efficiencies such as 14-16% for solar PV and 15-22% for solar thermal, is 143.94 GW. Therefore, the paper recommends adoption of solar power for generating all electricity in Kenya in order to attain zero-carbon electricity generation in the country.
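
The core of the simulated carbon-tax comparison is simple arithmetic: add the tax times each technology's emission factor to its generation cost and compare the totals. A minimal sketch follows; all costs and emission factors in it are illustrative assumptions, not the paper's data.

```python
# Back-of-envelope sketch of a carbon-tax comparison across generation sources.
# The costs and emission factors below are illustrative assumptions only.
TAX = 30.0  # USD per tonne of CO2, the tax level the study simulates

sources = {
    # name: (generation cost in USD/MWh, emission factor in tCO2/MWh) -- assumed
    "coal":       (60.0, 0.90),
    "diesel":     (90.0, 0.65),
    "geothermal": (70.0, 0.05),
    "solar":      (75.0, 0.00),
}

for name, (cost, ef) in sources.items():
    taxed = cost + TAX * ef
    print(f"{name:10s} untaxed {cost:6.1f}  taxed {taxed:6.1f} USD/MWh")
```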

Keywords: CO2 emissions, cost factors, electricity generation, non-cost factors

Procedia PDF Downloads 355
1959 Development of Automated Quality Management System for the Management of Heat Networks

Authors: Nigina Toktasynova, Sholpan Sagyndykova, Zhanat Kenzhebayeva, Maksat Kalimoldayev, Mariya Ishimova, Irbulat Utepbergenov

Abstract:

Any business needs stable operation and continuous improvement; it is therefore necessary to constantly interact with the environment, to analyze the work of the enterprise from the standpoint of employees, executives, and consumers, and to correct any inconsistencies in particular processes and in their aggregate. In the case of heat supply organizations, in addition to suppliers, local legislation must be considered, as it is often the main regulator of service pricing. The process approach used to build a functional organizational structure in these types of businesses in Kazakhstan is therefore a challenge, not only in implementation but also in the way employee salaries are analyzed. To solve these problems, we investigated the management system of a heating enterprise, including strategic planning based on the balanced scorecard (BSC), quality management in accordance with the Quality Management System (QMS) standard ISO 9001, and analysis of the system based on expert judgment using fuzzy inference. To carry out our work we used the theory of fuzzy sets, the QMS in accordance with ISO 9001, the BSC according to the method of Kaplan and Norton, business process modeling in the IDEF0 notation, and modeling with Matlab simulation tools and LabVIEW graphical programming. The results of the work are as follows: we determined possibilities for improving the management of a heat-supply plant based on the QMS; after justification and adaptation, a software tool was used to automate a series of management functions, to reduce resource consumption, and to keep the system up to date; and an application for analyzing the QMS based on fuzzy inference was created, with a novel organization of communication between the software and the application that enables analysis of the relevant data of the enterprise management system.

Keywords: balanced scorecard, heat supply, quality management system, the theory of fuzzy sets

Procedia PDF Downloads 362
1958 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring data-analytic challenges. One of these is the increased occurrence of missingness as study length increases, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets and pooling the estimation results across them to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package Dynamic Modeling in R (dynr). The dynr package provides a suite of fast and accessible functions for estimating and visualizing the results of fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates of a user-specified dynamic systems model via MI, with convergence diagnostic checks. We used dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals' ambulatory physiological measures and self-reported affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When the number of iterations was determined from the convergence diagnostics available in dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
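
The pooling step at the heart of any MI workflow, including the one dynr.mi() delegates to MICE, follows Rubin's rules: average the per-imputation estimates and combine within- and between-imputation variances. A language-agnostic sketch of that pooling step (shown here in Python; dynr itself is an R package), assuming the per-imputation estimates and variances are already available:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool one parameter across m imputed data sets via Rubin's rules."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()            # pooled point estimate
    u_bar = variances.mean()            # average within-imputation variance
    b = estimates.var(ddof=1)           # between-imputation variance
    t = u_bar + (1.0 + 1.0 / m) * b     # total variance of the pooled estimate
    return q_bar, np.sqrt(t)            # estimate and its pooled standard error

# e.g., five imputed fits of one autoregressive parameter (made-up numbers)
est, se = pool_rubin([0.41, 0.44, 0.39, 0.43, 0.42],
                     [0.004, 0.005, 0.004, 0.006, 0.005])
```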

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 161
1957 Building a Comprehensive Repository for Montreal Gamelan Archives

Authors: Laurent Bellemare

Abstract:

After the showcase of traditional Indonesian performing arts at the Vancouver Expo in 1986, Canadian universities inherited sets of Indonesian gamelan orchestras and soon began offering courses for music students interested in learning these diverse traditions. Among them, Université de Montréal was offered two sets of Balinese orchestras, a novelty that allowed a community of Montreal gamelan enthusiasts to form and engage with this music. A few generations later, a large body of archives has amassed, framing the history of this niche community's achievements. The data, scattered across public and private archive collections, come in various formats: Digital Audio Tape, audio cassettes, Video Home System (VHS) videotape, digital files, photos, reel-to-reel audiotape, posters, concert programs, letters, TV shows, reports, and more. Attempting to study these documents in order to unearth a chronology of gamelan in Montreal has proven challenging, since no suitable platform for preservation, storage, and research currently exists. The files are hard to find due to their decentralized locations, and most of the documents in older formats have yet to be digitized. In the case of recent digital files, such as pictures or rehearsal recordings, the locations can be even more scattered and the quantity overwhelming. Aside from the basic issue of choosing a suitable repository platform, questions of legal rights and methodology arise. For posterity, these documents should nonetheless be digitized, organized, and stored in an easily accessible online repository. This paper aims to underline the various challenges encountered in the early stages of such a project and to suggest ways of overcoming the obstacles to a thorough archival investigation.

Keywords: archival work, archives, Balinese gamelan, Canada, Gamelan, Indonesia, Javanese gamelan, Montreal

Procedia PDF Downloads 113
1956 The Influence of the Visual and the Direct Physical Accessibility on the Sense of Control of Saudi Women in the Home Environment

Authors: Ahdab H. Mahdaly, Debajyoti Pati, Sharran Parkinson, Lee S. Duemer

Abstract:

Providing employed mothers with the right physical environment inside the home is not an easy task, especially when culture is involved. This study examines the typical Saudi home as a personal, emotional, social, and cultural setting, focusing on the interactions between physical design and the perceived control of working mothers. Owing to the scarcity of published literature on Saudi homes, American employed mothers were included in the study to provide a baseline. With the ongoing transformations in women's roles in Saudi Arabia, there is a perception that traditional home designs may not afford an appropriate sense of control inside the home. Saudi Arabia has numerous interacting layers of socio-cultural-religious forces that affect residential design, and understanding the moderating role of the Saudi home is vital to the ongoing national policy transition on women. The study investigated one narrow, albeit critical, influence of home design on one's sense of control: direct visual and physical accessibility between sets of rooms. Ten subjects, five Saudi and five American, examined visual and physical access between 171 room sets and provided qualitative responses on how each type of access influences their sense of control. Three main themes with potential effects on control emerged: (1) openness, (2) proximity, and (3) separation. The data suggest that although the Saudi home is a substantially more complex setting than the American one, a class of spaces that can be termed 'neutral rooms', serving as cultural separators, may represent the ideal solution for optimizing sense of control, without ignoring cultural-religious traditions, during the transition in Saudi women's roles.

Keywords: direct physical accessibility, home environment, sense of control, visual accessibility, working mothers

Procedia PDF Downloads 308
1955 Determination of Vinpocetine in Tablets with the Vinpocetine-Selective Electrode and Possibilities of Application in Pharmaceutical Analysis

Authors: Faisal A. Salih

Abstract:

Vinpocetine (Vin) is an ethyl ester of apovincamic acid and a semisynthetic derivative of vincamine, an alkaloid from the periwinkle plant Vinca minor. This compound stimulates cerebral metabolism: it increases the uptake of glucose and oxygen, as well as the consumption of these substances by brain tissue. Vinpocetine enhances blood flow in the brain and has vasodilating, antihypertensive, and antiplatelet effects. It also seems to improve the human ability to acquire new memories and to restore memories that have been lost. The drug has been used clinically for the treatment of cerebrovascular disorders such as stroke and dementia-related memory disorders, as well as in ophthalmology and otorhinolaryngology. It has no known side effects, and no toxicity has been reported even with long-term use. For the quantitative determination of Vin in dosage forms, HPLC methods are generally used. A promising alternative is potentiometry with a Vin-selective electrode, which does not require expensive equipment or materials. Another advantage of the potentiometric method is that tablets and injection solutions can be analyzed directly, without separation from matrix components, which reduces both analysis time and cost. In this study, it was found that, with a suitable choice of plasticizer, an electrode with the following membrane composition: PVC (32.8 wt.%), ortho-nitrophenyl octyl ether (66.6 wt.%), and tetrakis(4-chlorophenyl)borate (0.6 wt.%) exhibits excellent analytical performance: a lower detection limit (LDL) of 1.2∙10⁻⁷ M, a linear response range (LRR) of 1∙10⁻³–3.9∙10⁻⁶ M, and a slope of the electrode function of 56.2±0.2 mV/decade. The Vin masses per average tablet weight determined by direct potentiometry (DP) and potentiometric titration (PT) for two different sets of 10 tablets (two blister packs) were 100.35±0.2 and 100.36±0.1 mg. The mass fraction of Vin in individual tablets determined by DP was 9.87±0.02–10.16±0.02 mg, with an RSD of 0.13–0.35%. The procedure has very good reproducibility, and excellent agreement with the declared amounts was observed.
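
The electrode function reported here (56.2±0.2 mV/decade) is the slope of the measured potential against the logarithm of concentration, which is straightforward to recover from calibration data by linear regression. A sketch with made-up calibration points, not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration standards (mol/L) and measured potentials (mV);
# the values are invented for illustration, chosen to give a near-Nernstian slope.
conc = np.array([1e-3, 1e-4, 1e-5, 1e-6])
emf = np.array([210.0, 154.0, 98.0, 42.0])

slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"electrode function slope: {slope:.1f} mV/decade")  # ~56 mV/decade here

def concentration(emf_mv):
    """Read an unknown sample back off the calibration line."""
    return 10 ** ((emf_mv - intercept) / slope)
```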

Keywords: vinpocetine, potentiometry, ion selective electrode, pharmaceutical analysis

Procedia PDF Downloads 61
1954 Generating Arabic Fonts Using Rational Cubic Ball Functions

Authors: Fakharuddin Ibrahim, Jamaludin Md. Ali, Ahmad Ramli

Abstract:

In this paper, we discuss data interpolation using the rational cubic Ball curve. To generate a curve with satisfactory smoothness, the curve segments must be connected with a certain degree of continuity; the continuity we consider is of type G1, under conditions known as the G1 Hermite conditions. A simple application of the proposed method is to generate an Arabic font satisfying the required continuity.
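
For readers unfamiliar with the Ball basis: the four cubic Ball functions sum to one on [0, 1], so a rational curve is obtained by weighting them and normalizing, exactly as with rational Bézier curves. A minimal evaluation sketch (the control points and weights are placeholders; the paper's G1 Hermite construction is not reproduced here):

```python
import numpy as np

def cubic_ball_basis(t):
    """Cubic Ball basis; the four functions form a partition of unity on [0, 1]."""
    return np.array([(1 - t) ** 2,
                     2 * t * (1 - t) ** 2,
                     2 * t ** 2 * (1 - t),
                     t ** 2])

def rational_ball_point(P, w, t):
    """Point on a rational cubic Ball curve with 4 control points P and weights w."""
    B = cubic_ball_basis(t) * w
    return (B @ P) / B.sum()

# placeholder control polygon and weights for one font-outline segment
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
w = np.array([1.0, 2.0, 2.0, 1.0])
curve = np.array([rational_ball_point(P, w, t) for t in np.linspace(0, 1, 50)])
```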

Keywords: data interpolation, rational Ball curve, Hermite condition, continuity

Procedia PDF Downloads 422
1953 Entrepreneurship Education Revised: Merging a Theory-Based and Action-Based Framework for Entrepreneurial Narratives' Impact as an Awareness-Raising Teaching Tool

Authors: Katharina Fellnhofer, Kaisu Puumalainen

Abstract:

Despite the current worldwide increase in interest in entrepreneurship education (EE), little attention has been paid to innovative web-based approaches such as the narrative approach, in which individual stories of entrepreneurs are told via multimedia to demonstrate their impact on individuals' attitudes towards entrepreneurship. In addition, the discipline has reached no consensus regarding the effective content of teaching materials and tools. A qualitative, hypothesis-generating research contribution is therefore required, one that draws new insights from published works in the EE field to serve future research on multimedia entrepreneurial narratives. Against this background, our effort focuses on finding support for the following introductory statement: multimedia success and failure stories of real entrepreneurs show potential to change perceptions of entrepreneurship in a positive way. The proposed qualitative conceptual paper introduces the underlying background for this research framework. By means of the triangulation of multiple theories, we lay the foundation for multimedia-based entrepreneurial narratives, applying a learning-through-multimedia-real-entrepreneurial-narratives pedagogical tool to facilitate entrepreneurship. Our effort helps demystify how value-oriented entrepreneurs telling their stories via multimedia can simultaneously enhance EE. The paper thus builds bridges between well-cited theoretical constructs to form a robust research framework. Overall, the intended contribution seeks to emphasize future research on currently under-researched issues in the EE sphere, issues that are essential not only to academia but also to business and society, with future job-providing, growth-oriented entrepreneurs in mind. The authors would like to thank the Austrian Science Fund FWF: [J3740 – G27].

Keywords: entrepreneurship education, entrepreneurial attitudes and perceptions, entrepreneurial intention, entrepreneurial narratives

Procedia PDF Downloads 251
1952 Investigating the Efficiency of Granular Sludge for Recovery of Phosphate from Wastewater

Authors: Sara Salehi, Ka Yu Cheng, Anna Heitz, Maneesha Ginige

Abstract:

This study investigated the efficiency of granular sludge for phosphorus (P) recovery from wastewater. A laboratory-scale sequencing batch reactor (SBR) was operated under alternating aerobic/anaerobic conditions to enrich a P-accumulating granular biomass. The study showed that an overall 45-fold increase in P concentration could be achieved by reducing the volume of the P-capturing liquor 5-fold in the anaerobic P release phase. Moreover, different fractions of the granular biomass made different individual contributions towards generating a concentrated stream of P.

Keywords: granular sludge, PAOs, P recovery, SBR

Procedia PDF Downloads 474
1951 Orbit Determination from Two Position Vectors Using Finite Difference Method

Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.

Abstract:

An unusual approach is developed to determine the orbits of satellites and space objects. Orbit determination is treated as a boundary value problem and solved using the finite difference method (FDM). Only the positions of the satellite/space object at two end times are known and taken as boundary conditions; the finite difference technique is used to calculate the orbit between the end times. In this approach, the governing equation is the satellite's equation of motion with perturbed acceleration. The governing equations and boundary conditions are discretized using the finite difference method, and the resulting system of algebraic equations is solved with the Tridiagonal Matrix Algorithm (TDMA) until convergence is achieved. The methodology was tested and evaluated using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for DOY 125, 2023. Twelve two-hour arcs were considered, with only the positions at the end times of each arc used as boundary conditions. The algorithm was applied to all GPS satellites, and the FDM results were compared with the NGA precise orbits: the maximum RSS error is 0.48 m in position and 0.43 mm/s in velocity. The algorithm was also applied to the IRNSS satellites for DOY 220, 2023, giving maximum RSS errors of 0.49 m in position and 0.28 mm/s in velocity. Next, a six-hour simulation was performed for a highly elliptical orbit for DOY 63, 2023. The RSS of the position difference is 0.92 m and of the velocity difference 1.58 mm/s for orbital speeds above 5 km/s, whereas the RSS of the position difference is 0.13 m and of the velocity difference 0.12 mm/s for orbital speeds below 5 km/s. The results show that the newly created method is reliable and accurate. Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
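
Discretizing a two-point boundary value problem with central differences produces a (block) tridiagonal linear system, which is exactly what TDMA, also known as the Thomas algorithm, solves in O(n). A generic sketch of the scalar algorithm (the paper's actual system is vector-valued, with the two boundary positions folded into the right-hand side):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d, where a is the sub-diagonal
    (a[0] unused), b the main diagonal, and c the super-diagonal (c[-1] unused)."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```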

Keywords: finite difference method, grid generation, NavIC system, orbit perturbation

Procedia PDF Downloads 77
1950 Fuzzy Control and Pertinence Functions

Authors: Luiz F. J. Maia

Abstract:

This paper presents an approach to fuzzy control, using new pertinence (membership) functions, applied to an inverted pendulum. Appropriate definitions of the pertinence functions of the fuzzy sets make it possible to implement the controller with only one control rule, resulting in a smooth control surface. The fuzzy control system can be implemented with analog devices, affording true real-time performance.
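
As a rough illustration of how pertinence functions produce a smooth control action, the sketch below blends two triangular memberships over a combined angle-plus-rate error. It is not the paper's single-rule formulation, whose pertinence functions are the novelty; the ranges and gains here are assumptions.

```python
def tri(x, a, b, c):
    """Triangular pertinence function rising from a, peaking at b, falling to c."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def control(theta, omega, u_max=10.0):
    """Smooth fuzzy control action for an inverted pendulum (illustrative only):
    the combined error theta + 0.5*omega is fuzzified into 'leaning left/right'
    memberships and defuzzified as a weighted average of two opposing pushes."""
    e = theta + 0.5 * omega
    left, right = tri(e, -2.0, -1.0, 0.0), tri(e, 0.0, 1.0, 2.0)
    total = left + right
    return 0.0 if total == 0.0 else u_max * (right - left) / total
```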

Keywords: control surface, fuzzy control, inverted pendulum, pertinence functions

Procedia PDF Downloads 442
1949 Structural Properties, Natural Bond Orbital, Density Functional Theory (DFT) Calculations, and Energies for Fluorous Compounds: C13H12F7ClN2O

Authors: Shahriar Ghammamy, Masomeh Shahsavary

Abstract:

In this paper, the optimized geometries and frequencies of the stationary points and the minimum-energy paths of C13H12F7ClN2O are calculated using the DFT (B3LYP) method with the LANL2DZ basis set. The B3LYP/LANL2DZ calculations yielded selected bond length and bond angle values for C13H12F7ClN2O.

Keywords: C13H12F7ClN2O, natural bond orbital, fluorous compounds, functional calculations

Procedia PDF Downloads 329
1948 Experimental and Theoretical Study on Flexural Behaviors of Reinforced Cement Concrete (RCC) Beams Using Carbon Fiber Reinforced Polymer (CFRP) Laminate as a Retrofitting and Rehabilitation Method

Authors: Fils Olivier Kamanzi

Abstract:

In this research, CFRP materials were used to rehabilitate 9 beams and retrofit 9 beams, each of size (125x250x2300) mm, cast from M50-grade concrete with 20% of the cement volume replaced by GGBS as a mineral admixture. A superplasticizer (Fosroc Conplast SP430) was used to reduce the water-cement ratio while maintaining good workability of the fresh concrete (slump 57 mm). The concrete mix ratio was 1:1.56:2.66 with a water-cement ratio of 0.31 (per ACI code). Samples of 6 cubes (150x150x150 mm), 6 cylinders (150 mm diameter x 300 mm height), and 6 prisms (100x100x500 mm) were cast, cured, and tested at 7, 14, and 28 days by compressive, tensile, and flexure tests; the mix design reached a compressive strength of 59.84 N/mm2. In total, 21 beams were cast and cured for up to 28 days; 3 beams were tested under two-point loading as control beams. Nine beams were distressed in flexure up to the yield point under two-point loading, by applying 90% of the ultimate load. Three sets, each of three distressed beams, were rehabilitated using one, two, and three layers of CFRP sheets, respectively, and retested up to failure. Another three sets were freshly retrofitted, also with one, two, and three layers of CFRP sheets, respectively, and tested under two-point loading on a compression testing machine. The aim of this study is to determine the flexural strength and behavior of beams repaired and retrofitted with CFRP sheets, for good strength gain and with economic aspects in mind. The results show that the rehabilitated beams increased their strength by 47%, 78%, and 89%, respectively, with the number of CFRP sheet layers, and the retrofitted beams by 41%, 51%, and 68%, respectively. The conclusion is that three layers of CFRP sheets are the most effective for the bonded repair and retrofitting of beams.

Keywords: retrofitting, rehabilitation, CFRP, RCC beam, flexural strength and behavior, GGBS, epoxy resin

Procedia PDF Downloads 99
1947 Using Machine Learning to Classify Different Body Parts and Determine Healthiness

Authors: Zachary Pan

Abstract:

Our general mission is to solve the problem of classifying images into different body part types and deciding whether each of them is healthy or not. For now, we determine healthiness for only one-sixth of the body parts, specifically the chest, by detecting pneumonia in X-ray scans of chest images. With this type of AI, doctors can obtain a second opinion when taking CT or X-ray scans of their patients. Another advantage of using a machine learning classifier is that it has no human weaknesses such as fatigue. The overall approach splits the problem into two parts: first classify the image, then determine whether it is healthy. To classify an image into a specific body part class, the body parts dataset is split into training and test sets. We can then fit many models, such as neural networks or logistic regression models, on the training set. Using the test set, we obtain a realistic estimate of the accuracy the models will have on images in the real world, since the testing images have never been seen by the models before. To increase this testing accuracy, we can also apply more complex algorithms to the models, such as multiplicative weight updates. For the second part of the problem, determining whether the body part is healthy, we use another dataset consisting of healthy and non-healthy images of the specific body part, again split into training and test sets; another neural network is trained on the training images, and the test set is used to measure its accuracy. We do this process only for the chest images. A major conclusion is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, the neural network with multiplicative weight updates, the neural network with the black-box algorithm, and the convolutional neural network achieved 96.83 percent, 97.33 percent, 97.83 percent, 96.67 percent, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines whether the images are healthy or not is around 78.37 percent.
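
A minimal sketch of the first stage (body-part classification with a small convolutional network) is given below. It is not the authors' code; the file names, image size, and six-class setup are assumptions made for illustration.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# X: grayscale scans as an (n, 64, 64, 1) array; y: integer body-part labels 0-5
X, y = np.load("scans.npy"), np.load("labels.npy")  # hypothetical files
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(6, activation="softmax"),  # six body-part classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, validation_split=0.1)
print("held-out accuracy:", model.evaluate(X_test, y_test)[1])
```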

Keywords: body part, healthcare, machine learning, neural networks

Procedia PDF Downloads 94
1946 Preliminary Evaluation of Maximum Intensity Projection SPECT Imaging for Whole Body Tc-99m Hydroxymethylene Diphosphonate Bone Scanning

Authors: Yasuyuki Takahashi, Hirotaka Shimada, Kyoko Saito

Abstract:

Bone scintigraphy is widely used as a screening tool for bone metastases. However, the 180- to 240-minute (min) waiting time after intravenous (i.v.) injection of the tracer is both long and tiresome, so a bone scan with a shorter waiting time is needed. In this study, we applied maximum intensity projection (MIP) and triple-energy-window (TEW) scatter correction to whole-body bone SPECT (merged SPECT) and investigated shortening the waiting time. Methods: In a preliminary phantom study, hot gels of 99mTc-HMDP were inserted into sets of rods with diameters ranging from 4 to 19 mm. Each rod set covered a sector of a cylindrical phantom, and the activity concentration of all rods was 2.5 times that of the background in the cylindrical body of the phantom. In the human study, SPECT images were acquired from chest to abdomen 30 to 180 min after 99mTc-hydroxymethylene diphosphonate (HMDP) injection in healthy volunteers. For both studies, MIP images were reconstructed. Planar whole-body images of the patients were also acquired, at 200 min. The image quality of the SPECT and planar images was compared. Additionally, 36 patients with breast cancer were scanned in the same way, and the detectability of uptake regions (metastases) was compared visually. Results: In the phantom study, a 4 mm hot gel was difficult to depict on conventional SPECT, but the MIP images showed it clearly. For both the healthy volunteers and the clinical patients, the accumulation of 99mTc-HMDP on SPECT was good as early as 90 min. All findings of both image sets were in agreement. Conclusion: In phantoms, MIP images with TEW scatter correction could detect rods down to a diameter of 4 mm. In patients, MIP reconstruction with TEW scatter correction improved the detectability of hot lesions. In addition, the time between injection and imaging could be shortened relative to that conventionally used for whole-body scans.
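
A maximum intensity projection is conceptually simple: for each viewing ray, keep the brightest voxel. For a reconstructed SPECT volume stored as a 3-D array, one projection per viewing angle can be sketched as follows (the axis conventions and the use of scipy's rotation are assumptions, not the study's processing chain):

```python
import numpy as np
from scipy.ndimage import rotate

def mip_views(volume, angles_deg):
    """Maximum intensity projections of a (z, y, x) volume at several viewing
    angles: rotate about the body's long axis, then take the max along x."""
    return [rotate(volume, a, axes=(1, 2), reshape=False, order=1).max(axis=2)
            for a in angles_deg]

# e.g., projections every 30 degrees around a toy volume
views = mip_views(np.random.rand(64, 64, 64), range(0, 360, 30))
```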

Keywords: merged SPECT, MIP, TEW scatter correction, 99mTc-HMDP

Procedia PDF Downloads 408
1945 Implementation of the K-Means Algorithm for Grouping Districts/Cities in Central Java Based on Macroeconomic Indicators

Authors: Nur Aziza Luxfiati

Abstract:

Clustering partitions a data set into subsets or groups such that elements share properties with a high level of similarity within one group and a low level of similarity between groups. The K-Means algorithm is one of the most widely used clustering algorithms in scientific and industrial applications, because its basic idea is very simple. This research applies clustering with the K-Means algorithm to the problem of national development imbalances between regions in Central Java Province, based on macroeconomic indicators. The data sample is secondary data obtained from the Central Java Provincial Statistics Agency, part of the published 2019 National Socio-Economic Survey (Susenas) data. The data were normalized using z-scores, and the number of clusters (k) was determined using the elbow method. After the clustering process, validity was tested using the Between-Class Variation (BCV) and Within-Class Variation (WCV) measures. The results show that outlier detection using z-score normalization found no outliers, and the clustering test obtained a ratio value that was not high, namely 0.011%. Two district/city clusters with similar economies, based on the variables used, emerged in Central Java Province: a first cluster with a high economic level consisting of 13 districts/cities, and a second cluster with a low economic level consisting of 22 districts/cities. Within the second, low-economy cluster, districts/cities were further grouped by similarity on macroeconomic indicators: 20 districts by Gross Regional Domestic Product, 19 districts by Poverty Depth Index, 5 districts by Human Development, and 10 districts by Open Unemployment Rate.
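
The workflow described (z-score normalization, elbow method, K-Means, then a BCV/WCV check) maps directly onto a few lines of scikit-learn and NumPy. A sketch follows, with the indicator matrix assumed to hold one row per district/city; the BCV/WCV formulas used here are one common definition and may differ from the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from scipy.spatial.distance import pdist

X = np.loadtxt("indicators.csv", delimiter=",")   # hypothetical data file
Xz = StandardScaler().fit_transform(X)            # z-score normalization

# elbow method: look for the "knee" in within-cluster sum of squares vs. k
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xz).inertia_
            for k in range(1, 10)]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xz)
bcv = pdist(km.cluster_centers_).sum()            # between-class variation
wcv = km.inertia_                                 # within-class variation
print("cluster sizes:", np.bincount(km.labels_), "ratio:", bcv / wcv)
```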

Keywords: clustering, K-Means algorithm, macroeconomic indicators, inequality, national development

Procedia PDF Downloads 154
1944 Inverse Scattering for a Second-Order Discrete System via Transmission Eigenvalues

Authors: Abdon Choque-Rivero

Abstract:

The Jacobi system with the Dirichlet boundary condition is considered on a half-line lattice when the coefficients are real valued. The inverse problem of recovery of the coefficients from various data sets containing the so-called transmission eigenvalues is analyzed. The Marchenko method is utilized to solve the corresponding inverse problem.

Keywords: inverse scattering, discrete system, transmission eigenvalues, Marchenko method

Procedia PDF Downloads 140
1943 From Modeling of Data Structures towards Automatic Program Generation

Authors: Valentin P. Velikov

Abstract:

Automatic program generation saves time and human resources and yields syntactically clean, logically correct modules. Fourth-generation programming languages are concerned with drawing the data and processes of the subject area, as well as with obtaining a frame of the respective information system. An application can be separated into interface and business logic; this means that, for interactive generation of the needed system, either an already existing toolkit is used or a new one is created.

Keywords: computer science, graphical user interface, user dialog interface, dialog frames, data modeling, subject area modeling

Procedia PDF Downloads 298
1942 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with the help of this technique allows checking simultaneously the shift and the probability of that shift (i.e., portfolio risks). Another application is based on the normal distribution, which is fully defined by mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values; the critical values table was designed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strong sides of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous works. At the moment, the extension to the 2-dimensional case is complete, which allows testing up to 5 parameters jointly. The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.
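
One plausible reading of the point-wise measure described above is the integrated absolute difference between the two probability density functions, which can be computed numerically and then referred to simulated critical values. A sketch under that reading (the paper's exact transformation onto cumulative distribution functions is not reproduced):

```python
import numpy as np
from scipy.stats import norm

def abs_density_difference(mu1, s1, mu2, s2, n=20001):
    """Numerically integrate |f1(x) - f2(x)| for two normal densities."""
    lo = min(mu1 - 6 * s1, mu2 - 6 * s2)
    hi = max(mu1 + 6 * s1, mu2 + 6 * s2)
    x = np.linspace(lo, hi, n)
    return np.trapz(np.abs(norm.pdf(x, mu1, s1) - norm.pdf(x, mu2, s2)), x)

# identical parameters give 0; the measure grows as mean or variance shifts
print(abs_density_difference(0.0, 1.0, 0.0, 1.0))   # 0.0
print(abs_density_difference(0.0, 1.0, 0.5, 1.2))   # > 0
```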

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 172
1941 National Assessment for Schools in Saudi Arabia: Score Reliability and Plausible Values

Authors: Dimiter M. Dimitrov, Abdullah Sadaawi

Abstract:

The National Assessment for Schools (NAFS) in Saudi Arabia consists of standardized tests in Mathematics, Reading, and Science for school grade levels 3, 6, and 9. One main goal is to classify students into four categories of NAFS performance (minimal, basic, proficient, and advanced) by school and for the entire national sample. NAFS scoring and equating are performed on a bounded scale (the D-scale, ranging from 0 to 1) in the framework of the recently developed 'D-scoring method of measurement.' The specificity of the NAFS measurement framework and the complexity of the data presented both challenges and opportunities for (a) estimating score reliability for schools, (b) setting cut-scores for the classification of students into categories of performance, and (c) generating plausible values for distributions of student performance on the D-scale. The estimation of score reliability at the school level was performed in the framework of generalizability theory (GT), with students 'nested' within schools and test items 'nested' within test forms. The GT design was executed via multilevel modeling syntax in R. Cut-scores (on the D-scale) for classifying students into performance categories were derived via a recently developed standard-setting method referred to as the 'Response Vector for Mastery' (RVM) method. For each school, the classification of students into categories of NAFS performance was based on distributions of plausible values for the students' scores on the NAFS tests by grade level (3, 6, and 9) and subject (Mathematics, Reading, and Science). Plausible values (on the D-scale) for each individual student were generated via random selection from a logit-normal distribution with parameters derived from the student's D-score and its conditional standard error, SE(D). All procedures related to D-scoring, equating, generating plausible values, and classifying students into performance levels were executed via a computer program in R developed for NAFS data analysis.
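
The plausible-value generation step can be sketched compactly: draw on the logit scale around the student's D-score and map back to (0, 1). The parameterization below (mean at logit(D), spread given by a standard error already expressed on the logit scale) is an assumption for illustration; the operational program derives these parameters from SE(D) as described above.

```python
import numpy as np
from scipy.special import logit, expit

def plausible_values(d_score, se_logit, m=5, rng=None):
    """Draw m plausible values on the bounded D-scale (0, 1) from a
    logit-normal distribution centered at the student's D-score."""
    rng = rng or np.random.default_rng(0)
    return expit(rng.normal(logit(d_score), se_logit, size=m))

print(plausible_values(0.72, 0.15))  # e.g., five draws near D = 0.72
```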

Keywords: large-scale assessment, reliability, generalizability theory, plausible values

Procedia PDF Downloads 8
1940 Discrete-Time Bulk Queue with Service Capacity Depending on Previous Service Time

Authors: Yutae Lee

Abstract:

This paper considers a discrete-time bulk-arrival, bulk-service queueing system in which the service capacity varies depending on the previous service time. Using the generating function technique and the supplementary variable method, we compute the distributions of the queue length at an arbitrary slot boundary and at a departure epoch.

Keywords: discrete-time queue, bulk queue, variable service capacity, queue length distribution

Procedia PDF Downloads 471
1939 Parallel Multisplitting Methods for Differential Systems

Authors: Malika El Kyal, Ahmed Machmoum

Abstract:

We prove the superlinear convergence of asynchronous multisplitting methods applied to differential equations. The study is based on the technique of nested sets, which permits specifying the kind of convergence in the asynchronous mode. The main characteristic of an asynchronous mode is that the local algorithm does not have to wait for predetermined messages to become available. We allow some processors to communicate more frequently than others, and we allow the communication delays to be substantial and unpredictable. Note that synchronous algorithms in the computer science sense are particular cases of our formulation of asynchronous ones.

Keywords: parallel methods, asynchronous mode, multisplitting, ODE

Procedia PDF Downloads 517
1938 Video Summarization: Techniques and Applications

Authors: Zaynab El Khattabi, Youness Tabii, Abdelhamid Benkaddour

Abstract:

Nowadays, huge multimedia repositories make the browsing, retrieval, and delivery of video contents very slow and even difficult tasks. Video summarization has been proposed to enable faster browsing of large video collections and more efficient content indexing and access. In this paper, we focus on approaches to video summarization. Video summaries can be generated in many different forms; however, the two fundamental modes are static and dynamic summaries. We present techniques for each mode from the literature and describe some features used for generating video summaries. We conclude with perspectives for further research.
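
A classic static-summarization technique of the kind surveyed here selects a keyframe whenever the color histogram changes sharply between consecutive frames. A minimal OpenCV sketch (the threshold and histogram settings are arbitrary illustrative choices):

```python
import cv2

def keyframes(path, threshold=0.4):
    """Static summary: keep frames whose color histogram differs strongly
    (Bhattacharyya distance) from the last kept frame."""
    cap = cv2.VideoCapture(path)
    kept, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev is None or cv2.compareHist(prev, hist,
                                           cv2.HISTCMP_BHATTACHARYYA) > threshold:
            kept.append(frame)
            prev = hist
    cap.release()
    return kept
```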

Keywords: video summarization, static summarization, video skimming, semantic features

Procedia PDF Downloads 392
1937 Condition for Plasma Instability and Stability Approaches

Authors: Ratna Sen

Abstract:

Because of the very high temperature of a plasma, it is very difficult to confine it for a time sufficient for nuclear fusion reactions to take place; as we know, plasma escapes faster than the binary collision rate would suggest. We studied the ball analogy and the 'energy principle' and calculated the total potential energy for the whole plasma. If δW is negative, that is, if the potential energy decreases for some displacement, the plasma is unstable. We also discuss different approaches to stability analysis, such as the Nyquist method, the MHD approximation, and the Vlasov approach, so that by using suitable magnetic field configurations we are able to create a stable plasma in a tokamak for generating energy for future generations.
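
For reference, one common textbook form of the ideal-MHD energy principle to which the abstract's δW refers is sketched below; the paper's exact expression is not given in the abstract, so this is offered only as the standard statement.

```latex
\delta W(\boldsymbol{\xi}) = \frac{1}{2}\int \Big[ \frac{|\mathbf{Q}|^{2}}{\mu_{0}}
  - \mathbf{j}\cdot(\boldsymbol{\xi}\times\mathbf{Q})
  + \gamma p\,(\nabla\cdot\boldsymbol{\xi})^{2}
  + (\boldsymbol{\xi}\cdot\nabla p)(\nabla\cdot\boldsymbol{\xi}) \Big]\, d^{3}x,
\qquad \mathbf{Q} = \nabla\times(\boldsymbol{\xi}\times\mathbf{B}),
```

with the equilibrium stable if and only if δW ≥ 0 for every allowable displacement ξ.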

Keywords: jello, magnetic field configuration, MHD approximation, energy principle

Procedia PDF Downloads 433
1936 Young Children’s Use of Representations in Problem Solving

Authors: Kamariah Abu Bakar, Jennifer Way

Abstract:

This study investigated how young children (six years old) constructed and used representations in the mathematics classroom, particularly in problem solving. The purpose of this study was to explore the ways children used representations in solving addition problems and to determine whether their representations can play a supportive role in understanding the problem situation and solving the problems correctly. Data collection included observations, children's artifacts, photographs, and conversations with children during task completion. The results revealed that children were able to construct and use various representations in solving problems; however, they had certain preferences in generating representations to support their problem solving.

Keywords: young children, representations, addition, problem solving

Procedia PDF Downloads 450
1935 NanoFrazor Lithography for Advanced 2D and 3D Nanodevices

Authors: Zhengming Wu

Abstract:

NanoFrazor lithography systems were developed as the first true alternative or extension to standard maskless nanolithography methods such as electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL): a heatable ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact of the probe on a thermally responsive resist generates high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows for markerless overlay, which has been achieved with sub-5 nm accuracy, and supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features has been demonstrated at below 10 nm resolution. The technology has proven its value as an enabler of new kinds of ultra-high-resolution nanodevices as well as a means of improving the performance of existing device concepts. The application range of this new nanolithography technique is very broad, spanning from ultra-high-resolution 2D and 3D patterning to chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible with the heated tip alone. Towards this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features. Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no development or other processing steps in between. The presentation includes examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications were enabled only by the unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged particle beam.

Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits

Procedia PDF Downloads 67
1934 A New Distribution and Application on the Lifetime Data

Authors: Gamze Ozel, Selen Cakmakyapan

Abstract:

We introduce a new model, the Marshall-Olkin Rayleigh distribution, which extends the Rayleigh distribution via the Marshall-Olkin transformation and has increasing and decreasing shapes of the hazard rate function. Various structural properties of the new distribution are derived, including explicit expressions for the moments, the generating and quantile functions, some entropy measures, and order statistics. The model parameters are estimated by the method of maximum likelihood, and the observed information matrix is determined. The potential of the new model is illustrated by means of a real-life data set.
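
The Marshall-Olkin transformation replaces a baseline survival function S0 with S(x) = alpha*S0(x) / (1 - (1 - alpha)*S0(x)). With the Rayleigh baseline S0(x) = exp(-x^2 / (2*sigma^2)) this inverts in closed form, giving a direct sampler. A sketch under this standard parameterization, which the paper may state differently:

```python
import numpy as np

def mo_rayleigh_sample(alpha, sigma, size, rng=None):
    """Sample the Marshall-Olkin Rayleigh distribution by inverting its
    survival function S(x) = alpha*S0(x) / (1 - (1-alpha)*S0(x)),
    with the Rayleigh baseline S0(x) = exp(-x**2 / (2*sigma**2))."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(size=size)                 # uniform draw plays the role of S(x)
    s0 = u / (alpha + (1.0 - alpha) * u)       # invert the Marshall-Olkin map
    return sigma * np.sqrt(-2.0 * np.log(s0))  # invert the Rayleigh survival

x = mo_rayleigh_sample(alpha=2.0, sigma=1.0, size=10000)
```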

Keywords: Marshall-Olkin distribution, Rayleigh distribution, estimation, maximum likelihood

Procedia PDF Downloads 497
1933 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been widely used. In view of the low time resolution of low-Earth-orbit SAR and the need for high-time-resolution SAR data, geosynchronous orbit (GEO) SAR is receiving more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels the utility and efficacy of GEO SAR are still uncertain. This paper examines the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented simulator is a geometry-based radar imaging simulator, which focuses on geometric quality rather than high radiometric fidelity. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3ds Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation process consists of four steps. (1) Read the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extract the small primitives (triangles) visible from the SAR platform. (2) Compute the radar scattering from the ship with the physical optics (PO) method; in this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation of each primitive is carried out separately. Since the simulator focuses on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generate the raw data with GEO SAR signal modeling; since the usual 'stop and go' model is not valid for GEO SAR, the range model is reconsidered. (4) Finally, generate the GEO SAR image with an improved range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given, with GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms. By analyzing these simulated results, the effectiveness of GEO SAR for detecting moving marine vessels is evaluated.

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 169
1932 Downscaling Daily Temperature with Neuroevolutionary Algorithm

Authors: Min Shi

Abstract:

State-of-the-art research on downscaling general circulation models (GCMs) with artificial neural networks (ANNs) mainly uses the back-propagation algorithm as the training approach. This paper introduces another training approach for ANNs, the evolutionary algorithm; the combined algorithm is named the neuroevolutionary (NE) algorithm. We investigate and evaluate the use of NE algorithms in statistical downscaling by generating temperature estimates at interior points given information from a lattice of surrounding locations. The results of our experiments indicate that NE algorithms can be an efficient alternative downscaling method for daily temperatures.
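
The idea of neuroevolution is to replace gradient-based back-propagation with an evolutionary loop over the network's weight vector: mutate, evaluate fitness, keep the best. A toy sketch for a one-hidden-layer regressor follows; the architecture, population sizes, and the simple truncation-selection scheme are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, X, hidden=8):
    """One-hidden-layer network; w is a flat weight vector."""
    n_in = X.shape[1]
    W1 = w[:n_in * hidden].reshape(n_in, hidden)
    b1 = w[n_in * hidden:(n_in + 1) * hidden]
    W2 = w[(n_in + 1) * hidden:-1].reshape(hidden, 1)
    return np.tanh(X @ W1 + b1) @ W2 + w[-1]

def neuroevolve(X, y, pop=50, gens=200, sigma=0.1, hidden=8):
    """Evolve network weights by truncation selection + Gaussian mutation;
    fitness is the negative mean squared error on the training data."""
    dim = (X.shape[1] + 1) * hidden + hidden + 1
    population = rng.normal(0.0, 1.0, (pop, dim))
    for _ in range(gens):
        fitness = np.array([-np.mean((forward(w, X, hidden).ravel() - y) ** 2)
                            for w in population])
        elite = population[np.argsort(fitness)[-pop // 5:]]      # best 20%
        population = (elite[rng.integers(0, len(elite), pop)]    # resample
                      + rng.normal(0.0, sigma, (pop, dim)))      # mutate
    scores = [-np.mean((forward(w, X, hidden).ravel() - y) ** 2)
              for w in population]
    return population[int(np.argmax(scores))]
```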

Keywords: temperature, downscaling, artificial neural networks, evolutionary algorithms

Procedia PDF Downloads 343
1931 A Very Efficient Pseudo-Random Number Generator Based On Chaotic Maps and S-Box Tables

Authors: M. Hamdi, R. Rhouma, S. Belghith

Abstract:

Random number generation is mainly used to create secret keys or random sequences, and it can be carried out by various techniques. In this paper we present a very simple and efficient pseudo-random number generator (PRNG) based on chaotic maps and S-box tables. The technique involves two main operations: one generates chaotic values using two logistic maps, and the second transforms them into binary words using random S-box tables. The simulation analysis indicates that our PRNG possesses excellent statistical and cryptographic properties.
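
The two operations named above, iterating chaotic logistic maps and substituting through S-box tables, can be sketched in a few lines. The mixing rule and the randomly generated placeholder S-box below are assumptions; the paper's actual tables and combination scheme are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
SBOX = rng.permutation(256).astype(np.uint8)   # placeholder random S-box table

def chaotic_prng(x=0.123456, y=0.654321, r=3.99, n=1024):
    """Two logistic maps generate chaotic values in (0, 1); each pair is
    combined, quantized to a byte, and substituted through the S-box."""
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        byte = int(((x + y) % 1.0) * 256) % 256   # assumed mixing/quantization
        out[i] = SBOX[byte]
    return out

stream = chaotic_prng()
```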

Keywords: random numbers, chaotic maps, S-box, cryptography, statistical tests

Procedia PDF Downloads 359