Search results for: form function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10287

10047 The Use of Gender-Fair Language in CS National Exams

Authors: Moshe Leiba, Doron Zohar

Abstract:

Computer Science (CS) and programming are still considered a boys' club and a male-dominated profession. This is also the case in high schools and higher education. In Israel, as in the rest of the world, fewer than 35% of the students who take the matriculation exams in CS are female. The Israeli matriculation exams are written in masculine-form language. Gender-fair language (GFL) aims at reducing gender stereotyping and discrimination. Several strategies can be employed to make languages gender-fair and to treat women and men symmetrically (especially in languages with grammatical gender), among them neutralization and use of the plural form. This research aims at exploring computer science teachers' beliefs regarding the use of gender-fair language in exams. An exploratory quantitative research methodology was employed to collect the data. A questionnaire was administered to 353 computer science teachers, 58% female and 42% male. 86% had been teaching for at least 3 years, and 59% had at least 7 years of teaching experience. 71% of the teachers teach in high school, and 82% prepare students for the matriculation exam in computer science. The questionnaire contained 2 matriculation exam questions from previous years and open-ended questions. Teachers were asked which form they think is best suited: (a) the existing (masculine) form, (b) both genders in full form (e.g., he/she), (c) both genders in short form, (d) plural form, (e) neutral form, and (f) feminine form. 84% of the teachers recognized the need to change the existing masculine form in the matriculation exams. About 50% of them thought that the plural form was the best-suited option. When comparing the teachers who were pro-change with those who were against it, no differences in gender or teaching experience were found. The teachers who were pro gender-fair language justified it as making the exams more personal and motivating for female students. Those who thought the masculine form should remain argued that the female students do not complain and that the change in form would not influence female students to choose to study computer science. Some even argued that the change would not affect the students but could only improve their sense of identity or feeling toward the profession (which seems like a misconception). This research suggests that the teachers are pro-change and believe that re-formulating the matriculation exams is the right step towards encouraging more female students to choose computer science as their major study track and towards bridging the gap in gender equality. This indicates a bottom-up approach, as not long after this research was conducted, the Israeli Ministry of Education decided to change the matriculation exams to gender-fair language using the plural form. In the coming years, with the transition to web-based examination, it is suggested to use personalization and adjust the language form in accordance with the student's gender.

Keywords: computer science, gender-fair language, teachers, national exams

Procedia PDF Downloads 85
10046 Efficient Subgoal Discovery for Hierarchical Reinforcement Learning Using Local Computations

Authors: Adrian Millea

Abstract:

In hierarchical reinforcement learning, one of the main issues encountered is the discovery of subgoal states or options (policies reaching subgoal states) by partitioning the environment in a meaningful way. This partitioning usually requires an expensive global clustering operation or an eigendecomposition of the Laplacian of the state graph. We propose a local solution to this issue, much more efficient than algorithms using global information, which successfully discovers subgoal states by computing a simple function, which we call heterogeneity, for each state as a function of its neighbors. Moreover, we construct a value function using the difference in heterogeneity from one step to the next as reward, such that we are able to explore the state space much more efficiently than, say, epsilon-greedy. The same principle can then be applied to higher levels of the hierarchy, where the states are now the subgoals discovered at the level below.
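
As an illustration of the local computation involved, here is a minimal sketch in Python. The abstract does not give a formula for heterogeneity, so the definition below (a state's degree deviating from the average degree of its neighbors, which makes bottleneck states such as doorways stand out) is a hypothetical stand-in, as is the graph-based setting.

```python
import networkx as nx

def heterogeneity(graph: nx.Graph, state) -> float:
    """Hypothetical local heterogeneity: how much a state's connectivity
    differs from that of its neighbors, computed from local information only."""
    neighbors = list(graph.neighbors(state))
    if not neighbors:
        return 0.0
    avg_neighbor_degree = sum(graph.degree(n) for n in neighbors) / len(neighbors)
    # Relative deviation of this state's degree from its neighborhood average.
    return abs(graph.degree(state) - avg_neighbor_degree) / avg_neighbor_degree

def heterogeneity_reward(graph: nx.Graph, prev_state, state) -> float:
    """Exploration reward: change in heterogeneity from one step to the next,
    encouraging movement toward candidate subgoal (bottleneck) states."""
    return heterogeneity(graph, state) - heterogeneity(graph, prev_state)
```

In a two-room gridworld joined by a single doorway, for example, the doorway state has a markedly different degree from its neighbors, so its heterogeneity peaks and the shaped reward draws exploration toward it.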

Keywords: exploration, hierarchical reinforcement learning, locality, options, value functions

Procedia PDF Downloads 136
10045 Rational Bureaucracy and E-Government: A Philosophical Study of Universality of E-Government

Authors: Akbar Jamali

Abstract:

Hegel is the first great political philosopher to contemplate bureaucracy specifically. For Hegel, bureaucracy is the function of the state. Since the state is essentially a rational organization, its function, namely bureaucracy, must be rational. Since what is rational is universal, Hegel had to explain how bureaucracy could be understood as universal. Hegel discusses bureaucracy in his treatment of 'executive power'. He analyses modern bureaucracy as a form of political organization, its constituent members, and its relation to the social environment. Therefore, the essence of bureaucracy in Hegel's philosophy is the implementation of law and rules. Hegel argues that unlike the other social classes, which are particular because they pursue their own private interests, bureaucracy as a class is 'universal' because its orientation is the interest of the state. The state, for Hegel, is essentially rational and universal; it is the actualization of 'objective Spirit'. Marx criticizes Hegel's argument on the universality of the state and bureaucracy. For Marx, the state is equal to bureaucracy: it constitutes a social class based on the interest of the bourgeois class, which dominates society and exploits the proletarian class. Therefore, the main disagreement between these political philosophers is whether the state (bureaucracy) is universal or particular. The growth of e-government in the modern state as an important aspect of development leads us to contemplate the particularity and universality of e-government. In this article, we argue that e-government is essentially universal. E-government, in itself, is impartial; therefore, it cannot be particular. The development of e-government eliminates many side effects of the private, personal, or particular interests of the individuals who make up the bureaucracy. Finally, we argue that the more a state is developed, the more universal it is. Therefore, the development of e-government makes the state more universal and affects the modern philosophical debate on the particularity or universality of bureaucracy and the state.

Keywords: particularity, universality, rational bureaucracy, impartiality

Procedia PDF Downloads 215
10044 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation

Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo Vázquez, Jonas Funkquist, Sotirios Thanopoulos

Abstract:

One of the important requirements in many code generation projects is defining some of the model parameters as tunable. This makes it possible to update the model parameters without performing the code generation again. This paper studies the concept of embedded code generation with the MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining the parameters as tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make the parameters tunable in generated runtime modules. Three techniques are proposed for this purpose: normal tunable parameters, callback functions, and mask subsystems. Moreover, some test Simulink models are developed and used to evaluate the results of the proposed approaches. A brief summary of the study results is presented in the following. First of all, parameters defined as tunable and used in defining the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after the code generation, and this update will affect the values of all elements defined based on the tunable parameter. For instance, if parameter K=1 is defined as a tunable parameter in the code generation process and this parameter is used as the gain of a gain block in Simulink, the gain value of the gain block is equal to 1 in the TwinCAT environment after the code generation. But the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without doing any new code generation in MATLAB); the gain value of the gain block will then change to 2. Secondly, adding a callback function in the form of a "pre-load function," "post-load function," or "start function" will not help to make the parameters tunable without performing a new code generation. This means that any MATLAB files should be run before performing the code generation; the parameters defined or calculated in these files will be used as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model will not make these parameters flexible, since the MATLAB files will not be attached to the generated code. Therefore, to change the parameters defined or calculated in these files, the code generation should be done again. However, adding these files as callback functions forces MATLAB to run them before the code generation, and there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter in defining or calculating the values of other parameters through a mask is an efficient method to change the value of the latter parameters after the code generation. For instance, if tunable parameter K is used in calculating the value of two other parameters K1 and K2 and, after the code generation, the value of K is updated in the TwinCAT environment, the values of parameters K1 and K2 will also be updated (without any new code generation).

Keywords: code generation, MATLAB, tunable parameters, TwinCAT

Procedia PDF Downloads 199
10043 The Finite Element Method for Nonlinear Fredholm Integral Equation of the Second Kind

Authors: Melusi Khumalo, Anastacia Dlamini

Abstract:

In this paper, we consider a numerical solution for nonlinear Fredholm integral equations of the second kind. We work with a uniform mesh and use Lagrange polynomials together with the Galerkin finite element method, where the weight function is chosen in such a way that it takes the form of the approximate solution but with arbitrary coefficients. We apply the finite element method to the nonlinear Fredholm integral equations of the second kind and consider the error analysis of the method. Furthermore, we look at a specific example to illustrate the implementation of the finite element method.
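
For reference, the problem class treated here is standard: a nonlinear Fredholm integral equation of the second kind can be written as (notation chosen for this summary, not taken from the paper)

\[ u(x) = f(x) + \lambda \int_a^b K(x,t)\, \psi\bigl(t, u(t)\bigr)\, dt, \qquad x \in [a,b], \]

where the kernel K and the forcing term f are given, and the nonlinearity enters through \(\psi\) acting on the unknown u.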

Keywords: finite element method, Galerkin approach, Fredholm integral equations, nonlinear integral equations

Procedia PDF Downloads 340
10042 Bayesian Optimization for Reaction Parameter Tuning: An Exploratory Study of Parameter Optimization in Oxidative Desulfurization of Thiophene

Authors: Aman Sharma, Sonali Sengupta

Abstract:

The study explores the utility of Bayesian optimization in tuning the physical and chemical parameters of reactions in an offline experimental setup. A comparative analysis of the influence of the acquisition function on the optimization performance is also presented. For proxy first- and second-order reactions, the results are indifferent to the acquisition function used, whereas for the parameters of oxidative desulphurization of thiophene in an offline setup, the upper confidence bound (UCB) provides faster convergence along with a marginal trade-off in the maximum conversion achieved. The work also demarcates the critical number of independent parameters and input observations required for both sequential and offline reaction setups to yield tangible results.
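
A minimal sketch of the UCB-driven loop, assuming a Gaussian process surrogate over a single reaction parameter; the objective `conversion` and its parameter range are hypothetical stand-ins, not the paper's experimental system.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def conversion(x):
    # Hypothetical stand-in for the measured thiophene conversion.
    return np.exp(-(x - 0.6) ** 2 / 0.05) * 0.9

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3).reshape(-1, 1)          # initial experiments
y = conversion(X).ravel()
candidates = np.linspace(0, 1, 200).reshape(-1, 1)
kappa = 2.0                                       # exploration weight in UCB

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):                               # sequential experiment loop
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(mu + kappa * sigma)]   # UCB rule
    X = np.vstack([X, [x_next]])
    y = np.append(y, conversion(x_next))

print("best parameter:", X[np.argmax(y)], "best conversion:", y.max())
```

Larger values of kappa favor exploration of uncertain regions; smaller values exploit the current surrogate maximum, which is the trade-off the acquisition-function comparison probes.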

Keywords: acquisition function, Bayesian optimization, desulfurization, kinetics, thiophene

Procedia PDF Downloads 149
10041 Analytical Design of Fractional-Order PI Controller for Decoupling Control System

Authors: Truong Nguyen Luan Vu, Le Hieu Giang, Le Linh

Abstract:

The FOPI controller is proposed based on the main properties of the decoupling control scheme as well as on fractional calculus. Using the simplified decoupling technique, the transfer function of the decoupled apparent process is first separated into a set of n equivalent independent processes, expressed as the ratio of the diagonal elements of the original open-loop transfer function to those of the dynamic relative gain array; a fractional-order PI controller is then developed for each control loop based on Bode's ideal transfer function, which gives the desired fractional closed-loop response in the frequency domain. Simulation studies were carried out to evaluate the proposed design approach in a fair comparison with other existing methods in accordance with structured singular value (SSV) theory, which is used to measure the robust stability of control systems under multiplicative output uncertainty. The simulation results indicate that the proposed method consistently performs well, with fast and well-balanced closed-loop time responses.
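
For reference, Bode's ideal loop transfer function, which such designs target, has the well-known form

\[ L(s) = \left(\frac{\omega_c}{s}\right)^{\gamma}, \qquad 1 < \gamma < 2, \]

where \(\omega_c\) is the gain crossover frequency and the fractional order \(\gamma\) fixes a constant phase of \(-\gamma\pi/2\) at all frequencies, giving the closed loop its iso-damping (robust phase margin) property. The symbols here are the conventional ones and not necessarily those of the paper.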

Keywords: Bode's ideal transfer function, fractional calculus, fractional-order proportional integral (FOPI) controller, decoupling control system

Procedia PDF Downloads 299
10040 Combined Odd Pair Autoregressive Coefficients for Epileptic EEG Signals Classification by Radial Basis Function Neural Network

Authors: Boukari Nassim

Abstract:

This paper describes the use of odd-pair autoregressive coefficients (Yule-Walker and Burg) for feature extraction from electroencephalogram (EEG) signals. For classification, a radial basis function neural network (RBFNN) is employed. The RBFNN is described by its architecture and characteristics: the RBF is defined by a spread parameter, which is modified to improve the classification results. Five types of EEG signals are used in this work: sets A and B for normal signals, sets C and D for interictal signals, and set E for ictal signals (available from the Bonn University dataset). At the output, two-class problems are considered (AC, AD, AE, BC, BD, BE, CE, DE); the best accuracy, 99%, is obtained with the combined odd-pair autoregressive coefficients. Our method is very effective for the diagnosis of epileptic EEG signals.
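
As an illustration of the feature-extraction step, here is a minimal Python sketch of Yule-Walker AR coefficient estimation; the synthetic signal stands in for an EEG epoch, and the pairing of odd-order coefficients and the RBF network stage are not shown.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker_ar(signal, order):
    """Estimate AR(order) coefficients via the Yule-Walker equations."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    # Biased autocorrelation estimates r[0..order].
    r = np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(order + 1)]) / len(x)
    # Solve the symmetric Toeplitz system R a = r[1:] for the AR coefficients.
    return solve_toeplitz(r[:-1], r[1:])

# Synthetic burst standing in for a one-second EEG epoch.
rng = np.random.default_rng(1)
eeg_epoch = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.5 * rng.standard_normal(1024)
features = yule_walker_ar(eeg_epoch, order=5)
print(features)
```

The resulting coefficient vector (here order 5) is the kind of compact feature representation that can be fed to a classifier such as an RBFNN.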

Keywords: epilepsy, EEG signals classification, combined odd pair autoregressive coefficients, radial basis function neural network

Procedia PDF Downloads 321
10039 Improved Safety Science: Utilizing a Design Hierarchy

Authors: Ulrica Pettersson

Abstract:

Information on incidents is regularly collected through pre-printed incident report forms. These tend to be incomplete and frequently lack essential information. One consequence is that reports with inadequate information, which do not fulfil analysts' requirements, are transferred into the analysis process. To improve an incident reporting form, theory from design science, witness psychology, and interview and questionnaire research has been used. Three experiments previously conducted to evaluate the form have shown significantly improved results, and the form has proved to capture knowledge regardless of the incidents' character or context. The aim of this paper is to describe how design science, more specifically a design hierarchy, can be used to construct a collection form for improvements in safety science.

Keywords: data collection, design science, incident reports, safety science

Procedia PDF Downloads 182
10038 Mitochondrial DNA Defect and Mitochondrial Dysfunction in Diabetic Nephropathy: The Role of Hyperglycemia-Induced Reactive Oxygen Species

Authors: Ghada Al-Kafaji, Mohamed Sabry

Abstract:

Mitochondria are the site of cellular respiration and produce energy in the form of adenosine triphosphate (ATP) via oxidative phosphorylation. They are the major source of intracellular reactive oxygen species (ROS) and are also a direct target of ROS attack. Oxidative stress and ROS-mediated disruptions of mitochondrial function are major components involved in the pathogenicity of diabetic complications. In this work, the changes in mitochondrial DNA (mtDNA) copy number, biogenesis, gene expression of mtDNA-encoded subunits of electron transport chain (ETC) complexes, and mitochondrial function in response to hyperglycemia-induced ROS, as well as the effect of direct inhibition of ROS on mitochondria, were investigated in an in vitro model of diabetic nephropathy using human renal mesangial cells. The cells were exposed to normoglycemic and hyperglycemic conditions in the presence and absence of Mn(III)tetrakis(4-benzoic acid) porphyrin chloride (MnTBAP) or catalase for 1, 4, and 7 days. ROS production was assessed by confocal microscopy and flow cytometry. mtDNA copy number and PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, COI, and ATPase 6 transcripts, were analyzed by real-time PCR. PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, COI, and ATPase 6 proteins, were analyzed by Western blotting. Mitochondrial function was determined by assessing the mitochondrial membrane potential and ATP levels. Hyperglycemia induced a significant increase in the production of mitochondrial superoxide and hydrogen peroxide at day 1 (P < 0.05), and this increase remained significantly elevated at days 4 and 7 (P < 0.05). The copy number of mtDNA and the expression of PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, CO1, and ATPase 6, increased after one day of hyperglycemia (P < 0.05), with a significant reduction in all those parameters at 4 and 7 days (P < 0.05). The mitochondrial membrane potential decreased progressively from 1 to 7 days of hyperglycemia, with a parallel progressive reduction in ATP levels over time (P < 0.05). MnTBAP and catalase treatment of cells cultured under hyperglycemic conditions attenuated ROS production, reversed renal mitochondrial oxidative stress, and improved mtDNA, mitochondrial biogenesis, and function. These results show that hyperglycemia-induced ROS cause an early increase in mtDNA copy number, mitochondrial biogenesis, and mtDNA-encoded gene expression of the ETC subunits in human mesangial cells as a compensatory response to the decline in mitochondrial function; these changes precede the mtDNA defect and mitochondrial dysfunction that develop with a progressive oxidative response. Protection from ROS-mediated damage to renal mitochondria induced by hyperglycemia may be a novel therapeutic approach for the prevention/treatment of DN.

Keywords: diabetic nephropathy, hyperglycemia, reactive oxygen species, oxidative stress, mtDNA, mitochondrial dysfunction, manganese superoxide dismutase, catalase

Procedia PDF Downloads 223
10037 The Reach of Shopping Center Layout Form on Subway Based on Kernel Density Estimate

Authors: Wen Liu

Abstract:

With the rapid progress of modern cities, railway construction has been developing quickly in China. In a typically high-density country, shopping centers along the subway should be one important factor in the process of urban development. This paper discusses the influence of the layout of shopping centers on the subway and places it along the temporal and spatial axes of Shanghai's urban development. We use digital technology to establish a database of the relevant information and then derive the changing role of shopping centers on the subway in Shanghai by kernel density estimation. The result shows that the development of shopping centers on the subway is related to local economic strength, population size, policy support, and city construction, and that the suburbanization trend of shopping centers will become increasingly significant. Through this case research, we can see that the kernel density estimate is an efficient analysis method for spatial layout: it can reveal the essential characteristics of the layout form of shopping centers on the subway, and it can also be applied to other research on spatial form.
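
A minimal sketch of the kernel density estimation step, with hypothetical coordinates standing in for the Shanghai shopping-center data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Hypothetical (longitude, latitude) coordinates of shopping centers
# clustered near subway lines; shape (2, n) as gaussian_kde expects.
coords = rng.normal(loc=[121.47, 31.23], scale=[0.05, 0.04], size=(60, 2)).T

kde = gaussian_kde(coords)              # bandwidth chosen by Scott's rule
grid_x, grid_y = np.mgrid[121.3:121.7:100j, 31.1:31.4:100j]
density = kde(np.vstack([grid_x.ravel(), grid_y.ravel()])).reshape(100, 100)
print("peak density cell:", np.unravel_index(density.argmax(), density.shape))
```

Evaluating the estimate for different time slices of the database is one way to trace how the density surface, and thus the layout form, shifts outward over the years.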

Keywords: Shanghai, shopping center on the subway, layout form, Kernel density estimate

Procedia PDF Downloads 282
10036 Functions of Public Policy in Private International Law

Authors: Fedorova Elena

Abstract:

In this article, we draw a distinction between two important functions of public policy in private international law. The first function is widely recognized and relates to the prevention of the application of foreign laws and the enforcement of foreign court judgments whenever their effects are incompatible with the domestic legal system of the forum. This effectively protects the sovereign rights of the forum state, as it allows it to resist the undesirable effects of foreign law-making and law-enforcement policies. The second function is less obvious, but no less important. Like internal private legal relationships, international private relationships are usually governed by rules of public policy from which the parties cannot derogate by mutual agreement. Therefore, for international private law relations, public policy has a different function than the one previously mentioned: in this case, public policy acts as a defense against unacceptable effects of party autonomy. Thus, this second function of public policy consists in limiting party autonomy whose effects would be unacceptable for the local legal system. In the frame of this second function, the author analyses two types of public policy which can limit party autonomy: 'substantial' public policy (which regulates the substance of the international legal relationship) and 'conflictual' public policy (which regulates the parties' autonomy to choose the law applicable to the substance of the relationship). The author provides an analysis of these functions of public policy in the field of international contract law because of the important role of the principle of party autonomy in international contract relations.

Keywords: public policy, general theory of private international law, substantial public policy, conflictual public policy

Procedia PDF Downloads 546
10035 Effect of Degree of Phosphorylation on Electrospinning and In vitro Cell Behavior of Phosphorylated Polymers as Biomimetic Materials for Tissue Engineering Applications

Authors: Pallab Datta, Jyotirmoy Chatterjee, Santanu Dhara

Abstract:

Over the past few years, phosphorus-containing polymers have received widespread attention for applications such as high-performance optical fibers, flame-retardant materials, drug delivery, and tissue engineering. Being pentavalent, phosphorus can exist in different chemical environments in these polymers, which increases their versatility. In human biochemistry, phosphorus-based compounds exert their functions both in soluble and insoluble form, occurring as inorganic or as organophosphorus compounds. Specifically, in the case of biomacromolecules, phosphates are critical for the functions of DNA, ATP, phosphoproteins, phospholipids, phosphoglycans, and several coenzymes. Inspired by the role of phosphorus in functional biomacromolecules, the design and synthesis of biomimetic materials have been carried out by several authors to study macromolecular function or as substitutes in clinical tissue regeneration. In addition, many regulatory signals of the body are controlled by phosphorylation of key proteins, present either in the form of growth factors or matrix-bound scaffold proteins. This has inspired work on the synthesis of phospho-peptidomimetic amino acids for understanding key signaling pathways, extended to obtain molecules with potentially useful biological properties. Apart from the above applications, phosphate groups bound to polymer backbones have also been demonstrated to improve the function of osteoblast cells and augment the performance of bone grafts. Despite the advantages of phosphate grafting, however, there is limited understanding of the effect of the degree of phosphorylation on macromolecular physicochemical and/or biological properties. Such investigations are necessary to effectively translate knowledge of macromolecular biochemistry into relevant clinical products, since these properties directly influence the processability of the polymers into suitable scaffold structures and control the subsequent biological response. Among the various techniques for fabricating biomimetic scaffolds, nanofibrous scaffolds fabricated by electrospinning offer some special advantages in resembling the attributes of the natural extracellular matrix. Understanding changes in the physicochemical properties of polymers as a function of phosphorylation is therefore crucial in the development of nanofiber scaffolds based on phosphorylated polymers. The aim of the present work is to investigate the effect of phosphorus grafting on the electrospinning behavior of polymers, with the aim of obtaining biomaterials for bone regeneration applications. For this purpose, phosphorylated derivatives of two polymers with widely different electrospinning behaviors were selected as starting materials. Poly(vinyl alcohol) is a conveniently electrospinnable polymer at different conditions and concentrations; on the other hand, electrospinning of chitosan-backbone-based polymers has been viewed as a critical challenge. The phosphorylated derivatives of these polymers were synthesized and characterized, and the electrospinning behavior of various solutions containing these derivatives was compared with the electrospinning of pure poly(vinyl alcohol). In PVA, phosphorylation adversely impacted electrospinnability, while in NMPC, higher phosphate content widened the concentration range for nanofiber formation. Culture of MG-63 cells on the electrospun nanofibers revealed that the degree of phosphate modification of a polymer significantly improves cell adhesion and the osteoblast function of cultured cells. It is concluded that improvement of the cell-response parameters of nanofiber scaffolds can be attained as a function of a controlled degree of phosphate grafting in polymeric biomaterials, with implications for bone tissue engineering applications.

Keywords: bone regeneration, chitosan, electrospinning, phosphorylation

Procedia PDF Downloads 195
10034 A Perspective on Teaching Mathematical Concepts to Freshman Economics Students Using 3D-Visualisations

Authors: Muhammad Saqib Manzoor, Camille Dickson-Deane, Prashan Karunaratne

Abstract:

The Cobb-Douglas production (utility) function is a fundamental function widely used in economics teaching and research, the key reason being its capacity to describe actual production using inputs like labour and capital. The characteristics of the function, such as returns to scale and marginal and diminishing marginal productivities, are covered in the introductory units of both microeconomics and macroeconomics with a 2-dimensional static visualisation of the function. However, less insight is provided regarding the three-dimensional surface, the changes in curvature properties due to returns to scale, the linkage of the short-run production function with its long-run counterpart and marginal productivities, the level curves, and constrained optimisation. Since (freshman) learners have diverse prior knowledge and cognitive skills, the existing 'one size fits all' approach is not very helpful. The aim of this study is to bridge this gap by introducing a technological intervention with interactive animations of the three-dimensional surface and a sequential unveiling of the characteristics mentioned above using Python software. A small classroom intervention has helped students enhance their analytical and visualisation skills towards active and authentic learning of this topic. However, to validate the strength of our approach, a quasi-Delphi study will be conducted to ask domain-specific experts, 'What value to the learning process in economics is there in using a 2-dimensional static visualisation compared to a 3-dimensional dynamic visualisation?' Three perspectives of the intervention were reviewed by a panel comprising novice students, experienced students, novice instructors, and experienced instructors in an effort to determine the learnings from each type of visualisation within a specific domain of knowledge. The value of this approach is key to suggesting different pedagogical methods which can enhance learning outcomes.
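
A minimal static sketch of the kind of 3D rendering described, assuming the standard Cobb-Douglas form Q = A·L^alpha·K^beta; the authors' interactive animations and sequential unveiling go beyond this.

```python
import numpy as np
import matplotlib.pyplot as plt

A, alpha, beta = 1.0, 0.3, 0.7           # constant returns to scale (alpha+beta=1)
L, K = np.meshgrid(np.linspace(0.1, 10, 50), np.linspace(0.1, 10, 50))
Q = A * L**alpha * K**beta                # Cobb-Douglas output surface

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(L, K, Q, cmap="viridis")
ax.contour(L, K, Q, zdir="z", offset=0, levels=10)   # level curves (isoquants)
ax.set_xlabel("Labour (L)")
ax.set_ylabel("Capital (K)")
ax.set_zlabel("Output (Q)")
plt.show()
```

Varying alpha + beta above or below 1 and re-rendering shows the curvature change under increasing or decreasing returns to scale, which is exactly the property a static 2D plot cannot convey.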

Keywords: Cobb-Douglas production function, quasi-Delphi method, effective teaching and learning, 3D-visualisations

Procedia PDF Downloads 115
10033 Optimal Mother Wavelet Function for Shoulder Muscles of Upper Limb Amputees

Authors: Amanpreet Kaur

Abstract:

Wavelet transform (WT) is a powerful statistical tool used in applied mathematics for signal and image processing. Different mother wavelet basis functions have been compared to select the optimal wavelet function for representing the electromyogram (EMG) signal characteristics of upper limb amputees. Four EMG electrodes were placed at different locations on the shoulder muscles. Twenty-one wavelet functions from different wavelet families were investigated, including Daubechies (db1-db10), Symlets (sym1-sym5), Coiflets (coif1-coif5), and the discrete Meyer wavelet. Using the mean square error value, the significance of the mother wavelet functions was determined for the teres, pectoralis, and infraspinatus muscles around the shoulder. The results show that the best mother wavelet is db3 from the Daubechies family for efficient classification of the signal.
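
A minimal sketch of ranking candidate mother wavelets by mean square error using PyWavelets; the synthetic burst stands in for a recorded EMG signal, the exact error criterion of the study may differ, and PyWavelets' Symlet family starts at sym2 rather than sym1.

```python
import numpy as np
import pywt

def reconstruction_mse(signal, wavelet, level=4):
    """Decompose, keep only the approximation band, reconstruct, and score."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    rec = pywt.waverec(coeffs, wavelet)[: len(signal)]
    return float(np.mean((signal - rec) ** 2))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024)
# Decaying oscillatory burst standing in for a shoulder-muscle EMG epoch.
emg = np.sin(2 * np.pi * 50 * t) * np.exp(-4 * t) + 0.1 * rng.standard_normal(1024)

candidates = ([f"db{i}" for i in range(1, 11)]
              + [f"sym{i}" for i in range(2, 6)]
              + [f"coif{i}" for i in range(1, 6)] + ["dmey"])
best = min(candidates, key=lambda w: reconstruction_mse(emg, w))
print("best mother wavelet by MSE:", best)
```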

Keywords: Daubechies, upper limb amputation, shoulder muscles, Symlets, Coiflets

Procedia PDF Downloads 214
10032 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation within the context of fitting a PDRM for bank audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1,000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE), and root mean square error (RMSE) of the parameter estimates. Eighteen different models at different specified conditions were fitted, and the best-fitted model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, which establishes that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
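
A minimal sketch of one cell of such a Monte Carlo design, assuming a single-regressor two-way error component model with a linear heteroscedasticity function; the 81 design variations and the LM statistics themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, beta, reps = 30, 10, 1.0, 1000
estimates = np.empty(reps)

for r in range(reps):
    x = rng.uniform(0, 1, (N, T))
    mu = rng.standard_normal((N, 1))        # individual effects
    lam = rng.standard_normal((1, T))       # time effects
    z = rng.uniform(0, 1, (N, 1))
    sigma = 1.0 + 2.0 * z                   # linear heteroscedasticity function
    v = sigma * rng.standard_normal((N, T))
    y = beta * x + mu + lam + v
    # Two-way within transformation wipes out mu_i and lambda_t.
    xd = x - x.mean(1, keepdims=True) - x.mean(0, keepdims=True) + x.mean()
    yd = y - y.mean(1, keepdims=True) - y.mean(0, keepdims=True) + y.mean()
    estimates[r] = (xd * yd).sum() / (xd ** 2).sum()   # within estimator

bias = estimates.mean() - beta
mse = np.mean((estimates - beta) ** 2)
print(f"ABIAS={abs(bias):.4f}  MSE={mse:.5f}  RMSE={np.sqrt(mse):.5f}")
```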

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 114
10031 Cubical Representation of Prime and Essential Prime Implicants of Boolean Functions

Authors: Saurabh Rawat, Anushree Sah

Abstract:

K-maps are generally, and ideally, thought to be the simplest way of obtaining solutions to Boolean equations. The cubical representation of Boolean equations is an alternative route to a solution otherwise worked out with truth tables, Boolean laws, and the various traits of Karnaugh maps. The largest possible k-cubes that exist for a given function are equivalent to its prime implicants. A technique for the minimization of logic functions through cubical methods is attempted here. The main purpose is to raise awareness of, and utilise, the advantages of cubical techniques in the minimization of logic functions. All this is done with the aim of achieving a minimal-cost solution.
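
A minimal sketch of the cube-combination idea (the Quine-McCluskey view of growing the largest possible k-cubes until they are prime implicants); the three-variable function used is a hypothetical example, not one from the paper.

```python
def prime_implicants(minterms, n_vars):
    """Return prime implicants as strings over {'0', '1', '-'}."""
    cubes = {format(m, f"0{n_vars}b") for m in minterms}
    primes = set()
    while cubes:
        merged = set()
        next_cubes = set()
        for a in cubes:
            for b in cubes:
                diff = [i for i in range(n_vars) if a[i] != b[i]]
                # Two cubes combine iff they differ in exactly one
                # specified (non-dash) position; the dash marks the
                # eliminated literal, doubling the cube's size.
                if len(diff) == 1 and "-" not in (a[diff[0]], b[diff[0]]):
                    next_cubes.add(a[: diff[0]] + "-" + a[diff[0] + 1:])
                    merged.update((a, b))
        primes |= cubes - merged   # cubes that combine no further are prime
        cubes = next_cubes
    return primes

# f(A,B,C) = sum of minterms (0,1,2,5,6,7) yields six 1-cubes as primes:
# 00- (A'B'), 0-0 (A'C'), -01 (B'C), -10 (BC'), 1-1 (AC), 11- (AB).
print(prime_implicants([0, 1, 2, 5, 6, 7], 3))
```

Selecting the essential prime implicants (those covering a minterm no other prime covers) is the follow-up step toward the minimal-cost solution.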

Keywords: K-maps, don’t care conditions, Boolean equations, cubes

Procedia PDF Downloads 363
10030 Borate Crosslinked Fracturing Fluids: Laboratory Determination of Rheology

Authors: Lalnuntluanga Hmar, Hardik Vyas

Abstract:

Hydraulic fracturing has become an essential procedure for breaking apart rock and releasing the oil or gas trapped tightly within it by pumping fracturing fluids at high pressure down into the well. To open the fracture and to transport the propping agent along it, proper selection of fracturing fluids is the most crucial component of fracturing operations. The rheological properties of the fluids are usually considered the most important. Among the various fracturing fluids, borate-crosslinked fluids have proved to be highly effective. Borate, in the form of the borate ion from boric acid, is most commonly used to crosslink the hydrated polymers and to produce very viscous gels that remain stable at high temperature. Guar and HPG (hydroxypropyl guar) polymers are those most often used in these fluids. Borate gel rheology is known to be a function of polymer concentration, borate ion concentration, pH, and temperature. Crosslinking with borate is a function of pH, which means it can be formed or reversed simply by altering the pH of the fluid system. The fluid system was prepared by mixing the base polymer with water at pH values between 8 and 11, and the optimum borate crosslinker efficiency was found at a pH of about 10. The rheology of the laboratory-prepared borate-crosslinked fracturing fluid was determined using an Anton Paar rheometer and a Fann viscometer. The viscosity was measured at temperatures ranging from 200°F to 250°F and at elevated pressures in order to partially simulate downhole conditions. The rheological measurements showed that crosslinking increases the viscosity and elasticity, and thus the fluid's capability to transport the propping agent.

Keywords: borate, crosslinker, Guar, Hydroxypropyl Guar (HPG), rheology

Procedia PDF Downloads 179
10029 Investigating Students' Understanding about Mathematical Concept through Concept Map

Authors: Rizky Oktaviana

Abstract:

The main purpose of studying lies in improving students' understanding. Teachers usually use written tests to measure students' understanding of learning material, especially mathematical learning material. This common method has a weak point: in mathematics, written tests only show the procedural steps used to solve mathematical problems. Therefore, teachers are unable to see whether students actually understand the mathematical concepts and the relations between concepts. One of the best tools for observing students' understanding of mathematical concepts is the concept map. The goal of this research is to describe junior high school students' understanding of mathematical concepts through concept maps, based on differences in mathematical ability. There were three steps in this research. The first step was choosing the research subjects by giving a mathematical ability test to students; the subjects of this research are three students with different mathematical abilities: high, intermediate, and low. The second step was giving concept mapping training to the chosen subjects. The last step was giving the subjects a concept mapping task about functions. Nodes representing concepts of function were provided in the concept mapping task, and the subjects had to use these nodes in their concept maps. Based on the data analysis, the results of this research show that the subject with high mathematical ability has formal understanding, as that subject could see the connections between concepts of function and arrange the concepts into a concept map with a valid hierarchy. The subject with intermediate mathematical ability has relational understanding, as the subject could arrange all the given concepts and give appropriate labels between concepts, though without yet representing the connections specifically. The subject with low mathematical ability has poor understanding of functions, as seen from a concept map that used only a few of the given concepts because the subject could not see the connections between them. All subjects have instrumental understanding of the relation between the linear function concept, the quadratic function concept, and domain, codomain, and range.

Keywords: concept map, concept mapping, mathematical concepts, understanding

Procedia PDF Downloads 245
10028 Role of Environmental Focus in Legal Protection and Efficient Management of Wetlands in the Republic of Kazakhstan

Authors: K. R. Balabiyev, A. O. Kaipbayeva

Abstract:

The article discusses the legal framework of the government's environmental function and analyzes the role of national policy in the protection of wetlands. The problem is of interest because it deals with a most important branch of the economy: the utilization of Kazakhstan's natural resources and the protection of the health and environmental well-being of the population. The development of a long-term environmental program addressing the protection of wetlands represents the final stage of the government's environmental policy and is a relatively new function for the public administration system. It appeared due to environmental measures that require immediate decisions to be taken. It is an integral part of the effort in the field of management of state-owned natural resources, as well as of the measures aimed at efficient management of natural resources to avoid their early depletion or contamination.

Keywords: environmental focus, government’s environmental function, protection of wetlands, Kazakhstan

Procedia PDF Downloads 305
10027 Meta Model for Optimum Design Objective Function of Steel Frames Subjected to Seismic Loads

Authors: Salah R. Al Zaidee, Ali S. Mahdi

Abstract:

Except for simple problems of statically determinate structures, optimum design problems in structural engineering have implicit objective functions, where structural analysis and design are essential within each search loop. With these implicit functions, the structural engineer is usually forced to write his or her own computer code for analysis, design, and searching for the optimum design among many feasible candidates, and cannot take advantage of available software for structural analysis, design, and optimum searching. The meta-model is a regression model used to transform an implicit objective function into an explicit one, which in turn decouples the structural analysis and design processes from the optimum searching process. With the meta-model, well-known software for structural analysis and design can be used in sequence with optimum searching software. In this paper, the meta-model has been used to develop an explicit objective function for plane steel frames subjected to dead, live, and seismic forces. The frame topology is assumed to be predefined based on architectural and functional requirements. Column and beam sections and different connection details are the main design variables in this study. Columns and beams are grouped to reduce the number of design variables and to make the problem similar to that adopted in engineering practice. Data for the implicit objective function have been generated based on analysis and assessment of many design proposals with CSI SAP software. These data have been used later in SPSS software to develop a pure quadratic nonlinear regression model for the explicit objective function. Good correlations, with coefficients R² in the range from 0.88 to 0.99, have been noted between the original implicit functions and the corresponding explicit functions generated with the meta-model.
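
A minimal sketch of the meta-modeling step, with random data standing in for the SAP-generated analysis results; "pure quadratic" is taken here to mean linear plus squared terms without cross-products.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (200, 4))     # e.g., normalized section-group design variables
# Hypothetical implicit-objective evaluations (in practice: analysis results).
y = (X @ np.array([2.0, -1.0, 0.5, 1.5])
     + (X ** 2) @ np.array([1.0, 0.3, -0.6, 0.2])
     + 0.05 * rng.standard_normal(200))

X_pure_quadratic = np.hstack([X, X ** 2])   # pure quadratic basis: x_i and x_i^2
meta_model = LinearRegression().fit(X_pure_quadratic, y)
print("R^2 of explicit meta-model:", meta_model.score(X_pure_quadratic, y))
```

Once fitted, the explicit polynomial can be handed to any off-the-shelf optimizer without re-running a structural analysis inside the search loop, which is the decoupling the abstract describes.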

Keywords: meta-model, objective function, steel frames, seismic analysis, design

Procedia PDF Downloads 217
10026 Effect of Acceptance and Commitment Therapy in Cognitive Function among Breast Cancer Patients in Eastern Country

Authors: Arunima Datta, Prathama Guha Chaudhuri, Ashis Mukhopadhyay

Abstract:

Background: Acceptance and commitment therapy (ACT) is one of the newer (third-wave) forms of therapy. It helps cancer patients increase their level of acceptance of their disease and their present situation. Breast cancer patients are known to suffer from depression and mild cognitive impairment, both of which affect their quality of life. Objectives: The present study assessed the effect of a structured ACT intervention on cognitive function and acceptance level among breast cancer patients undergoing chemotherapy. Method: Data were collected from 123 breast cancer patients who were undergoing chemotherapy, were willing to undergo psychological treatment, and had no history of past psychiatric illness. Baseline cognitive function and acceptance levels were assessed using validated tools, and the effects of sociodemographic and clinical factors on cognitive function were determined at baseline. The participants were randomly divided into two groups: experimental (ACT, 4 sessions over 2 months) and control. Cognitive function and acceptance level were measured post-intervention at a 2-month follow-up, and appropriate statistical analyses were performed to determine the effects on cognitive function and acceptance level in the two groups. Result: At baseline, the factors that significantly influenced slower task performance were ER/PR/HER2 status, the number of chemotherapy cycles, and treatment type (adjuvant or neo-adjuvant). Sociodemographic characteristics did not show any significant difference between slow and fast performance. Pre- and post-intervention analysis showed that the ACT intervention resulted in significant differences in both the speed of cognitive performance and the acceptance level. Conclusion: ACT is an effective therapeutic option for treating mild cognitive impairment and improving the acceptance level among breast cancer patients undergoing chemotherapy.

Keywords: acceptance and commitment therapy, breast cancer, quality of life, cognitive function

Procedia PDF Downloads 278
10025 Efficient Layout-Aware Pretraining for Multimodal Form Understanding

Authors: Armineh Nourbakhsh, Sameena Shah, Carolyn Rose

Abstract:

Layout-aware language models have been used to create multimodal representations for documents in image form, achieving relatively high accuracy in document understanding tasks. However, the large number of parameters in the resulting models makes building and using them prohibitive without access to high-performing processing units with large memory capacity. We propose an alternative approach that can create efficient representations without the need for a neural visual backbone. This leads to an 80% reduction in the number of parameters compared to the smallest SOTA model, widely expanding applicability. In addition, our layout embeddings are pre-trained on spatial and visual cues alone and only fused with text embeddings in downstream tasks, which can facilitate applicability to low-resource or multi-lingual domains. Despite using 2.5% of the training data, we show competitive performance on two form understanding tasks: semantic labeling and link prediction.

Keywords: layout understanding, form understanding, multimodal document understanding, bias-augmented attention

Procedia PDF Downloads 116
10024 Black Box Model and Evolutionary Fuzzy Control Methods of Coupled-Tank System

Authors: S. Yaman, S. Rostami

Abstract:

In this study, a black-box model of the coupled-tank system is obtained using fuzzy sets. The derived model is tested via an adaptive neuro-fuzzy inference system (ANFIS). In order to achieve better control performance, the parameters of three different controller types, a classical proportional-integral-derivative (PID) controller, a fuzzy PID controller, and the function tuner method, are tuned by an evolutionary computation method, the genetic algorithm. All tuned controllers are applied to the fuzzy model of the coupled-tank experimental setup and analyzed under different reference input values. According to the results, the function tuner method demonstrates better robust control performance and guarantees closed-loop stability.

Keywords: function tuner method (FTM), fuzzy modeling, fuzzy PID controller, genetic algorithm (GA)

Procedia PDF Downloads 274
10023 Modeling Exponential Growth Activity Using Technology: A Research with Bachelor of Business Administration Students

Authors: V. Vargas-Alejo, L. E. Montero-Moguel

Abstract:

Understanding the concept of function has been important in mathematics education for many years. In this study, the models built by a group of five business administration and accounting undergraduate students carrying out a population growth activity are analyzed. The theoretical framework is the Models and Modeling Perspective. The results show how the students included tables, graphs, and algebraic representations in their models, and how using technology was useful for interpreting, describing, and predicting the situation. The first model the students built to describe the situation was linear; after that, they modified and refined their ways of thinking and finally arrived at an exponential growth model. Modeling the activity was useful for deepening understanding of mathematical concepts such as covariation, rate of change, and the exponential function, and also for differentiating between linear and exponential growth.
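
A minimal numerical sketch of the two competing models, with a hypothetical 8% annual growth series standing in for the activity's population data.

```python
import numpy as np

t = np.arange(10)                                # years of observed data
pop = 100 * 1.08 ** t                            # hypothetical 8% annual growth

lin = np.polyfit(t, pop, 1)                      # linear model: P = m*t + c
exp_log = np.polyfit(t, np.log(pop), 1)          # log-linear fit => exponential
a, b = np.exp(exp_log[1]), np.exp(exp_log[0])    # exponential model: P = a * b**t

print("linear prediction at t=15:     ", np.polyval(lin, 15))
print("exponential prediction at t=15:", a * b ** 15)
print("actual value at t=15:          ", 100 * 1.08 ** 15)
```

The two fits look similar over the observed window but diverge sharply when extrapolated, which is precisely the contrast that pushes students from the linear model toward the exponential one.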

Keywords: covariation reasoning, exponential function, modeling, representations

Procedia PDF Downloads 94
10022 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spine, clinically diagnosed by the measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed best used ReLU neurons and had three hidden layers with 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
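
A minimal sketch of the best-performing configuration reported above (three hidden layers of 100 ReLU units, MSE loss, SGD with learning rate 0.01, batch size 10, early stopping), written against the modern tf.keras API rather than the TensorFlow 1.13.1 code used in the study; the flattened-input layout is our assumption.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(500 * 187,)),          # flattened 500 x 187 X-ray
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1),                    # predicted Cobb angle (degrees)
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="mse", metrics=["mae"])

# Training call, assuming x_train/y_train from the SpineWeb data:
# model.fit(x_train, y_train, batch_size=10, epochs=500, validation_split=0.1,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=10)])
```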

Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging

Procedia PDF Downloads 96
10021 Design of Membership Ranges for Fuzzy Logic Control of Refrigeration Cycle Driven by a Variable Speed Compressor

Authors: Changho Han, Jaemin Lee, Li Hua, Seokkwon Jeong

Abstract:

The design of membership function ranges in fuzzy logic control (FLC) is presented for robust control of a variable speed refrigeration system (VSRS). The criterion values of the membership function ranges can be derived from static experimental data, and two different values are offered to compare control performance. Simulations and real experiments on the VSRS were conducted to verify the validity of the designed membership functions. The experimental results showed good agreement with the simulation results, and the error change rate and its sampling time strongly affected the control performance in the transient state of the VSRS.

Keywords: variable speed refrigeration system, fuzzy logic control, membership function range, control performance

Procedia PDF Downloads 235
10020 Directional Implicit Functions in Nonsmooth Analysis

Authors: Murzabekova Gulden

Abstract:

Directional implicit functions for underdetermined nonsmooth systems are considered in terms of exhausters, a new tool of nonsmooth analysis. A method for finding an implicit function for underdetermined nonsmooth systems is proposed.

Keywords: implicit function, exhauster, nonsmooth systems

Procedia PDF Downloads 220
10019 Spatial Point Process Analysis of Dengue Fever in Tainan, Taiwan

Authors: Ya-Mei Chang

Abstract:

This research applies spatio-temporal point process methods to dengue fever data from Tainan. The spatio-temporal intensity function of the dataset is assumed to be separable. Kernel estimation is a widely used approach to estimating intensity functions, and the intensity function is very helpful for studying the relation between the spatio-temporal point process and covariates, whose effects might be nonlinear. A nonparametric smoothing estimator is used to detect nonlinearity in the covariate effects, and a fitted parametric model can then describe the influence of the covariates on dengue fever. The correlation between the data points is detected by the K-function. The results of this research could provide useful information to help the government or stakeholders make decisions.

Keywords: dengue fever, spatial point process, kernel estimation, covariate effect

Procedia PDF Downloads 324
10018 A Preliminary Development of Virtual Sight-Seeing Website for Thai Temples on Rattanakosin Island

Authors: Pijitra Jomsri

Abstract:

Currently, these sources of culture and tourist attractions are presented only in online documentary form. In order to make them more immersive, the researcher collected and presented them in the form of a virtual temple. The prototype, which is a replica of the actual location, was developed into a website that allows people who are interested in Rattanakosin Island to view it as a panoramic pan view. In this way, anyone can access the data and appreciate the beauty of Rattanakosin Island in a virtual model resembling the real place. The results of the experiment showed that the level of knowledge of the Thai temples on Rattanakosin Island increased; moreover, the users were highly satisfied with the system. It can be concluded that virtual temples can help publicize Thai arts, cultures, and travel, and can be utilized effectively.

Keywords: virtual sight-seeing, Rattanakosin Island, Thai temples, virtual temple

Procedia PDF Downloads 312