Search results for: the COS method
13519 Moral Rights: Judicial Evidence Insufficiency in the Determination of the Truth and Reasoning in Brazilian Morally Charged Cases
Authors: Rainner Roweder
Abstract:
Theme: The present paper aims to analyze the specificity of judicial evidence linked to the subjects of dignity and personality rights, otherwise known as moral rights, in the determination of the truth and the formation of judicial reasoning in cases concerning these areas. This research concerns the way courts in Brazilian domestic law search for truth and handle evidence in cases involving moral rights, which are abundant and important in Brazil. The main object of the paper is to analyze the effectiveness of evidence in the formation of judicial conviction in matters related to morally controverted rights, based on the Brazilian and, for comparison, the Latin American legal systems. In short, the rights of dignity and personality are moral. However, the evidentiary legal system expects a rational demonstration of moral rights that generates judicial conviction or persuasion. Morality, in turn, tends to be difficult or impossible to demonstrate in court, generating the problem considered in this paper, that is, the study of the problem of demonstrating morality as proof in court. In this sense, the more closely a right is linked to morality, the more difficult it is to demonstrate in court, expanding the field of judicial discretion and generating legal uncertainty. More specifically, the new personality rights, such as gender and the possibility of its alteration, further amplify the problem, being essentially intimate matters that do not fit the objective, rational evidentiary system that normally applies to other categories, such as contracts. Therefore, evidencing this legal category in court, with the level of security required by the law, is a herculean task. It becomes virtually impossible to use the same evidentiary system when judging the rights researched here, which generates the need for a new design of the evidentiary task regarding personality rights, the central effort of the present paper. Methodology: The method used in the investigation phase was inductive, combined with the comparative law method; in the data treatment phase, the inductive method was also used. Doctrinal, legislative, and jurisprudential comparison was the research technique employed. Results: In addition to the peculiar characteristics of personality rights that are not found in other rights, part of them are essentially linked to morality and are not objectively verifiable by design, so specific argumentative theories, with interdisciplinary support, are necessary for their secure confirmation. The traditional pragmatic theory of proof, having a plainly objective character, aggravates decisionism and generates legal insecurity when applied to rights linked to morality, making its reconstruction necessary for morally charged cases, with the possible use of a "predictive theory" (and predictive facts) through algorithms in data collection and treatment.
Keywords: moral rights, proof, pragmatic proof theory, insufficiency, Brazil
Procedia PDF Downloads 109
13518 Fabrication and Characterization of Ceramic Matrix Composite
Authors: Yahya Asanoglu, Celaletdin Ergun
Abstract:
Ceramic-matrix composites (CMCs) have significant prominence in various engineering applications because of their heat resistance combined with an ability to withstand brittle, catastrophic failure. In this study, specific raw materials have been chosen for the purpose of obtaining a CMC material suitable for high-temperature dielectric applications. The CMC material will be manufactured through the polymer infiltration and pyrolysis (PIP) method. During the manufacturing process, vacuum infiltration and autoclaving will be applied so as to decrease porosity and obtain higher mechanical properties, although this advantage leads to a decrease in the electrical performance of the material. Adjusting the time and temperature of the pyrolysis parameters produces a significant difference in the properties of the resulting material. The mechanical and thermal properties will be investigated, in addition to measurement of the dielectric constant and tangent loss values within the Ku-band spectrum (12 to 18 GHz). Also, XRD and TGA/DTA analyses will be employed to verify the transition of the precursor to ceramic phases and to detect critical transition temperatures. Additionally, SEM analysis of the fracture surfaces will be performed to identify failure mechanisms, such as fiber pull-out and crack deflection, which lead to ductility and toughness in the material. In this research, the cost-effectiveness and applicability of the PIP method in the manufacture of CMC materials will be demonstrated, while the optimal pyrolysis time, temperature, and number of cycles for specific materials are determined by experiment. Also, several resins will be shown to be potential raw materials for CMC radome and antenna applications. This research is distinguished from previous related papers by the fact that combinations of different precursors and fabrics are tested here to specify the unique pros and cons of each combination. In this way, it is both an experimental synthesis of previous works with unique PIP parameters and a guide to the manufacture of CMC radomes and antennas.
Keywords: CMC, PIP, precursor, quartz
Procedia PDF Downloads 160
13517 An Approach to Determine the in Transit Vibration to Fresh Produce Using Long Range Radio (LORA) Wireless Transducers
Authors: Indika Fernando, Jiangang Fei, Roger Stanely, Hossein Enshaei
Abstract:
The ever-increasing consumer demand for quality fresh produce has multiplied the pressure on post-harvest supply chains in recent years. Mechanical injury to fresh produce is a critical factor in produce wastage, especially with the expansion of supply chains physically extending over thousands of miles. Vibration damage in transit was identified as a specific area of focus, as it results in the wastage of a significant portion of fresh produce, at times ranging from 10% to 40% in some countries. Several studies have concentrated on quantifying the impact of vibration on fresh produce, and it has been a challenge to collect vibration impact data continuously due to limitations in battery life or device memory capacity. Therefore, study samples were limited to a stretch of the transit passage or a limited time of the journey. This may or may not give an accurate understanding of the vibration impacts encountered throughout the transit passage, which limits the accuracy of the results. Consequently, an approach that can extend the capacity and ability to determine vibration signals over the whole transit passage would contribute to accurately analyzing vibration damage along the post-harvest supply chain. A mechanism was developed to address this challenge, capable of measuring in-transit vibration continuously throughout the transit passage, subject to a minimum acceleration threshold (0.1 g). A system consisting of six tri-axial vibration transducers, installed at different locations inside the cargo (produce) pallets in the truck, transmits vibration signals through LORA (Long Range Radio) technology to a central device installed inside the container. The central device processes and records the vibration signals transmitted by the portable transducers, along with the GPS location. This method economizes the power consumption of the portable transducers, maximizing the capability of measuring vibration impacts over transit passages extending to days in the distribution process. Trial tests conducted using the approach reveal that it is a reliable method to measure and quantify in-transit vibrations along the supply chain. The GPS capability enables identifying the locations in the supply chain where significant vibration impacts were encountered. This method contributes to determining the causes, susceptibility, and intensity of vibration impact damage to fresh produce in the post-harvest supply chain. More broadly, the approach could be used to determine vibration impacts not only for fresh produce but for any products in supply chains whose transit may extend from a few hours to several days.
Keywords: post-harvest, supply chain, wireless transducers, LORA, fresh produce
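Below is a minimal sketch, in Python rather than firmware, of the threshold-gated logging described above: a geo-tagged sample is transmitted only when the resultant acceleration exceeds 0.1 g. The callables read_accel, read_gps, and lora_send are hypothetical stand-ins for the transducer, GPS, and LoRa radio drivers, not the authors' actual system.

```python
# Schematic sketch of threshold-gated vibration logging (illustrative only).
import math
import time

THRESHOLD_G = 0.1  # minimum acceleration magnitude worth recording, per the abstract

def magnitude(ax: float, ay: float, az: float) -> float:
    """Resultant acceleration in g from the three axes."""
    return math.sqrt(ax**2 + ay**2 + az**2)

def sample_and_report(read_accel, read_gps, lora_send):
    """Read one tri-axial sample; transmit it only above the 0.1 g threshold.

    read_accel, read_gps, and lora_send are hypothetical callables standing in
    for the transducer, GPS module, and LoRa radio drivers.
    """
    ax, ay, az = read_accel()
    g = magnitude(ax, ay, az)
    if g >= THRESHOLD_G:
        lat, lon = read_gps()
        lora_send({"t": time.time(), "g": round(g, 3), "lat": lat, "lon": lon})
```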
Procedia PDF Downloads 265
13516 Compensation Strategies and Their Effects on Employees' Motivation and Organizational Citizenship Behaviour in Some Manufacturing Companies in Lagos, Nigeria
Authors: Ade Oyedijo
Abstract:
This paper reports the findings of a study on the strategic and organizational antecedents and effects of two opposing pay patterns used by some manufacturing companies in Lagos, Nigeria, with particular reference to the behavioural correlates of the pay strategies considered. The assumed relationship between pay strategies and organizational correlates such as business and corporate strategies and firm size was considered problematic in view of their likely implications for employee motivation, citizenship behaviour, and firm performance. The survey research method was used for the study. Structured, close-ended questions were used to collect primary data from the respondents. A multipart Likert scale was used to measure the pay orientations of the respondent firms and the job and organizational involvement of the respondent employees. Utilizing the hierarchical linear regression method and t-tests to analyze the data obtained from 48 manufacturing companies of various sizes and strategies, the dominant pattern of employee compensation in the sampled manufacturing companies was identified. The study also revealed that the choice of a pay strategy was strongly influenced by organizational size as well as by the type of business and corporate-level strategies adopted by a firm. Firms pursuing a strategy of related and unrelated diversification are more likely to adopt the algorithmic compensation system than single-product firms because of their relatively larger size and scope. However, firms that pursue competitive advantage through a business-level strategy of cost efficiency are more likely to use the experiential, variable pay strategy. The study found that an algorithmic compensation strategy is as effective as an experiential compensation strategy in the promotion of organizational citizenship behaviour and the motivation of employees.
Keywords: compensation, corporate strategy, business strategy, motivation, citizenship behaviour, algorithmic, experiential, organizational commitment, work environment
Procedia PDF Downloads 391
13515 Pre-Service Mathematics Teachers' Mental Construction in Solving Equations and Inequalities Using ACE Teaching Cycle
Authors: Abera Kotu, Girma Tesema, Mitiku Tadesse
Abstract:
This study investigated ACE-supported instruction and pre-service mathematics teachers' mental construction in solving equations and inequalities. A mixed approach with a concurrent parallel design was employed. The study was conducted on two intact groups of regular first-year pre-service mathematics teachers at Fiche College of Teachers' Education, in which one group was assigned as the intervention group and the other as the comparison group using the lottery method. There were 33 participants in the intervention group and 32 in the comparison group. Six pre-service mathematics teachers were selected for interview using purposive sampling based on pre-test results. Instruction supported by the ACE cycle was given to the intervention group for a duration of two weeks. Written tasks, interviews, and observations were used to collect data. Data collected from written tasks were analyzed quantitatively using an independent-samples t-test and effect size. Data collected from interviews and observations were analyzed narratively. The findings of the study uncovered that ACE-supported instruction has a moderate effect on pre-service mathematics teachers' levels of conceptualization of action, process, object, and schema. Moreover, the ACE-supported group outscored and performed better than the group supported by the usual traditional method across the levels of conceptualization. The majority of pre-service mathematics teachers' levels of conceptualization were at the action and process levels, and their levels of conceptualization were linked with genetic decomposition more at the action and process levels than at the object and schema levels. The use of ACE-supported instruction is recommended to improve pre-service mathematics teachers' mental construction.
Keywords: ACE teaching cycle, APOS theory, mental construction, genetic decomposition
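A minimal sketch of the quantitative analysis named above (an independent-samples t-test plus an effect size), using invented scores rather than the study's data:

```python
# Independent-samples t-test and Cohen's d for intervention vs. comparison
# groups; the score lists are hypothetical placeholders.
from statistics import mean, stdev
from scipy import stats

intervention = [14, 16, 15, 18, 17, 13, 19, 16, 15, 17]  # invented post-test scores
comparison   = [12, 13, 11, 15, 14, 12, 13, 10, 14, 12]

t, p = stats.ttest_ind(intervention, comparison, equal_var=True)

# Cohen's d with a pooled standard deviation
n1, n2 = len(intervention), len(comparison)
s_pooled = (((n1 - 1) * stdev(intervention) ** 2 +
             (n2 - 1) * stdev(comparison) ** 2) / (n1 + n2 - 2)) ** 0.5
d = (mean(intervention) - mean(comparison)) / s_pooled

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```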
Procedia PDF Downloads 18
13514 Bioelectronic System for Continuous Monitoring of Cardiac Activity of Benthic Invertebrates for the Assessment of a Surface Water Quality
Authors: Sergey Kholodkevich, Tatiana Kuznetsova
Abstract:
The objective assessment of the ecological state of water ecosystems is impossible without biological methods of environmental monitoring capable of revealing, in an integrated way, changes in water quality that are negative for biota in their habitats. The biomarker approach is of considerable interest for the development of such methods of environmental quality control. Measuring systems by means of which characteristics of cardiac activity are registered have received the name bioelectronic. Bioelectronic systems are information and measuring systems in which animals (namely, benthic invertebrates) are directly included in the structure of the primary converters, being an integral part of the electronic system registering particular physiological or behavioural biomarkers. Various characteristics of the cardiac activity of selected invertebrates have been used as physiological biomarkers in the bioelectronic system. Changes in cardiac activity are considered integrative measures of the physiological condition of organisms, which reflect the state of the environment in which they dwell. The greatest successes have been achieved in the development of tools for the biological assessment of surface water quality in real time. An essential advantage of water quality bioindication with such a tool is the possibility of an integrated assessment of the biological effects of pollution on biota, as well as the rapidity of the method and the approaches used. The report discusses the authors' practical experience in biomonitoring and bioindication of the ecological condition of marine, brackish-, and freshwater areas. The authors note that the method of non-invasive monitoring of the cardiac activity of selected invertebrates can be used not only for the advancement of biomonitoring but is also useful in solving general problems of the comparative physiology of invertebrates.
Keywords: benthic invertebrates, physiological state, heart rate monitoring, water quality assessment
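As a rough illustration of how a cardiac-activity characteristic can be extracted from a digitized signal, the sketch below derives a heart rate by peak detection; this is an assumption about typical signal processing, not the authors' instrument software, and the trace is synthetic.

```python
# Deriving heart rate from a digitized cardiac signal by peak detection
# (synthetic data; illustrative only).
import numpy as np
from scipy.signal import find_peaks

fs = 100.0  # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
trace = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # ~72 bpm synthetic trace

peaks, _ = find_peaks(trace, height=0.5, distance=fs * 0.4)  # 0.4 s refractory gap
heart_rate_bpm = 60.0 * len(peaks) / (t[-1] - t[0])
print(f"estimated heart rate: {heart_rate_bpm:.1f} beats per minute")
```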
Procedia PDF Downloads 718
13513 Generation of Research Ideas Through a Matrix in the Field of International Comparative Education
Authors: Saleh Alzahrani
Abstract:
Studies in the field of international comparative education in the Arab world and the Middle East are scarce. Comparative education researchers and postgraduates face a challenge in selecting a distinguished study through which to improve their national education system, and doing so requires considerable effort. Accordingly, the matrix of scientific research in comparative and international education was designed to help specialists, researchers, and graduate students generate a variety of research ideas in this field in a short time. The matrix was built by using the content analysis method on comparative education research published in Arab journals from 1980 to 2017. Then, qualitative input with an in-depth focus analysis tool was utilized according to root theory. The matrix consists of two axes: vertical (X) and horizontal (Y). The vertical axis contains six domains, comprising 105 variables. The horizontal axis contains two fields. The first is pre-university education, incorporating educational stages and contemporary formulations, and including 23 variables. The second field is university education, in its public universities and contemporary formulas, including 15 variables. A researcher can reach topics, ideas, and research points through the matrix by selecting any subject on the vertical axis (X) from (1) to (105) and any subject on the horizontal axis (Y) from (B) to (U). The cell where the axes of the chosen fields intersect can generate an idea or a research point conveniently and easily through the words tracked by the user. These steps can be repeated to generate new ideas and research points. Many graduate researchers have been trained to use this matrix, which gave them greater potential to generate appropriate studies serving national education.
Keywords: content analysis method, comparative education, international education, matrix, root theory
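The generative step of the matrix, crossing every vertical-axis variable with every horizontal-axis variable, can be sketched as follows; the variable names are invented placeholders for the real 105 x 38 entries.

```python
# Crossing the two axes of the matrix to enumerate candidate research topics.
from itertools import product

vertical = ["teacher preparation", "curriculum evaluation", "school funding"]   # X axis (105 variables in the real matrix)
horizontal = ["primary stage", "secondary stage", "public universities"]        # Y axis (23 + 15 variables)

ideas = [f"A comparative study of {x} in {y}" for x, y in product(vertical, horizontal)]
for idea in ideas:
    print(idea)
```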
Procedia PDF Downloads 133
13512 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of the loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed during previous earthquakes, and there are many comprehensive reports about this phenomenon. One of the main reasons for the impairment of buried pipelines during earthquakes is liquefaction. The necessary conditions for this phenomenon are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipelines are structures very different from other structures (being long and of light mass), attention to the results of previous earthquakes and comparison with other structures shows that the danger of liquefaction for buried pipelines is not high unless the governing parameters, such as earthquake intensity and the looseness of the soil, are severe. Recent liquefaction research for buried pipelines includes experimental and theoretical work as well as damage investigations during actual earthquakes. The damage investigations have revealed that the damage ratio of pipelines (number/km) has much larger values in liquefied ground than in shaking ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected with manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, with a case study of the 2013 Dashti (Iran) earthquake. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were damaged severely due to the occurrence of liquefaction. The model consists of a polyethylene pipeline, 100 meters long and 0.8 meters in diameter, which is covered by light sandy soil at a burial depth of 2.5 meters from the surface. Since the finite element method has been used relatively successfully to solve geotechnical problems, we used this method for the numerical analysis. Evaluating this case requires geotechnical information, a classification of earthquake levels, a determination of the parameters governing the probability of liquefaction, and three-dimensional numerical finite element modeling of the interaction between the soil and the pipeline. The results of this study on buried pipelines indicate that the effect of liquefaction is a function of pipe diameter, soil type, and peak ground acceleration. There is a clear increase in the percentage of damage with increasing liquefaction severity. The results indicate that although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined "failures" are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. At the end, some retrofit suggestions are given in order to decrease the risk of liquefaction damage to buried pipelines.
Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
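As a side illustration of one standard quantity used when judging liquefaction potential (not taken from the paper), the sketch below computes the Seed-Idriss cyclic stress ratio for an assumed soil column at the model's 2.5 m burial depth; all input values are invented.

```python
# Seed-Idriss cyclic stress ratio (CSR), a common screening quantity for
# liquefaction potential of loose, saturated sand. Inputs are hypothetical.
def cyclic_stress_ratio(a_max_g: float, sigma_v: float, sigma_v_eff: float, depth_m: float) -> float:
    """CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * rd, with Liao-Whitman rd(z)."""
    rd = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

# Example: burial depth 2.5 m (as in the model), assumed stresses in kPa
print(cyclic_stress_ratio(a_max_g=0.35, sigma_v=45.0, sigma_v_eff=30.0, depth_m=2.5))
```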
Procedia PDF Downloads 513
13511 Hypersonic Flow of CO2-N2 Mixture around a Spacecraft during the Atmospheric Reentry
Authors: Zineddine Bouyahiaoui, Rabah Haoui
Abstract:
The aim of this work is to analyze the flow around an axisymmetric blunt body, taking into account chemical and vibrational nonequilibrium in the flow. This work concerns the entry of a spacecraft into the atmosphere of the planet Mars. Since the governing equations are nonlinear partial differential equations, the finite volume method is the only practical way to solve this problem. The choice of the mesh and of the CFL number is a condition for convergence to the stationary solution.
Keywords: blunt body, finite volume, hypersonic flow, viscous flow
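The CFL constraint mentioned above can be made concrete with a short sketch; the numerical values are illustrative assumptions, not the paper's.

```python
# For an explicit finite volume scheme, the time step must satisfy
# dt <= CFL * dx / (|u| + c), where c is the local speed of sound.
def max_stable_dt(cfl: float, dx: float, u: float, c: float) -> float:
    """Largest time step allowed by the CFL condition for a 1-D convective wave."""
    return cfl * dx / (abs(u) + c)

# Hypothetical reentry-like numbers: cell size 1 mm, flow speed 5 km/s, sound speed 800 m/s
print(max_stable_dt(cfl=0.5, dx=1e-3, u=5000.0, c=800.0))  # about 8.6e-8 s
```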
Procedia PDF Downloads 234
13510 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem for the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As an effect, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as is feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
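The AGM iteration described above can be summarized in a structural sketch; forward_nse, adjoint_nse, and laplacian_hi are placeholder callables for the forward solver, the adjoint solver, and the high-order Laplacian regularizer, so this is an outline of the loop under stated assumptions, not the authors' solver.

```python
# Structural sketch of the adjoint gradient method (AGM): minimize
# J(u0) = 0.5 * ||u1 - v1||^2 + lam * (high-order smoothness penalty on u0)
# by gradient descent, with the gradient supplied by an adjoint integration.
def agm_reverse(v1, forward_nse, adjoint_nse, laplacian_hi, lam=1e-4, lr=0.1, n_iter=200):
    """Estimate the initial field u0 whose forward evolution best matches v1.

    forward_nse(u0) -> u1 integrates the NSE from t=0 to t=1;
    adjoint_nse(residual) -> grad transports the mismatch back to t=0;
    laplacian_hi(u0) is the high-order Laplacian used as a smoothing penalty.
    """
    u0 = v1.copy()  # initial guess: the target field itself
    for _ in range(n_iter):
        u1 = forward_nse(u0)                                    # stable forward integration
        residual = u1 - v1                                      # mismatch entering J
        grad = adjoint_nse(residual) + lam * laplacian_hi(u0)   # dJ/du0
        u0 -= lr * grad                                         # gradient descent step
    return u0
```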
Procedia PDF Downloads 224
13509 Increased Cytolytic Activity of Effector T-Cells against Cholangiocarcinoma Cells by Self-Differentiated Dendritic Cells with Down-Regulation of Interleukin-10 and Transforming Growth Factor-β Receptors
Authors: Chutamas Thepmalee, Aussara Panya, Mutita Junking, Jatuporn Sujjitjoon, Nunghathai Sawasdee, Pa-Thai Yenchitsomanus
Abstract:
Cholangiocarcinoma (CCA) is an aggressive malignancy of bile duct epithelial cells for which the standard treatments, including surgery, radiotherapy, chemotherapy, and targeted therapy, are partially effective. Many solid tumors, including CCA, escape host immune responses by creating a tumor microenvironment and generating immunosuppressive cytokines such as interleukin-10 (IL-10) and transforming growth factor-β (TGF-β). These cytokines can inhibit dendritic cell (DC) differentiation and function, leading to decreased activation and response of effector CD4+ and CD8+ T cells for cancer cell elimination. To overcome the effects of these immunosuppressive cytokines and to increase the ability of DCs to activate effector CD4+ and CD8+ T cells, we generated self-differentiated DCs (SD-DCs) with down-regulation of IL-10 and TGF-β receptors. Human peripheral blood monocytes were initially transduced with lentiviral particles containing the genes encoding GM-CSF and IL-4, and then secondly transduced with lentiviral particles containing short-hairpin RNAs (shRNAs) to knock down the mRNAs of IL-10 and TGF-β receptors. The generated SD-DCs showed up-regulation of MHC class II (HLA-DR) and co-stimulatory molecules (CD40 and CD86), comparable to those of DCs generated by the conventional method. Suppression of IL-10 and TGF-β receptors on SD-DCs by specific shRNAs significantly increased levels of IFN-γ and also increased the cytolytic activity of DC-activated effector T cells against CCA cell lines (KKU-213 and KKU-100), while having little effect on immortalized cholangiocytes (MMNK-1). Thus, SD-DCs with down-regulation of IL-10 and TGF-β receptors increased the activation of effector T cells, and this is a recommended method to improve DC function in the preparation of DC-activated effector T cells for adoptive T-cell therapy.
Keywords: cholangiocarcinoma, IL-10 receptor, self-differentiated dendritic cells, TGF-β receptor
Procedia PDF Downloads 141
13508 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing
Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares
Abstract:
In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes (3D printing). These robots have advantages, such as speed and lightness, that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of a linear delta robot for additive manufacturing. Firstly, a methodology based on structured processes for the development of products through the phases of informational design, conceptual design, and detailed design is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to derive a set of technical requirements and to define the form, functions, and features of the robot. b) In the conceptual design phase, functional modeling of the system through an IDEF0 diagram is performed, and solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic, and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning, and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies, and the possible interferences between these bodies. The objective function is based on verification of the kinematic model over a prescribed cylindrical workspace, considering geometric constraints that may lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud. The evolution of the population provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis made it possible to design the mechanism of the delta robot as a function of the prescribed workspace. Finally, the implementation of the robotic platform, developed on the basis of the linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms
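A condensed sketch of the dimensional-synthesis idea follows: a genetic algorithm searches for the smallest link dimensions for which every point of a cylindrical workspace cloud passes a feasibility check. The reachable() test here is a placeholder; a real implementation would solve the linear delta inverse kinematics, and all dimensions and GA settings are invented.

```python
# Genetic algorithm over a cylindrical workspace point cloud (illustrative only).
import math
import random

def workspace_cloud(radius=0.1, height=0.2, n=300):
    pts = []
    for _ in range(n):
        r, a = radius * math.sqrt(random.random()), 2 * math.pi * random.random()
        pts.append((r * math.cos(a), r * math.sin(a), height * random.random()))
    return pts

def reachable(p, arm_len, rail_len):
    """Stand-in feasibility test; a real version solves the linear delta IK."""
    x, y, z = p
    return math.hypot(x, y) <= arm_len * 0.9 and 0.0 <= z <= rail_len

def fitness(params, cloud):
    arm_len, rail_len = params
    if not all(reachable(p, arm_len, rail_len) for p in cloud):
        return float("inf")            # infeasible: workspace not fully covered
    return arm_len + rail_len          # feasible: prefer the smallest mechanism

cloud = workspace_cloud()
pop = [(random.uniform(0.05, 0.4), random.uniform(0.1, 0.5)) for _ in range(40)]
for _ in range(60):                    # evolve: select the best, then mutate
    pop.sort(key=lambda g: fitness(g, cloud))
    parents = pop[:10]
    pop = parents + [(max(0.01, a + random.gauss(0, 0.01)),
                      max(0.01, b + random.gauss(0, 0.01)))
                     for a, b in parents for _ in range(3)]
print("best (arm, rail):", min(pop, key=lambda g: fitness(g, cloud)))
```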
Procedia PDF Downloads 190
13507 Value Chain Network: A Social Network Analysis of the Value Chain Actors of Recycled Polymer Products in Lagos Metropolis, Nigeria
Authors: Olamide Shittu, Olayinka Akanle
Abstract:
Value chain analysis is a common method of examining the stages involved in the production of a product, mostly agricultural produce, from input to consumption, including the actors involved in each stage. However, functional institutional analysis is the method most commonly employed in the literature to analyze the value chain of products. Apart from studying the relatively neglected phenomenon of recycled polymer products in Lagos Metropolis, this paper adopted social network analysis to attempt a grounded theory of the nature of the social network that exists among the value chain actors of the subject matter. The study adopted a grounded theory approach by conducting in-depth interviews, administering questionnaires, and conducting observations among the identified value chain actors of recycled polymer products in Lagos Metropolis, Nigeria. The thematic analysis of the collected data gave the researchers the background needed to formulate a truly representative network of the social relationships among the value chain actors of recycled polymer products in Lagos Metropolis. The paper introduces concepts such as transient and perennial social ties to explain the observed social relations among the actors. Some actors have more social capital than others as a result of the structural holes that exist in their triad networks. Households and resource recoverers are in a disadvantaged position in the network, as they face high constraints in their relationships with other actors. The study attempted to provide a new perspective in the study of the environmental value chain by analyzing the network of actors to bring about policy action points and improve recycling in Nigeria. Government and social entrepreneurs can exploit the structural holes that exist in the network for the socio-economic and sustainable development of the state.
Keywords: recycled polymer products, social network analysis, social ties, value chain analysis
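The structural-hole reasoning above can be quantified with Burt's constraint measure, as in the sketch below; the toy network and actor names are invented, not the study's data.

```python
# Burt's constraint on a toy value chain network: higher constraint means an
# actor has fewer structural holes to broker (the disadvantaged position the
# abstract describes for households and resource recoverers).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("household", "resource_recoverer"),
    ("resource_recoverer", "aggregator"),
    ("aggregator", "recycler"),
    ("recycler", "manufacturer"),
    ("aggregator", "manufacturer"),   # a tie that closes a triad
])

constraint = nx.constraint(G)
for actor, c in sorted(constraint.items(), key=lambda kv: -kv[1]):
    print(f"{actor:20s} constraint = {c:.2f}")
```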
Procedia PDF Downloads 410
13506 Experimental Modal Analysis of a Suspended Composite Beam
Authors: Lahmar Lahbib, Abdeldjebar Rabiâ, Moudden B., Missoum L.
Abstract:
Vibration tests are used to identify the elasticity modulus in two directions. This strategy is applied to glass/polyester composite materials. Experimental results obtained on a specimen in free vibration showed the efficiency of this method. The obtained results were validated by comparison with results stemming from static tests.
Keywords: beam, characterization, composite, elasticity modulus, vibration
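As a worked illustration of identifying an elasticity modulus from a free-vibration test, the sketch below inverts the first free-free bending mode of an Euler-Bernoulli beam; the specimen dimensions, density, and measured frequency are assumed values, not the paper's measurements.

```python
# Back out E from a measured free-free bending frequency:
# E = (2*pi*f * L^2 / (beta_l)^2)^2 * rho * A / I, with beta_l = 4.7300
# for the first free-free mode of a uniform beam.
import math

def modulus_from_frequency(f_hz, L, b, h, rho, beta_l=4.7300):
    """First free-free bending mode of a rectangular Euler-Bernoulli beam."""
    A = b * h                    # cross-section area
    I = b * h**3 / 12.0          # second moment of area
    omega = 2 * math.pi * f_hz
    return (omega * L**2 / beta_l**2) ** 2 * rho * A / I

# Hypothetical glass/polyester specimen: 300 x 25 x 4 mm, 1800 kg/m^3, f1 = 180 Hz
E = modulus_from_frequency(180.0, 0.30, 0.025, 0.004, 1800.0)
print(f"E = {E / 1e9:.1f} GPa")   # about 28 GPa for these assumed inputs
```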
Procedia PDF Downloads 463
13505 Fabricating Method for Complex 3D Microfluidic Channel Using Soluble Wax Mold
Authors: Kyunghun Kang, Sangwoo Oh, Yongha Hwang
Abstract:
PDMS (polydimethylsiloxane)-based microfluidic devices have recently been applied in biomedical research, tissue engineering, and diagnostics, because PDMS is low cost, nontoxic, optically transparent, gas-permeable, and, above all, biocompatible. Generally, PDMS microfluidic devices are fabricated by conventional soft lithography. Such microfabrication requires expensive cleanroom facilities and a lot of time, and only two-dimensional or simple three-dimensional structures can be fabricated. In this study, we introduce a fabrication method for complex three-dimensional microfluidic channels using a soluble wax mold. Using the 3D printing technique, we first fabricated a three-dimensional mold consisting of a soluble wax material. The PDMS pre-polymer is then cast around the mold and cured. The three-dimensional casting mold was removed from the PDMS by chemically dissolving it with methanol and acetone. In this work, two preliminary experiments were carried out. Firstly, the solubility of several waxes was tested using various solvents, such as acetone, methanol, hexane, and IPA. We found wax-solvent combinations in which the solvent dissolves the wax. Next, the side effects of the solvents were investigated during the curing process of the PDMS pre-polymer. While some solvents caused the PDMS to swell drastically, methanol and acetone caused the PDMS to swell by only 2% and 6%, respectively. Thus, methanol and acetone can be used to dissolve wax in PDMS without any serious impact. Based on the preliminary tests, three-dimensional PDMS microfluidic channels were fabricated using a mold printed on a 3D printer. With the proposed fabrication technique, PDMS-based microfluidic devices have the advantages of fast prototyping, low cost, and optical transparency, as well as complex three-dimensional geometry. Acknowledgements: This research was supported by a Korea University Grant and by the Basic Science Research Program through the National Research Foundation of Korea (NRF).
Keywords: microfluidic channel, polydimethylsiloxane, 3D printing, casting
Procedia PDF Downloads 274
13504 A Low Cost Education Proposal Using Strain Gauges and Arduino to Develop a Balance
Authors: Thais Cavalheri Santos, Pedro Jose Gabriel Ferreira, Alexandre Daliberto Frugoli, Lucio Leonardo, Pedro Americo Frugoli
Abstract:
This paper presents a low-cost educational proposal to be used in engineering courses. Engineering education carried out with quality and affordability in the universities of a developing country in need of an increasing number of engineers poses a difficult problem to solve. In Brazil, the political and economic scenario requires academic managers able to reduce costs without compromising the quality of education. Within this context, a method for teaching physics principles through the construction of an electronic balance is proposed. First, a method to develop and construct a load cell is proposed, through which the students can understand the physical principles of strain gauges and bridge circuits. The load cell structure was made of 6351-T6 aluminum, with dimensions of 80 mm x 13 mm x 13 mm, and for its instrumentation a complete Wheatstone bridge was assembled with 350-ohm strain gauges. Additionally, the process involves the use of a software tool to document the prototypes (circuit design), signal conditioning, a microcontroller, C language programming, as well as the development of the prototype. The project also uses an open-source I/O board (the Arduino microcontroller). To design the circuit, the Fritzing software is used and, to program the controller, the open-source Arduino IDE. A load cell was chosen because strain gauges are accurate and have several applications in industry. A prototype was developed for this study, and it confirmed the affordability of this educational idea. Furthermore, the goal of this proposal is to motivate the students to understand the many possible high-technology applications of load cells and microcontrollers.
Keywords: Arduino, load cell, low-cost education, strain gauge
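A small sketch of the balance arithmetic follows (in Python for consistency with the other examples here, although the target board is an Arduino programmed in C): the full-bridge output voltage is converted back to strain and then to mass. The gauge factor, amplifier gain, and calibration constant are invented assumptions.

```python
# Full Wheatstone bridge of four 350-ohm gauges: convert the amplified bridge
# voltage to strain, then to mass via a single-point calibration (toy values).
GF = 2.0          # gauge factor, typical foil gauge (assumed)
V_EXC = 5.0       # bridge excitation voltage, V
GAIN = 128.0      # instrumentation amplifier gain (assumed)

def bridge_strain(v_out: float) -> float:
    """Full-bridge small-strain relation: v_out = GAIN * V_EXC * GF * strain."""
    return v_out / (GAIN * V_EXC * GF)

def mass_kg(v_out: float, cal_kg_per_strain: float = 250.0) -> float:
    """Convert measured strain to mass using a calibration constant."""
    return bridge_strain(v_out) * cal_kg_per_strain

print(mass_kg(v_out=0.512))  # e.g. an ADC-derived amplifier output of 0.512 V
```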
Procedia PDF Downloads 303
13503 Gaming Mouse Redesign Based on Evaluation of Pragmatic and Hedonic Aspects of User Experience
Authors: Thedy Yogasara, Fredy Agus
Abstract:
In designing a product, it is now crucial to focus not only on the product's usability, based on performance measures, but also on user experience (UX), which includes the pragmatic and hedonic aspects of product use. These aspects play a significant role in the fulfillment of user needs, both functionally and psychologically. Pragmatic quality refers to a product's perceived ability to support the fulfillment of behavioral goals. It is closely linked to the functionality and usability of the product. In contrast, hedonic quality is a product's perceived ability to support the fulfillment of psychological needs. Hedonic quality relates to the pleasure of ownership and use of the product, including stimulation for personal development and the communication of the user's identity to others through the product. This study evaluates the pragmatic and hedonic aspects of the G600 and Razer Krait gaming mice using the AttrakDiff tool, in order to create an improved design able to generate positive UX. AttrakDiff is a method that measures the pragmatic and hedonic scores of a product on a scale from -3 to +3 through four attributes (Pragmatic Quality, Hedonic Quality-Identification, Hedonic Quality-Stimulation, and Attractiveness), represented by 28 pairs of opposite words. Based on data gathered from 15 participants, it was identified that the G600 gaming mouse needs to be redesigned because of its low scores (pragmatic score: -0.838, hedonic score: 1, attractiveness score: 0.771). The redesign process focuses on the attributes with poor scores and takes into account improvement suggestions collected from interviews with the participants. The redesigned G600 mouse was evaluated using the previous method. The result shows higher scores in pragmatic quality (1.929), hedonic quality (1.703), and attractiveness (1.667), indicating that the redesigned mouse is more capable of creating a pleasurable experience of product use.
Keywords: AttrakDiff, hedonic aspect, pragmatic aspect, product design, user experience
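AttrakDiff-style scoring reduces to averaging the -3 to +3 word-pair ratings per dimension, as the sketch below shows; the ratings and the seven-items-per-attribute grouping are invented for illustration.

```python
# Averaging word-pair ratings per AttrakDiff dimension (invented data).
from statistics import mean

ratings = {
    "PQ":   [1, -2, -1, 0, -1, -2, 1],    # pragmatic quality
    "HQ-I": [2, 1, 1, 2, 0, 1, 2],        # hedonic quality: identification
    "HQ-S": [1, 1, 2, 1, 1, 0, 1],        # hedonic quality: stimulation
    "ATT":  [1, 0, 1, 1, 0, 1, 2],        # attractiveness
}

scores = {dim: mean(items) for dim, items in ratings.items()}
hedonic = mean([scores["HQ-I"], scores["HQ-S"]])
print(scores, "overall hedonic:", round(hedonic, 2))
```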
Procedia PDF Downloads 157
13502 A Paradigm Shift in Patent Protection-Protecting Methods of Doing Business: Implications for Economic Development in Africa
Authors: Odirachukwu S. Mwim, Tana Pistorius
Abstract:
Since the early 1990s, political and economic pressure has been mounted on policymakers and lawmakers to increase patent protection by raising protection standards. The perception of the relation between patent protection and development, particularly economic development, has evolved significantly in the past few years. Debate on patent protection in the international arena has been significantly influenced by the perception that there is a strong link between patent protection and economic development, with the level of patent protection determining the extent of development that can be achieved. Recently there has been a paradigm shift, with much emphasis on extending patent protection to methods of doing business, generally referred to as business method patenting (BMP). The general perception among international organizations and the private sector likewise indicates a strong correlation between BMP protection and economic growth. There are two diametrically opposed views of the relation between intellectual property (IP) protection and development and innovation. One school of thought promotes the view that IP protection improves economic development through the stimulation of innovation and creativity. The other school advances the view that IP protection is unnecessary for the stimulation of innovation and creativity and is in fact a hindrance to open access to the resources and information required for innovative and creative modalities. Therefore, different theories and policies attach different levels of protection to BMP, with specific implications for economic growth. This study examines the impact of BMP protection on development by focusing on the challenges confronting economic growth in African communities as a result of the new paradigm in patent law. (Africa is treated as a single unit in this study, but this should not be construed as African homogeneity; rather, the views advanced in this study address the common challenges facing many communities in Africa.) The study reviews the relevant literature, patent legislation (particularly international treaties), policies, and legal judgments from the points of view of legal philosophers, policymakers, and the decisions of competent courts. Findings from this study suggest that, over and above the various criticisms levelled against the extremely liberal approach to the recognition of business methods as patentable subject matter, there are other specific implications associated with such an approach. The most critical implication of extending patent protection to business methods is the locking-up of knowledge, which may hamper human development in general and economic development in particular. Locking up knowledge necessary for economic advancement and competitiveness may have a negative effect on economic growth by promoting economic exclusion, particularly in African communities. This study suggests that knowledge of BMP within the African context, and of the extent of protection linked to it, is crucial to achieving sustainable economic growth in Africa. It also suggests that a balance be struck between the two diametrically opposed views.
Keywords: Africa, business method patenting, economic growth, intellectual property, patent protection
Procedia PDF Downloads 127
13501 CO₂ Absorption Studies Using Amine Solvents with Fourier Transform Infrared Analysis
Authors: Avoseh Funmilola, Osman Khalid, Wayne Nelson, Paramespri Naidoo, Deresh Ramjugernath
Abstract:
The increasing global atmospheric temperature is of great concern, and this has led to the development of technologies to reduce the emission of greenhouse gases into the atmosphere. Flue gas emissions from fossil fuel combustion are major sources of greenhouse gases. One of the ways to reduce the emission of CO₂ from flue gases is the post-combustion capture process, in which the gas is absorbed into suitable chemical solvents before being emitted into the atmosphere. Alkanolamines are promising solvents for this capture process. The vapour-liquid equilibrium of CO₂-alkanolamine systems is often represented by the CO₂ loading and the partial pressure of CO₂, without considering the liquid phase. The liquid phase of this system is a complex one, comprising nine species. Online analysis of the process is important in order to monitor the concentrations of the reacting and product species in the liquid phase. Liquid phase analysis of CO₂-diethanolamine (DEA) solutions was performed by attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy. A robust calibration was performed for the CO₂-aqueous DEA system prior to the online monitoring experiment. The partial least squares (PLS) regression method was used for the analysis of the calibration spectra obtained. The models obtained were used for prediction of the DEA and CO₂ concentrations in the online monitoring experiment. The experiment was performed with a newly built recirculating experimental setup in the laboratory, consisting of a 750 ml equilibrium cell and an ATR-FTIR liquid flow cell. Measurements were performed at 40.0°C. The results obtained indicate that FTIR spectroscopy combined with the PLS regression method is an effective tool for the online monitoring of speciation.
Keywords: ATR-FTIR, CO₂ capture, online analysis, PLS regression
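The chemometric step named above (PLS regression mapping ATR-FTIR spectra to concentrations) can be sketched as follows with synthetic spectra; scikit-learn's PLSRegression stands in for whatever software the authors actually used, and all shapes and values are assumptions.

```python
# Fit a PLS model from absorbance spectra to [DEA, CO2] concentrations
# (synthetic data; illustrative only).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 40, 200
concentrations = rng.uniform(0.0, 3.0, size=(n_samples, 2))   # [DEA, CO2] in mol/L
pure_spectra = rng.random((2, n_wavenumbers))                  # stand-in component spectra
spectra = concentrations @ pure_spectra + 0.01 * rng.standard_normal((n_samples, n_wavenumbers))

pls = PLSRegression(n_components=4)
pls.fit(spectra, concentrations)
predicted = pls.predict(spectra[:3])
print(predicted)  # predicted [DEA, CO2] for the first three spectra
```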
Procedia PDF Downloads 198
13500 A Comparative Study of Cognitive Functions in Relapsing-Remitting Multiple Sclerosis Patients, Secondary-Progressive Multiple Sclerosis Patients and Normal People
Authors: Alireza Pirkhaefi
Abstract:
Background: Multiple sclerosis (MS) is one of the most common diseases of the central nervous system (brain and spinal cord). Given the importance of cognitive disorders in patients with multiple sclerosis, the present study compared cognitive functions (working memory, attention and centralization, and visual-spatial perception) in patients with relapsing-remitting multiple sclerosis (RRMS) and secondary progressive multiple sclerosis (SPMS). Method: The present study was performed as a retrospective study, conducted with the ex post facto method. The sample consisted of 60 patients with multiple sclerosis (30 relapsing-remitting and 30 secondary progressive), selected by convenience sampling from a Tehran community of supported MS patients. Thirty normal persons were also selected as a comparison group. The Montreal Cognitive Assessment (MoCA) was used to assess cognitive functions. Data were analyzed using multivariate analysis of variance. Results: The results showed that there were significant differences in cognitive functioning among patients with RRMS, patients with SPMS, and normal individuals. There were no significant differences in working memory between the two groups of patients with RRMS and SPMS, while significant differences in this variable were seen between the two patient groups and normal individuals. Also, the results showed significant differences in attention and centralization and in visual-spatial perception among the three groups. Conclusions: The results showed that there are differences between the cognitive functions of RRMS and SPMS patients, such that the functions of RRMS patients are better than those of SPMS patients. These results can play a critical role in improving cognitive functions, reducing the factors causing disability due to cognitive impairment, and, especially, improving the overall health of society.
Keywords: multiple sclerosis, cognitive function, secondary-progressive, normal subjects
Procedia PDF Downloads 239
13498 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters, and 1,281 are temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Network Multilayer Perceptron (ANN-MLP), Artificial Neural Network Radial Basis Function (ANN-RBF), Logistic Regression (LR), and finally Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results when the best of each technique was compared. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated for each parameter, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed, in terms of false positives, by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed in view of the results found, and an overview of what was presented is given in the conclusion of this study.
Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
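A compact sketch of the reported protocol (one-against-all classification of the three credit classes, compared on accuracy) is given below on synthetic data; note that scikit-learn has no RBF network, so an RBF-kernel SVM stands in for ANN-RBF, and every parameter here is a placeholder rather than the paper's tuned setting.

```python
# One-vs-all comparison of MLP, RBF-kernel SVM and logistic regression on a
# synthetic three-class, 15-attribute credit dataset (illustrative only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ANN-MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0),
    "SVM (RBF kernel)": SVC(kernel="rbf", gamma="scale"),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    clf = OneVsRestClassifier(model).fit(X_tr, y_tr)
    print(f"{name}: accuracy = {clf.score(X_te, y_te):.3f}")
```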
Procedia PDF Downloads 103
13497 The Phenomenon of Rockfall in the Traceca Corridor and the Choice of Engineering Measures to Combat It
Authors: I. Iremashvili, I. Pirtskhalaishvili, K. Kiknadze, F. Lortkipanidze
Abstract:
The paper deals with the causes of rockfall and its possible consequences on slopes adjacent to motorways and railways. A list of measures that hinder rockfall is given; these measures are directed at protecting roads from rockfalls, not at preventing them. From the standpoint of the local stability of slopes, the main effective measure is perhaps strengthening their surface by the method of filling, which will check or end (or both) the processes of deformation, local slipping off, sliding off, and the development of erosion.
Keywords: rockfall, concrete spraying, heliodevices, railways
Procedia PDF Downloads 374
13496 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers
Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy
Abstract:
In urban planning, an increasing number of cities require wind analysis to verify the comfort of public spaces and the surroundings of buildings. These studies are made using computational fluid dynamics (CFD) simulation. However, this technique is often based on wind information taken from meteorological stations located several kilometers from the spot of analysis. The approximate input data on the project surroundings produce imprecise results for this type of analysis; they can only be used to get the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasonic anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, these devices generate geo-localized data on wind, such as speed, temperature, and pressure, and allow us to compare wind behavior at a specific site or building. These Netatmo-type anemometers communicate by Wi-Fi with central equipment, which shares the data acquired by a wide variety of devices, such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data from any type of site, which can be fed back into the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (such as STAR-CCM+ software) and thereby the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases connected ultrasonic anemometers that were installed for an 18-month survey at four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typologies and buildings allows different ways of capturing wind energy to be considered. The objective of this approach is to categorize the different types of wind in urban areas. This, particularly the identification of the minimum and maximum wind spectrum, helps define the choice and performance of the wind energy capturing devices that could be installed there, in view of the location on the roof of a building, the type of wind, the altimetry of the device in relation to the levels of the roofs, and the potential nuisances generated. The method makes it possible to identify the characteristics of wind turbines in order to maximize their performance in an urban site with turbulent wind.
Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology
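Reducing the array's geo-localized records to a per-site wind spectrum (minimum, mean, and maximum speed, the quantities used above to size energy-harvesting devices) might look like the following sketch; the records are invented.

```python
# Per-site aggregation of connected-anemometer measurements (toy data).
from collections import defaultdict
from statistics import mean

records = [  # (site, wind speed in m/s) as reported by the anemometer array
    ("roof_A", 2.1), ("roof_A", 5.4), ("roof_A", 3.3),
    ("roof_B", 7.8), ("roof_B", 6.2), ("roof_B", 9.1),
]

by_site = defaultdict(list)
for site, speed in records:
    by_site[site].append(speed)

for site, speeds in by_site.items():
    print(f"{site}: min={min(speeds):.1f} mean={mean(speeds):.1f} max={max(speeds):.1f} m/s")
```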
Procedia PDF Downloads 101
13495 Amblyopia and Eccentric Fixation
Authors: Kristine Kalnica-Dorosenko, Aiga Svede
Abstract:
Amblyopia, or 'lazy eye', is impaired or dim vision without an obvious defect or change in the eye. It is often associated with abnormal visual experience, most commonly strabismus, anisometropia or both, and form deprivation. The main task of amblyopia treatment is to ameliorate the etiological factors so as to create a clear retinal image and to ensure the participation of the amblyopic eye in the visual process. The treatment of amblyopia and eccentric fixation is usually associated with problems in the therapy. Eccentric fixation is present in around 44% of all patients with amblyopia and in 30% of patients with strabismic amblyopia. In Latvia, amblyopia is carefully treated in various clinics, but the diagnosis of eccentricity is relatively rare. The conflict that has developed regarding the relationship between the visual disorder and the degree of eccentric fixation in amblyopia should be rethought, because it has an important bearing on the cause and treatment of amblyopia and on the role of eccentric fixation in this case. Visuoscopy is the method most frequently used for the determination of eccentric fixation. In traditional visuoscopy, a fixation target is projected onto the patient's retina, and the patient is asked to look straight at the center of the target. An optometrist then observes the point on the macula used for fixation. This objective test provides clinicians with direct observation of the fixation point of the eye. It requires patients to voluntarily fixate the target, and it assumes that the foveal reflex accurately demarcates the center of the foveal pit. In the end, by having a very simple method to evaluate fixation, it is possible to indirectly evaluate treatment improvement, as eccentric fixation is always associated with reduced visual acuity. Thus, one may expect that if eccentric fixation is found in an amblyopic eye with visuoscopy, then visual acuity should be less than 1.0 (in decimal units). With occlusion or another amblyopia therapy, one would expect both visual acuity and fixation to improve simultaneously, that is, fixation would become more central. Consequently, improvement in the fixation pattern with treatment is an indirect measurement of improvement in visual acuity. Evaluation of eccentric fixation may be helpful in identifying amblyopia in children prior to the measurement of visual acuity. This is very important because the earlier amblyopia is diagnosed, the better the chance of improving visual acuity.
Keywords: amblyopia, eccentric fixation, visual acuity, visuoscopy
Procedia PDF Downloads 158
13494 Lexical Based Method for Opinion Detection on Tripadvisor Collection
Authors: Faiza Belbachir, Thibault Schienhinski
Abstract:
The massive development of online social networks allows users to post and share their opinions on various topics. With this huge volume of opinion, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommendation systems. This is why opinion detection is one of the most important research tasks. It consists in differentiating between opinion data and factual data. The difficulty of this task lies in determining an approach that returns opinionated documents. Generally, two approaches are used for opinion detection, i.e., lexical-based approaches and machine-learning-based approaches. In lexical-based approaches, a dictionary of sentiment words is used, and words are associated with weights. The opinion score of a document is derived from the occurrences of words from this dictionary. In machine learning approaches, a classifier is usually trained using a set of annotated documents containing sentiment, and features such as n-grams of words, part-of-speech tags, and logical forms. The majority of these works are based on document text to determine the opinion score but do not take into account whether these texts are really reliable. Thus, it is interesting to exploit other information to improve opinion detection. In our work, we develop a new way to consider the opinion score. We introduce the notion of a trust score. We determine opinionated documents, but also whether these opinions are really trustworthy information in relation to the topics. For that, we use the SentiWordNet lexicon to calculate opinion and trust scores, and we compute different features about users (the number of their comments, the number of their useful comments, and the average usefulness of their reviews). After that, we combine the opinion score and the trust score to obtain a final score. We applied our method to detect trusted opinions in the Tripadvisor collection. Our experimental results report that the combination of opinion score and trust score improves opinion detection.
Keywords: Tripadvisor, opinion detection, SentiWordNet, trust score
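One plausible reading of the score combination described above is sketched below; the lexicon entries, the feature weights, and the product combination rule are assumptions for illustration, not the paper's exact formulas.

```python
# Combine a lexicon-derived opinion score with a user trust score built from
# the review-usefulness features named in the abstract (toy values throughout).
def opinion_score(tokens, lexicon):
    """Mean sentiment weight of the tokens found in the lexicon (SentiWordNet-style)."""
    weights = [lexicon[t] for t in tokens if t in lexicon]
    return sum(weights) / len(weights) if weights else 0.0

def trust_score(n_comments, n_useful, avg_usefulness):
    """Toy normalization of the user features into [0, 1]."""
    return min(1.0, 0.4 * min(n_comments, 50) / 50
                    + 0.4 * min(n_useful, 50) / 50
                    + 0.2 * avg_usefulness)

lexicon = {"great": 0.8, "terrible": -0.9, "clean": 0.5, "noisy": -0.4}  # stand-in entries
doc = "great clean room but noisy street".split()

o = opinion_score(doc, lexicon)
t = trust_score(n_comments=23, n_useful=12, avg_usefulness=0.7)
final = o * t  # one plausible combination rule; the paper's formula may differ
print(f"opinion={o:.2f} trust={t:.2f} final={final:.2f}")
```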
Procedia PDF Downloads 199
13494 Damage-Based Seismic Design and Evaluation of Reinforced Concrete Bridges
Authors: Ping-Hsiung Wang, Kuo-Chun Chang
Abstract:
There has been a common trend worldwide in the seismic design and evaluation of bridges towards performance-based methods, where the lateral displacement or the displacement ductility of the bridge column is regarded as an important indicator for performance assessment. However, the seismic response of a bridge to an earthquake is a combined result of cyclic displacements and accumulated energy dissipation, both of which damage the bridge, and hence the lateral displacement (ductility) alone is insufficient to tell its actual seismic performance. This study proposes a damage-based seismic design and evaluation method for reinforced concrete bridges on the basis of newly developed capacity-based inelastic displacement spectra. The capacity-based inelastic displacement spectra, which comprise an inelastic displacement ratio spectrum and a corresponding damage state spectrum, were constructed using a series of nonlinear time history analyses and a versatile, smooth hysteresis model. The smooth model takes into account the effects of various design parameters of RC bridge columns and correlates the column's strength deterioration with the Park and Ang damage index. It was shown that the damage index not only accurately predicts the onset of strength deterioration but also serves as a good indicator of the actual visible damage condition of the column regardless of its loading history (i.e., similar damage indices correspond to similar actual damage conditions for identically designed columns subjected to very different cyclic loading protocols as well as earthquake loading), providing better insight into the seismic performance of bridges. Moreover, the computed spectra show that the inelastic displacement ratio for far-field ground motions approximately conforms to the equal-displacement rule when the structural period is longer than around 0.8 s, whereas that for near-fault ground motions departs from the rule over the whole considered spectral region. Furthermore, near-fault ground motions lead to significantly greater inelastic displacement ratios and damage indices than far-field ground motions, and most practical design scenarios cannot survive the considered near-fault ground motion when the strength reduction factor of the bridge is 5.0 or greater. Finally, a spectrum formula is presented as a function of structural period, strength reduction factor, and various column design parameters for far-field and near-fault ground motions by means of regression analysis of the computed spectra. Based on the developed spectrum formula, a design example of a bridge is presented to illustrate the proposed damage-based seismic design and evaluation method, where the damage state of the bridge is used as the performance objective.Keywords: damage index, far-field, near-fault, reinforced concrete bridge, seismic design and evaluation
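The Park and Ang damage index referenced above combines peak displacement demand with accumulated hysteretic energy: DI = delta_max/delta_u + beta * E_h / (Q_y * delta_u). The Python sketch below computes it from a response history; the elastic-perfectly-plastic model and the column parameters are generic placeholders, not the paper's smooth hysteresis model or design values.

```python
import numpy as np

def elastoplastic_force(disp, k=2.0e6, q_y=50e3):
    """Path-dependent elastic-perfectly-plastic restoring force (placeholder model)."""
    f = np.zeros_like(disp)
    for i in range(1, len(disp)):
        trial = f[i - 1] + k * (disp[i] - disp[i - 1])  # elastic trial step
        f[i] = np.clip(trial, -q_y, q_y)                # cap at yield strength
    return f

def park_ang_damage_index(disp, force, delta_u, q_y, beta=0.1):
    """DI = delta_max/delta_u + beta * E_h / (q_y * delta_u); beta is a typical value."""
    delta_max = np.max(np.abs(disp))
    # hysteretic energy: trapezoidal integral of force over displacement;
    # over closed cycles this approximates the dissipated energy
    e_h = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(disp))
    return delta_max / delta_u + beta * e_h / (q_y * delta_u)

t = np.linspace(0, 6 * np.pi, 3000)
disp = 0.06 * np.sin(t) * (t / t[-1])   # growing cyclic displacement history (m)
force = elastoplastic_force(disp)       # restoring force history (N)
di = park_ang_damage_index(disp, force, delta_u=0.10, q_y=50e3)
print(f"Park-Ang damage index: {di:.2f}")  # values near 1.0 indicate severe damage
```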
Procedia PDF Downloads 125
13493 Low Temperature Solution Processed Solar Cell Based on ITO/PbS/PbS:Bi3+ Heterojunction
Authors: M. Chavez, H. Juarez, M. Pacio, O. Portillo
Abstract:
PbS chemical bath heterojunction solar cells have shown significant improvements in performance. Here we demonstrate a solar cell based on the heterojunction formed between a PbS layer and a PbS:Bi3+ thin film, both deposited via a solution process at 40°C. The device achieves a current density of 4 mA/cm2. This simple, low-cost deposition method for PbS:Bi3+ films is promising for device fabrication.Keywords: PbS doped, Bismuth, solar cell, thin films
Procedia PDF Downloads 553
13492 Automatic Near-Infrared Image Colorization Using Synthetic Images
Authors: Yoganathan Karthik, Guhanathan Poravi
Abstract:
Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data
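A minimal pix2pix-style sketch of the GAN mapping described above, written in PyTorch; the architecture, loss weights, and the synthetic paired batch are illustrative assumptions, not the authors' actual network or dataset.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 1-channel NIR image to a 3-channel RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),  # RGB in [-1, 1]
        )
    def forward(self, nir):
        return self.net(nir)

class Discriminator(nn.Module):
    """PatchGAN-style critic over concatenated (NIR, RGB) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )
    def forward(self, nir, rgb):
        return self.net(torch.cat([nir, rgb], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

nir = torch.randn(4, 1, 64, 64)  # synthetic NIR batch (placeholder)
rgb = torch.randn(4, 3, 64, 64)  # paired "ground truth" colors (placeholder)

# one training step: discriminator on real vs. detached fake pairs
fake = G(nir)
d_real, d_fake = D(nir, rgb), D(nir, fake.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# generator: fool the critic while an L1 term keeps colors faithful
d_fake = D(nir, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, rgb)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```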
Procedia PDF Downloads 44
13491 A Gamification Teaching Method for Software Measurement Process
Authors: Lennon Furtado, Sandro Oliveira
Abstract:
The importance of an effective measurement program lies in the ability to control and predict what can be measured. A measurement program thus provides a basis for decision-making in support of an organization's interests. An effective measurement program can therefore only be implemented with a team of software engineers well trained in measurement. However, the literature indicates that few computer science courses include the software measurement process in their curricula, and even those that do generally present only basic theoretical concepts, with little or no measurement in practice, which leaves students unmotivated to learn the measurement process. In this context, according to some experts in software process improvement, one of the most widely used approaches to maintaining motivation and commitment to a software process improvement program is gamification. This paper therefore presents a proposal for teaching the measurement process through gamification, which seeks to improve student motivation and performance in assimilating tasks related to software measurement by incorporating game elements into the practice of the measurement process, making it more attractive for learning. To validate the proposal, a comparison will be made between two distinct groups of 20 students from a Software Quality class: a control group of students who will not use the gamification proposal to learn the software measurement process, and an experiment group of students who will. The paper will analyze the objective and subjective results of each group: as the objective result, the grade each student reaches at the end of the course; as the subjective result, a post-course questionnaire collecting each student's opinion about the teaching method. Finally, this paper aims to confirm or refute the following hypothesis: the gamification proposal for teaching the software measurement process appropriately motivates students and gives them the competence necessary for the practical application of the measurement process.Keywords: education, gamification, software measurement process, software engineering
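A sketch of how the planned objective comparison could be analyzed, assuming made-up final grades for the two groups of 20 students; a two-sample t-test is one plausible choice, not necessarily the analysis the authors intend.

```python
from scipy import stats

# placeholder final grades (0-10 scale assumed) for the two groups of 20 students
control = [6.1, 5.8, 7.0, 6.5, 5.5, 6.8, 7.2, 6.0, 5.9, 6.4,
           6.7, 6.2, 5.6, 7.1, 6.3, 6.6, 5.7, 6.9, 6.1, 6.0]
experiment = [7.4, 6.9, 8.1, 7.6, 6.8, 7.9, 8.3, 7.1, 7.0, 7.5,
              7.8, 7.3, 6.7, 8.2, 7.4, 7.7, 6.8, 8.0, 7.2, 7.1]

# two-sample t-test: does gamified teaching change the mean final grade?
t_stat, p_value = stats.ttest_ind(experiment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```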
Procedia PDF Downloads 314
13490 Uniqueness of Fingerprint Biometrics to Human Dynasty: A Review
Authors: Siddharatha Sharma
Abstract:
With the advent of technology and machines, biometrics is taking an important place in society for secure living. Security issues are a major concern in today's world and continue to grow in intensity and complexity. Biometrics-based recognition, which involves precise measurement of the characteristics of living beings, is not a new method. Fingerprints have been used for many years by law enforcement and forensic agencies to identify culprits and apprehend them. Biometrics is based on four basic principles: (i) uniqueness, (ii) accuracy, (iii) permanency, and (iv) peculiarity. In today's world, fingerprints are the most popular and unique biometric method, claiming a social benefit in government-sponsored programs; a remarkable example is UIDAI (Unique Identification Authority of India) in India. In the case of fingerprint biometrics, matching accuracy is very high; it has been observed empirically that even identical twins do not have similar prints. With the passage of time, there has been immense progress in sensing techniques, computational speed, operating environments, and storage capabilities, and the technology has become more convenient for users. Only a small fraction of the population may be unsuitable for automatic identification because of genetic factors, aging, or environmental or occupational reasons, for example, workers whose hands bear cuts and bruises that keep their fingerprints changing. Fingerprints are limited to human beings because of the presence of volar skin with corrugated ridges, which is unique to this species. Fingerprint biometrics has proved to be a high-level authentication system for identifying human beings. It nevertheless has limitations: for example, it may be inefficient and ineffective when the ridges of the fingers or palm are moist, making authentication difficult. This paper focuses on the uniqueness of fingerprints to human beings in comparison to other living beings and reviews the advancement of emerging technologies and their limitations.Keywords: fingerprinting, biometrics, human beings, authentication
Procedia PDF Downloads 325