Search results for: modern information technologies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15395

7235 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle

Authors: Aloke Bapli, Debabrata Seth

Abstract:

Graphene oxide (GO) is a two-dimensional (2D) nanoscale allotrope of carbon whose physicochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, make it an important candidate for modern applications such as drug delivery, supercapacitors, and sensors. GO has also been used in the photothermal treatment of cancers and Alzheimer’s disease. The main reason for choosing GO in our work is that it is a surface-active material: it carries a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl, and epoxide, on its surface and basal plane. It can therefore easily interact with organic fluorophores through hydrogen bonding or other interactions and readily modulate the photophysics of probe molecules. We have used different spectroscopic techniques in this work. Ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model LifeSpec-II, U.K.). Herein, we describe the photophysics of the hydrophilic molecule 7-(N,N-diethylamino)coumarin-3-carboxylic acid (7-DCCA) in reverse micelles containing GO. We observed that the photophysics of the dye is modulated in the presence of GO compared to its photophysics in the absence of GO inside the reverse micelles. We report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare them with the normal reverse micelle system (i.e., reverse micelles in the absence of GO) using the 7-DCCA molecule.
The absorption maxima of 7-DCCA were blue-shifted and the emission maxima red-shifted in GO-containing reverse micelles compared to normal reverse micelles. The rotational relaxation time in GO-containing reverse micelles is always faster than in normal reverse micelles. The solvent relaxation time at lower w₀ values is always slower in GO-containing reverse micelles than in normal reverse micelles, while at higher w₀ it becomes almost equal in the two systems. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles compared to normal reverse micelles because the presence of GO increases the polarity of the system; as the polarity increases, the emission maximum is red-shifted. The average decay time in GO-containing reverse micelles is less than that in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time, and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, showing the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles.
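As a rough illustration of how a rotational relaxation time can be extracted from TCSPC anisotropy decays of the kind described above, the sketch below fits a single-exponential decay r(t) = r₀·exp(−t/θ) to a synthetic trace; all numerical values are illustrative placeholders, not the study’s data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-exponential anisotropy decay model: r(t) = r0 * exp(-t / theta),
# where theta is the rotational relaxation time (ns).
def anisotropy(t, r0, theta):
    return r0 * np.exp(-t / theta)

# Synthetic decay mimicking TCSPC anisotropy data (hypothetical parameters)
t = np.linspace(0, 10, 200)            # time axis, ns
true_r0, true_theta = 0.35, 1.8
rng = np.random.default_rng(0)
r_obs = anisotropy(t, true_r0, true_theta) + rng.normal(0, 0.005, t.size)

# Nonlinear least-squares fit recovers the rotational relaxation time
popt, _ = curve_fit(anisotropy, t, r_obs, p0=[0.3, 1.0])
r0_fit, theta_fit = popt
```

In practice the GO-containing and normal reverse micelle decays would each be fitted this way (possibly with multi-exponential models) and the recovered θ values compared.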

Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation

Procedia PDF Downloads 152
7234 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field producing many implementations that find use in different fields for both research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, a radical improvement in channel bandwidth and, thus, decoding accuracy is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, whose effective analysis requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. Multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was done both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input.
To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After hyperparameter optimization and training, the deep learning model allowed reasonably accurate causal decoding of finger movement, with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. A sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that the combination of a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows motion to be decoded with high accuracy. Such a setup provides the means to control devices with a large number of degrees of freedom, as well as to conduct exploratory studies of the complex neural processes underlying movement execution.
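The accuracy metric used above, the Pearson correlation r between decoder output and accelerometer readings, can be sketched as follows; the traces here are synthetic, not the study’s data:

```python
import numpy as np

def decoding_accuracy(predicted, actual):
    """Pearson correlation coefficient r between a decoder's output
    and the recorded accelerometer trace."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.corrcoef(predicted, actual)[0, 1])

# Toy example: a noisy copy of a finger-movement trace correlates highly
t = np.linspace(0, 2 * np.pi, 500)
actual = np.sin(3 * t)                                     # "true" kinematics
predicted = actual + np.random.default_rng(1).normal(0, 0.3, t.size)
r = decoding_accuracy(predicted, actual)
```

A perfect decoder would give r = 1, while uncorrelated output gives r near 0, which is why the causal deep-network score of 0.8 versus the Wiener-filter score of 0.56 represents a substantial gain.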

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 172
7233 To Compare the Visual Outcome, Safety and Efficacy of Phacoemulsification and Small-Incision Cataract Surgery (SICS) at CEITC, Bangladesh

Authors: Rajib Husain, Munirujzaman Osmani, Mohammad Shamsal Islam

Abstract:

Purpose: To compare the safety, efficacy and visual outcome of phacoemulsification vs. manual small-incision cataract surgery (SICS) for the treatment of cataract in Bangladesh. Objectives: 1. To assess the visual outcome after cataract surgery. 2. To understand the post-operative complications and early rehabilitation. 3. To identify which surgical procedure is more attractive to patients. 4. To identify which surgical procedure has fewer complications. 5. To find out the socio-economic and demographic characteristics of the study patients. Setting: Chittagong Eye Infirmary and Training Complex, Chittagong, Bangladesh. Design: Retrospective, randomised comparison of 300 patients with visually significant cataracts. Method: The present study was designed as retrospective hospital-based research. The sample size was 300, the study period ran from July 2012 to July 2013, and patients were assigned randomly to receive either phacoemulsification or manual small-incision cataract surgery (SICS). Preoperative and post-operative data were collected through a well-designed collection format. Three follow-ups were done: i) at discharge, ii) 1-3 weeks, and iii) 4-11 weeks post-operatively. All preoperative and surgical complications, uncorrected and best-corrected visual acuity (BCVA) and astigmatism were taken into consideration for comparison of outcomes. Result: Nearly 95% of patients were more than 40 years of age. About 52% of patients were female, and 48% were male. 52% (N=157) of patients came to have their first eye operated on, while 48% (N=143) returned to have their second eye operated on. Postoperatively, five eyes (3.33%) developed corneal oedema with >10 Descemet's folds, and six eyes (4%) had corneal oedema with <10 Descemet's folds after phacoemulsification. After SICS, seven eyes (4.66%) developed corneal oedema with >10 Descemet's folds and eight eyes (5.33%) had corneal oedema with <10 Descemet's folds.
However, both the uncorrected and corrected (4-11 weeks) visual acuities were better in the eyes that had phacoemulsification (p=0.02 and p=0.03), and there was less astigmatism (p=0.001) at 4-11 weeks in the eyes that had phacoemulsification. At the final follow-up, 95% (N=253) had a good best-corrected visual acuity (BCVA) outcome, 3.10% (N=40) were borderline, and 1.6% (N=7) had a poor outcome. The individual surgeon outcomes were similar: 95% (BCVA) for SICS and 96% (BCVA) for phacoemulsification at the 4-11 weeks follow-up. Conclusion: The outcomes of cataract surgery by both phacoemulsification and SICS at CEITC were satisfactory according to WHO norms. Both phacoemulsification and manual small-incision cataract surgery (SICS) show excellent visual outcomes with low complication rates and good rehabilitation. Phacoemulsification is a significantly faster, modern-technology-based surgical procedure for cataract treatment.

Keywords: phacoemulsification, SICS, cataract, Bangladesh, visual outcome of SICS

Procedia PDF Downloads 347
7232 A Development of Personalized Edutainment Contents through Storytelling

Authors: Min Kyeong Cha, Ju Yeon Mun, Seong Baeg Kim

Abstract:

Recently, ‘play of learning’ has become important and is emphasized as a useful learning tool; interest in edutainment contents is therefore growing. Storytelling is considered first as a method that improves the transmission of information and the learner's interest when planning edutainment contents. In this study, we designed edutainment contents in the form of an adventure game that applies the storytelling method. The contents provide dynamically constituted questions and items and reorganize the learning contents through analysis of test results, allowing learners to solve various questions through effective iterative learning. As a result, learners can reach mastery learning.

Keywords: storytelling, edutainment, mastery learning, computer operating principle

Procedia PDF Downloads 311
7231 Analysis of Complex Business Negotiations: Contributions from Agency-Theory

Authors: Jan Van Uden

Abstract:

The paper reviews classical agency-theory and its contributions to the analysis of complex business negotiations and proposes an approach for modifying the basic agency-model in order to examine the negotiation-specific dimensions of agency-problems. By illustrating fundamental potentials for the modification of agency-theory in the context of business negotiations, the paper highlights recent empirical research that investigates agent-based negotiations and inter-team constellations. A general theoretical analysis of complex negotiation would be based on a two-level approach: first, the modification of the basic agency-model in order to illustrate the organizational context of business negotiations (i.e., multi-agent issues, common agencies, multi-period models and the concept of bounded rationality); second, the application of the modified agency-model to complex business negotiations to identify agency-problems and related areas of risk in the negotiation process. The paper is placed on the first level of analysis, the modification. The method builds on the one hand on insights from behavioral decision research (BDR) and on the other hand on findings from agency-theory as normative directives for the modification of the basic model. Through neoclassical assumptions concerning the fundamental aspects of agency-relationships in business negotiations (i.e., asymmetric information, self-interest, risk preferences and conflicts of interest), agency-theory helps to derive solutions for stated worst-case scenarios taken from the daily negotiation routine. As agency-theory is the only universal approach able to identify trade-offs between certain aspects of economic cooperation, the insights obtained provide a deeper understanding of the forces that shape business negotiation complexity.
The need for a modification of the basic model is illustrated by highlighting selected issues of business negotiations from an agency-theory perspective. Negotiation teams require a multi-agent approach, given that decision-makers, as superior-agents, are often part of the team. The diversity of competences and decision-making authority is a phenomenon that overrides the assumptions of classical agency-theory and varies greatly across certain forms of business negotiations. Further, the basic model is bound to dyadic relationships preceded by the delegation of decision-making authority and builds on a contractually created (vertical) hierarchy. As a result, horizontal dynamics within the negotiation team that play an important role in negotiation success are not considered in the investigation of agency-problems. Also, the trade-off between short-term relationships within the negotiation sphere and the long-term relationships of the corporate sphere calls for a multi-period perspective that takes into account the sphere-specific governance mechanisms already established (i.e., reward and monitoring systems). Within the analysis, the implementation of bounded rationality is closely related to findings from BDR in order to assess the impact of negotiation behavior on the underlying principal-agent relationships. As empirical findings show, the disclosure and reservation of information to the agent affect his negotiation behavior as well as final negotiation outcomes. Last, in the context of business negotiations, asymmetric information is often intended by decision-makers acting as superior-agents or principals, which calls for a bilateral risk approach to agency-relations.

Keywords: business negotiations, agency-theory, negotiation analysis, inter-team negotiations

Procedia PDF Downloads 136
7230 Electrospray Plume Characterisation of a Single Source Cone-Jet for Micro-Electronic Cooling

Authors: M. J. Gibbons, A. J. Robinson

Abstract:

Increasing expectations that small-form-factor electronics be more compact while increasing performance have driven conventional cooling technologies to a thermal management threshold. An emerging solution to this problem is electrospray (ES) cooling. ES cooling enables two-phase cooling by utilising Coulomb forces for energy-efficient fluid atomization. The generated charged droplets are accelerated to the grounded target surface by the applied electric field and the gravitational force. While in transit, the like-charged droplets promote plume dispersion and inhibit droplet coalescence. If the electric field is increased in the cone-jet regime, a subsequent increase in the plume spray angle has been shown. Droplet segregation in the spray plume has been observed, with primary droplets in the plume core and satellite droplets positioned on the periphery of the plume. This segregation is facilitated by inertial and electrostatic effects, a result corroborated by numerous authors. These satellite droplets are usually more densely charged and move at a lower velocity relative to the spray core due to the radial decay of the electric field. Previous experimental research by Gomez and Tang has shown that the number of droplets deposited on the periphery can be up to twice that of the spray core. This result has been substantiated by numerical models derived by Wilhelm et al., Oh et al. and Yang et al. Yang et al. showed from their numerical model that varying the extractor potential varies the dispersion radius of the plume proportionally. This research aims to investigate this dispersion density and the role it plays in the local heat transfer coefficient profile (h) of ES cooling. This will be carried out for different extractor-target separation heights (H2), working fluid flow rates (Q), and extractor applied potentials (V2).
The plume dispersion will be recorded by spraying onto a 25 µm thick, Joule-heated steel foil and recording the thermal footprint of the ES plume with a FLIR A-40 thermal imaging camera. The recorded results will then be analysed by in-house developed MATLAB code.

Keywords: electronic cooling, electrospray, electrospray plume dispersion, spray cooling

Procedia PDF Downloads 392
7229 The Significance of Islamic Concept of Good Faith to Cure Flaws in Public International Law

Authors: M. A. H. Barry

Abstract:

The concepts of good faith (husn al-niyyah) and fair dealing (Nadl) are the fundamental guiding elements in all contracts and other agreements under Islamic law. The teachings of the Qur'an and of Prophet Muhammad (Peace Be upon Him) firmly command people to act in good faith in all dealings, and several Quranic verses and sayings of the Prophet stress the significance of dealing honestly and fairly in all transactions. Under English law, good faith is not considered a fundamental requirement for the formation of a legal contract. However, the concept of good faith in private contracts is recognized by the civil law system and in Article 7(1) of the Convention on Contracts for the International Sale of Goods (CISG, Vienna Convention, 1980). It took several centuries for the international trading community to recognize the significance of the concept of good faith for international sale of goods transactions. Nevertheless, the recognition of good faith in civil law is confined to commercial contracts. Subsequent to the CISG, the concept has made inroads into private international law. There are submissions in favour of applying the good faith concept to public international law, based on tacit recognition by international conventions and international tribunals. However, under public international law the concept of good faith is not recognized as a source of rights or obligations. This weakens the spirit of the good faith concept, particularly when determining international disputes. It also creates a fundamental flaw, because the absence of a good faith requirement means that breaches tainted by bad faith are tolerated. The objective of this research is to evaluate, examine and analyze the application of the concept of good faith in modern laws and identify its limitations, in comparison with the Islamic concept of good faith.
This paper also identifies the problems and issues connected with the non-application of this concept to public international law. The research consists of three key components: (1) the preliminary inquiry, (2) subject analysis and discovery of research results, and (3) examination of the challenging problems, concluding with proposals. The preliminary inquiry is based on both primary and secondary sources, as is the subject analysis. The research has both inductive and deductive features. The Islamic concept of good faith covers all situations and circumstances where bad faith causes unfairness to the affected parties, especially weak parties. Under Islamic law, the concept of good faith is a source of rights and obligations, as Islam prohibits any person from committing wrongful or delinquent acts in any dealing, whether in private or public life. This rule applies not only to individuals but also to institutions, states, and international organizations. This paper explains how unfairness is caused by the non-recognition of the good faith concept as a source of rights or obligations under public international law and provides legal and non-legal reasons to show why the Islamic formulation is important.

Keywords: good faith, the civil law system, the Islamic concept, public international law

Procedia PDF Downloads 142
7228 Characterization of Kevlar 29 for Multifunction Applications

Authors: Doaa H. Elgohary, Dina M. Hamoda, S. Yahia

Abstract:

Technical textiles refer to textile materials that are engineered and designed to have specific functionalities and performance characteristics beyond their traditional use as apparel or upholstery fabrics. These textiles are usually developed for their unique properties, such as strength, durability, flame retardancy, chemical resistance, waterproofing, and insulation. The development and use of technical textiles are constantly evolving, driven by advances in materials science, manufacturing technologies and the demand for innovative solutions in various industries. Kevlar 29 is a type of aramid fiber developed by DuPont. It is a high-performance material known for its exceptional strength and resistance to impact, abrasion, and heat. Kevlar 29 belongs to the Kevlar family, which includes different types of aramid fibers. It is primarily used in applications that require strength and durability, such as ballistic protection and body armor for military and law enforcement personnel. It is also used in the aerospace and automotive industries to reinforce composite materials, as well as in various industrial applications. Two different Kevlar samples coated with copper lithium silicate (CLS) were used; different mechanical and physical properties (weight, thickness, tensile strength, elongation, stiffness, air permeability, puncture resistance, thermal conductivity, and spray test) were measured to verify functional performance efficiency. The influence on the different mechanical properties was statistically analyzed using an independent t-test with a significance level of P = 0.05. A radar plot was calculated and evaluated to determine the best-performing sample. The independent t-test showed that all variables were significantly affected by yarn count except water permeability, which showed no significant effect.
All properties were evaluated for samples 1 and 2, and a radar chart was used to determine the samples' relative performance. The radar chart area was calculated, showing that sample 1 recorded the best performance, followed by sample 2. The surface morphology of the samples and the coating material was examined using a scanning electron microscope (SEM), and Fourier-transform infrared (FTIR) spectroscopy measurements were performed for the two samples.
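The two analyses described, an independent two-sample t-test at P = 0.05 and a radar-chart area comparison, can be sketched as below; the measurement values are hypothetical, not the paper's data:

```python
import numpy as np
from scipy import stats

# Independent two-sample t-test comparing a property across samples 1 and 2
# (illustrative measurements in arbitrary units, e.g. tensile strength)
sample1 = [412, 420, 415, 418, 422]
sample2 = [398, 402, 395, 400, 399]
t_stat, p_value = stats.ttest_ind(sample1, sample2)
significant = bool(p_value < 0.05)       # significance threshold P = 0.05

def radar_area(values):
    """Area of the polygon traced on a radar chart: with n properties placed
    at equal angles 2*pi/n apart, area = 0.5 * sin(2*pi/n) * sum(v_i * v_{i+1})."""
    v = np.asarray(values, dtype=float)
    n = len(v)
    return 0.5 * np.sin(2 * np.pi / n) * float(np.sum(v * np.roll(v, -1)))

# Larger enclosed area -> better overall performance across normalized scores
area1 = radar_area([0.9, 0.8, 0.85, 0.95, 0.7])
area2 = radar_area([0.7, 0.75, 0.6, 0.8, 0.65])
```

Comparing `area1` and `area2` mirrors how the paper ranks sample 1 above sample 2 from the radar chart area.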

Keywords: copper lithium silicate, independent t-test, kevlar, technical textiles

Procedia PDF Downloads 75
7227 Health Equity in Hard-to-Reach Rural Communities in Abia State, Nigeria: An Asset-Based Community Development Intervention to Influence Community Norms and Address the Social Determinants of Health in Hard-to-Reach Rural Communities

Authors: Chinasa U. Imo, Queen Chikwendu, Jonathan Ajuma, Mario Banuelos

Abstract:

Background: Sociocultural norms primarily influence the health-seeking behavior of populations in rural communities. In the Nkporo community, Abia State, Nigeria, the sociocultural perception of disease runs counter to biomedical definitions, and residents rely heavily on traditional medicine and practices. In a state where birth asphyxia and sepsis account for the major causes of neonatal death, malaria leads the causes of other mortality, followed by common preventable conditions such as diarrhea, pneumonia, acute respiratory tract infection, malnutrition, and HIV/AIDS. Most local mothers attribute their health conditions and those of their children to witchcraft attacks, the hand of God, or ancestral causes. This influences how they view antenatal and postnatal care, the choice of place for accessing care and delivery, responses to children's illnesses, immunization, and nutrition. Method: To implement a community health improvement program, we adopted an asset-based community development model to address the normative and social determinants of health. The first step was a qualitative community health needs baseline assessment, involving focus group discussions with twenty-five (25) youths aged 18-25 and semi-structured interviews with ten (10) officers-in-charge of primary health centers, eight (8) ward health committee members, and nine (9) community leaders. Secondly, we designed an intervention program; going forward, we will implement and evaluate it. Result: The priority needs identified by the communities were malaria, lack of clean drinking water, and the need for behavioral-change information. The study also highlighted the significant influence of youths on their peers, families, and community as caregivers and information interpreters.
Based on the findings, the NGO SieDi-Hub collaborated with the Abia State Ministry of Health, the State Primary Healthcare Agency, and Empower Next Generations to design a one-year "Community Health Youth Champions Pilot Program." Twenty (20) youths in the community were trained and equipped to champion a participatory approach to bridging the gap between access to and delivery of primary healthcare, and to adjust sociocultural norms to improve health equity for people in the Nkporo community, who have limited education and lack access to health information and quality healthcare facilities, using an innovative community-led improvement approach. Conclusion: Youths play a vital role in achieving health equity, being a vulnerable population with significant influence. To ensure effective primary healthcare, strategies must include cultural humility. The asset-based community development model offers valuable tools, and this article will share ongoing lessons from the intervention's behavioral-change strategies with young people.

Keywords: asset-based community development, community health, primary health systems strengthening, youth empowerment

Procedia PDF Downloads 83
7226 Literature Review on the Controversies and Changes in the Insanity Defense since the Wild Beast Standard in 1723 until the Federal Insanity Defense Reform Act of 1984

Authors: Jane E. Hill

Abstract:

Many variables led to changes in the insanity defense between the Wild Beast Standard of 1723 and the Federal Insanity Defense Reform Act of 1984. The insanity defense is used in criminal trials to argue that the defendant is ‘not guilty by reason of insanity’ because the individual was unable to distinguish right from wrong at the time they broke the law. Whether to use the insanity defense in criminal court depends on the mental state of the defendant at the time the criminal act was committed, which leads to the question: did the defendant know right from wrong when they broke the law? In 1723, the Wild Beast Test stated that, to be exempted from punishment, the individual must be totally deprived of understanding and memory and ‘doth not know what he is doing.’ The Wild Beast Test remained the standard in England for over seventy-five years. In 1800, James Hadfield attempted to assassinate King George III, acting on delusional beliefs. The jury and the judge returned a verdict of not guilty. However, to legally confine him, the Criminal Lunatics Act was enacted: individuals deemed ‘criminal lunatics’ who were given a verdict of not guilty would be taken into custody rather than freed into society. In 1843, the M'Naghten test required that the individual did not know the quality or the wrongfulness of the offense at the time they committed the criminal act(s). Daniel M'Naghten was acquitted on grounds of insanity. The M'Naghten test is still a modern formulation of the insanity defense used in many courts today. The Irresistible Impulse Test was adopted in the United States in 1887. It suggested that offenders who could not control their behavior while committing a criminal act were not deterrable by the criminal sanctions in place; therefore, no purpose would be served by convicting them.
Due to criticisms of the latter two tests, the federal District of Columbia Court of Appeals ruled in 1954 to adopt the ‘product test’ for insanity proposed by Isaac Ray. The Durham rule, also known as the ‘product test’, stated that an individual is not criminally responsible if the unlawful act was the product of mental disease or defect. Therefore, two questions need to be asked and answered: (1) did the individual have a mental disease or defect at the time they broke the law? and (2) was the criminal act the product of that disease or defect? The Durham courts failed to clearly define ‘mental disease’ or ‘product.’ Trial courts therefore had difficulty applying the terms, and the controversy continued until 1972, when the Durham rule was overturned in most jurisdictions. The American Law Institute then combined the M'Naghten test with the Irresistible Impulse Test, and the United States Congress adopted an insanity test for the federal courts in 1984.

Keywords: insanity defense, psychology law, The Federal Insanity Defense Reform Act of 1984, The Wild Beast Standard in 1723

Procedia PDF Downloads 138
7225 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, and weather, are integrated and processed in real-time to infer a suspect’s intended destination, chosen from a list of pre-determined high-value targets. Previously, we presented our work on the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directed graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach in which a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of causal inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect to make a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode in which, at each step, a set of recommendations is presented to the operator to aid decision-making.
In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to significantly scale up to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or to continue monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach, based on reinforcement learning in order to relax some of the current limiting assumptions.
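The Bayesian update at the heart of this approach can be sketched as follows; the target names, likelihood values, and function name below are illustrative assumptions for the sketch, not the authors' implementation.

```python
def update_posterior(prior, likelihood):
    """One Bayes step: posterior ∝ prior × likelihood, renormalized.

    `prior` and `likelihood` map each candidate target to, respectively, its
    current probability and the likelihood of the latest camera detection
    given that the suspect is heading to that target.
    """
    unnormalized = {t: prior[t] * likelihood[t] for t in prior}
    z = sum(unnormalized.values())
    return {t: p / z for t, p in unnormalized.items()}

# Three hypothetical high-value targets with a uniform prior; the latest
# detection lies on the shortest path to target "A", so its likelihood is higher.
prior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
likelihood = {"A": 0.8, "B": 0.3, "C": 0.1}
posterior = update_posterior(prior, likelihood)  # mass shifts toward "A"
```

Repeating this step for each new detection concentrates the posterior on the true target; a soft intervention is valuable precisely when it forces a course change whose resulting likelihoods separate the remaining candidates.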

Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights

Procedia PDF Downloads 111
7224 Aerothermal Analysis of the Brazilian 14-X Hypersonic Aerospace Vehicle at Mach Number 7

Authors: Felipe J. Costa, João F. A. Martos, Ronaldo L. Cardoso, Israel S. Rêgo, Marco A. S. Minucci, Antonio C. Oliveira, Paulo G. P. Toro

Abstract:

The Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics at the Institute for Advanced Studies designed the Brazilian 14-X Hypersonic Aerospace Vehicle, a technological demonstrator endowed with two innovative technologies: waverider technology, which derives lift from the conical shockwave during hypersonic flight, and a hypersonic airbreathing propulsion system, the scramjet, based on supersonic combustion, to perform flights in Earth's atmosphere at 30 km altitude at Mach numbers 7 and 10. The scramjet is an aeronautical engine without moving parts that promotes compression and deceleration of the freestream atmospheric air at the inlet through the conical/oblique shockwaves generated during hypersonic flight. During high-speed flight, the shock waves and viscous forces give rise to the phenomenon called aerodynamic heating: friction between the fluid filaments and the body, together with compression at the stagnation regions of the leading edge, converts kinetic energy into heat within a thin layer of air that blankets the body. The temperature of this layer increases with the square of the speed. This high temperature is concentrated in the boundary layer, from which heat flows readily into the structure of the hypersonic aerospace vehicle. The Fay-Riddell and Eckert methods are applied to the stagnation point and to the flat-plate segments, respectively, in order to calculate the aerodynamic heating. Building on the aerodynamic heating results, it is important to analyze the heat conduction into the internal structure of the 14-X waverider. ANSYS Workbench software provides the thermal numerical analysis, using the finite element method, of the 14-X waverider unpowered scramjet at 30 km altitude at Mach numbers 7 and 10 in terms of temperature and heat flux. Finally, it is possible to verify whether the internal temperature complies with the requirements for embedded systems and, if necessary, to modify the structure in terms of wall thickness and materials.
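The statement that the layer temperature grows with the square of the speed can be illustrated with the standard adiabatic-wall (recovery) temperature estimate; the recovery factor and the 30 km standard-atmosphere temperature below are textbook values used for illustration, not figures from the study.

```python
def adiabatic_wall_temperature(t_inf, mach, gamma=1.4, recovery_factor=0.9):
    """Recovery temperature T_aw = T_inf * (1 + r * (gamma - 1) / 2 * M^2).

    recovery_factor r ≈ 0.9 is the usual turbulent-boundary-layer value for
    air (r ≈ Pr^(1/3)); the M² term is what makes heating grow with speed².
    """
    return t_inf * (1.0 + recovery_factor * (gamma - 1.0) / 2.0 * mach**2)

# U.S. Standard Atmosphere at ~30 km altitude: T_inf ≈ 226.5 K
t_aw = adiabatic_wall_temperature(226.5, 7.0)  # ≈ 2.2e3 K at Mach 7
```

This simple estimate already shows why the boundary-layer heat flux into the structure, and hence the conduction analysis above, dominates the design at Mach 7.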

Keywords: aerodynamic heating, hypersonic, scramjet, thermal analysis

Procedia PDF Downloads 447
7223 Analysis of Environmental Sustainability in Post-Earthquake Reconstruction: A Case of Barpak, Nepal

Authors: Sudikshya Bhandari, Jonathan K. London

Abstract:

Barpak in northern Nepal represents a unique identity expressed through local rituals, values, lifeways, and styles of vernacular architecture. The traditional residential buildings and construction practices adopted by the dominant ethnic groups, the Ghales and Gurungs, reflect environmental, social, cultural, and economic concerns. However, most of these buildings did not survive the Gorkha earthquake of 2015, which made many residents skeptical of their strength to resist future disasters. This led Barpak residents to prefer modern housing designs, primarily for their strength but also for convenience and for access to earthquake relief funds. Post-earthquake reconstruction has transformed the cohesive community, developed over hundreds of years, into a haphazard settlement, with the imposition of externally driven building models. Housing guidelines provided for community reconstruction and earthquake resilience have been used as a singular template, as for other communities in different geographical locations. The design and construction of these buildings do not take into account the local historical, environmental, social, cultural, and economic context of Barpak. In addition to the physical transformation of houses and the settlement, the consequences continue to pose challenges to sustainability. This paper identifies the major challenges for environmental sustainability arising from the construction of new houses in post-earthquake Barpak. Mixed methods, such as interviews, focus groups, site observation and documentation, and analysis of housing and neighborhood design, were used for data collection. The discernible changes in this settlement due to the new housing include reduced climatic adaptation and thermal comfort, increased consumption of agricultural land and water, minimized use of local building materials, and an increase in energy demand. 
The research has identified that the reconstruction housing practices under way in Barpak, while responding to crucial needs for disaster recovery and resilience, are also leading this community towards an unsustainable future. This study also integrates environmental, social, cultural, and economic parameters into an assessment framework that could be used to develop place-based design guidelines in the context of other post-earthquake reconstruction efforts. This framework seeks to minimize the unintended repercussions of unsustainable reconstruction interventions, support the vitality of vernacular architecture and traditional lifeways, and respond to context-based needs in coordination with residents.

Keywords: earthquake, environment, reconstruction, sustainability

Procedia PDF Downloads 112
7222 Ingenious Eco-Technology for Transforming Food and Tanneries Waste into a Soil Bio-Conditioner and Fertilizer Product Used for Recovery and Enhancement of the Productive Capacity of the Soil

Authors: Petre Voicu, Mircea Oaida, Radu Vasiu, Catalin Gheorghiu, Aurel Dumitru

Abstract:

The present work deals with the way in which food and tannery waste can be used in agriculture. As a result of the lack of efficient technologies for their recycling, we are currently faced with appreciable quantities of residual organic waste that finds a use only rarely, and only after long storage in landfills. The main disadvantages of long storage of organic waste are the unpleasant smell, the high content of pathogenic agents, and the high water content. The release of these enormous amounts imperatively demands solutions that avoid environmental pollution. The measure practiced by us consists of processing this waste in special installations, testing it in pilot experimental perimeters, and later administering it on agricultural lands without harming the quality of the soil, agricultural crops, or the environment. The current crisis of raw materials and energy also raises special problems in the field of organic waste valorization, an activity that takes place with low energy consumption. At the same time, the composition of such waste recommends it as a useful secondary resource in agriculture. The transformation of food scraps and other concentrated organic residues thus acquires a new orientation, in which these materials are seen as important secondary resources. The utilization of food and tannery waste in agriculture is also stimulated by the increasing scarcity of chemical fertilizers and the continuous increase in their price, under conditions in which the soil requires increased amounts of fertilizers in order to obtain high, stable, and profitable production. 
The need to maintain and increase the humus content of the soil is also taken into account, as an essential factor of its fertility, as a source and reserve of nutrients and microelements, as an important factor in increasing the buffering capacity of the soil and allowing more sparing use of chemical fertilizers, and as a means of improving soil structure and water permeability, with positive effects on the quality of agricultural works and on preventing excess and/or deficit of moisture in the soil.

Keywords: ecology, soil, organic waste, fertility

Procedia PDF Downloads 76
7221 Factors Influencing the Usage of ERP in Enterprise Systems

Authors: Mohammad Reza Babaei, Sanaz Kamrani

Abstract:

The main problems that arise in adopting most enterprise resource planning (ERP) strategies are organizational: complex information systems like ERP integrate the data of all business areas within the organization. The implementation of ERP is a difficult process, as it involves different types of end users. Based on the literature, we propose a conceptual framework and examine it to find the effect of individual, organizational, and technological factors on the usage of ERP and its impact on the end user. The results of the analysis suggest that computer self-efficacy, organizational support, training, and compatibility have a positive influence on ERP usage, which in turn has a significant influence on panoptic empowerment and individual performance.

Keywords: factor, influencing, enterprise, system

Procedia PDF Downloads 362
7220 Access to Natural Resources in the Cameroonian Part of the Logone Basin: A Driver and Mitigation Tool to Ethnical Conflicts

Authors: Bonguen Onouck Rolande Carole, Ndongo Barthelemy

Abstract:

The effects of climate change on Lake Chad, coupled with population growth, have pushed large masses of people of various origins towards the lower part of the Logone watershed in search of the benefits of environmental services, putting pressure on the environment and its resources. Economic services are therefore threatened, and the decrease in resources contributes to the deterioration of social wellbeing, resulting in conflicts among and between local communities, immigrants, displaced people, and foreigners. This paper contributes information on the drivers of ethnic conflicts in the area and on the local management mechanisms that can help mitigate present or future conflicts in similar areas. It also points out the necessity of alleviating the water-access deficit and of encouraging good practices for the population's wellbeing. To meet this objective, in 2018, through the interface of the World Bank-Cameroon PULCI project, data were collected in the field directly, by discussing with the population and visiting infrastructure, and indirectly, by a questionnaire survey. Two administrative divisions were chosen (Logone-et-Chari and Mayo-Danay), in which the targeted localities were Zina, Mazera, Lahai, and Andirni near Waza Park, and Yagoua, Tekele, and Pouss, respectively. For sociocultural and religious reasons, some information was acquired through the traditional chiefs. A desk-study analysis based on resource access and availability, conflict history, and management mechanisms was carried out. The results show that the root drivers of ethnic conflicts are struggles over access to natural resources, and that the possibility of conflict increases as scarcity and vulnerability persist, creating more sociocultural gaps and tensions. The mitigation mechanisms, though fruitful, are limited. There is poor documentation on the topic, and the resource-management policies of this basin are, for some, unsuitable and ineffective. 
Therefore, the restoration of the environment and ecosystems, the mitigation of climate change effects, and the fight against food insecurity are the challenges that must be met to alleviate conflicts in these localities.

Keywords: ethnic, communities, conflicts, mitigation mechanisms, natural resources, logone basin

Procedia PDF Downloads 102
7219 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes

Authors: Stefan Papastefanou

Abstract:

Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was previously determined by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems have typically not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how the products of machine learning models can and should be protected by IP law, and for the purposes of this paper by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recurrent neural networks and deep learning, but this approach can be described more easily by reference to the evolution of natural organisms, and, with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, such as those of China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, of the inventive step as such, and of the state of the art and the associated obviousness of the solution arise in current patenting processes. 
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. The inventor of a patent application must be a natural person or a group of persons under the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
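As a concrete illustration of the genetic breeding approach discussed above, the following is a minimal genetic algorithm for a toy bit-string problem; the population size, mutation rate, and OneMax fitness function are arbitrary choices for the sketch, not details from the paper.

```python
import random

def genetic_breeding(fitness, length=20, pop_size=30, generations=60,
                     p_mut=0.02, seed=1):
    """Evolve bit-strings toward higher fitness via selection, crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def select():
        # binary tournament selection: the fitter of two random individuals
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            # bit-flip mutation with small per-gene probability
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# OneMax toy problem: fitness is simply the number of ones in the string.
best = genetic_breeding(sum)
```

The legal puzzle the paper raises is visible even here: the "solution" `best` emerges from random variation and selection, with no step at which a human conceives its particular content.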

Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability

Procedia PDF Downloads 106
7218 Assessment of Seeding and Weeding Field Robot Performance

Authors: Victor Bloch, Eerikki Kaila, Reetta Palva

Abstract:

Field robots are an important tool for enhancing efficiency and decreasing the climatic impact of food production. A number of commercial field robots exist; however, since this technology is still new, the robots' advantages and limitations, as well as methods for their optimal use, are still unclear. In this study, the performance of a commercial field robot for seeding and weeding was assessed. A 2-ha research sugar beet field with 0.5 m row width was used for testing, which included robotic sowing of sugar beet and weeding five times during the first two months of growth. About three and five percent of the field were used as untreated and chemically weeded control areas, respectively. Plant detection was based on the exact plant location, without image processing. The robot was equipped with six seeding and weeding tools, including passive between-row harrow hoes and active hoes cutting within rows between the plants, and it moved at a maximum speed of 0.9 km/h. The robot's performance was assessed by image processing. Field images were collected by an action camera with a resolution of 27 Mpixels installed on the robot at a height of 2 m, and by a drone with a 16-Mpixel camera flying at a height of 4 m. To detect plants and weeds, a YOLO model was trained by transfer learning from two available datasets. A preliminary analysis of the entire field showed that in the areas treated by the robot, the average weed density varied across the field from 6.8 to 9.1 weeds/m² (compared with 0.8 in the chemically treated area and 24.3 in the untreated area), the average weed density inside rows was 2.0-2.9 weeds/m (compared with 0 in the chemically treated area), and the emergence rate was 90-95%. Information about the robot's performance is highly important for the application of robotics to field tasks. 
With the help of the developed method, performance can be assessed several times during growth, according to the robotic weeding frequency. Farmers using the robot can thus know the field condition and the efficiency of the robotic treatment over the whole field. Farmers and researchers could develop optimal strategies for using the robot, such as seeding and weeding timing, robot settings, and plant and field parameters and geometry. Robot producers can obtain quantitative information from an actual working environment and improve the robots accordingly.
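The density figures above can be reproduced from per-image detection counts once the camera's ground sampling distance is known; the function below is a hedged sketch with hypothetical parameter values, not the authors' pipeline.

```python
def weed_density_per_m2(counts, width_px, height_px, gsd_m_per_px):
    """Average weeds per square metre over a set of field images.

    `counts` holds the number of weed detections (e.g., from a YOLO model)
    in each image; `gsd_m_per_px` is the ground sampling distance, i.e. the
    ground size of one pixel in metres.
    """
    image_area_m2 = (width_px * gsd_m_per_px) * (height_px * gsd_m_per_px)
    return sum(counts) / (image_area_m2 * len(counts))

# Two hypothetical 1000x1000 px images covering 1 m² each (GSD of 1 mm/px):
density = weed_density_per_m2([7, 9], 1000, 1000, 0.001)  # 8.0 weeds/m²
```

Applied per robot pass, this gives exactly the kind of repeated density map that lets the weeding frequency and robot settings be tuned through the season.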

Keywords: agricultural robot, field robot, plant detection, robot performance

Procedia PDF Downloads 73
7217 The Impact of Artificial Intelligence on Digital Factory

Authors: Mona Awad Wanis Gad

Abstract:

The way factories are planned has changed considerably, particularly when it comes to planning the factory building itself. Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Regulations in new areas, shorter life cycles of products and production technology, as well as a VUCA world (volatility, uncertainty, complexity, and ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning foundation for rebuilding measures and becomes a critical tool. Furthermore, digital building models are increasingly being used in factories to support facility management and production processes. First, different types of digital factory models are investigated, and their properties and usability for various use cases are analyzed. Within the scope of the research, point cloud models, building information models, photogrammetry models, and models enriched with sensor data are examined. It is investigated which digital models permit a simple integration of sensor data and where the differences lie. Then, possible application areas of digital factory models are determined by a survey, and the respective digital factory models are assigned to the application areas. Finally, an application case from maintenance is selected and implemented with the help of the most suitable digital factory model. It is shown how a fully digitalized maintenance process can be supported by a digital factory model through the provision of data. Among other functions, the digital factory model is used for indoor navigation, data provision, and the display of sensor data. 
In summary, the paper proposes a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities. A practical application case is presented and implemented. On that basis, the systematic selection of digital factory models with the corresponding application cases is evaluated.

Keywords: augmented reality, building information modeling, digital factory model, factory planning, maintenance, photogrammetry, restructuring

Procedia PDF Downloads 21
7216 Educating the Educators: Interdisciplinary Approaches to Enhance Science Teaching

Authors: Denise Levy, Anna Lucia C. H. Villavicencio

Abstract:

In a rapidly changing world, science teachers face considerable challenges. In addition to the basic curriculum, several transversal themes must be included, demanding creative and innovative strategies if they are to be arranged and integrated into traditional disciplines. In Brazil, nuclear science is still a controversial theme, and teachers themselves often seem unaware of the issue, frequently perpetuating prejudice, errors, and misconceptions. This article presents the authors' experience in the development of an interdisciplinary pedagogical proposal to include nuclear science in the basic curriculum in a transversal and integrating way. The methodology applied was based on the analysis of several normative documents that define the requirements of essential learning, competences, and skills of basic education for all schools in Brazil. The didactic materials and resources were developed according to best practices for improving learning processes, privileging constructivist educational techniques with emphasis on active learning, collaborative learning, and learning through research. The material consists of an illustrated book for students, a book for teachers, and a manual with activities that can articulate nuclear science with different disciplines: Portuguese, mathematics, science, art, English, history, and geography. The content maintains high scientific rigor and articulates nuclear technology with topics of interest to society in the most diverse spheres, such as food supply, public health, food safety, and foreign trade. Moreover, this pedagogical proposal takes advantage of the potential of digital technologies, implementing QR codes that excite and challenge students of all ages, improving interaction and engagement. The expected results include the education of the educators for nuclear science communication in a transversal and integrating way, demystifying nuclear technology in a contextualized and significant approach. 
It is expected that the interdisciplinary pedagogical proposal will contribute to improving attitudes towards knowledge construction, privileging reconstructive questioning, fostering a culture of systematic curiosity, and encouraging critical thinking skills.

Keywords: science education, interdisciplinary learning, nuclear science, scientific literacy

Procedia PDF Downloads 128
7215 Extraction of Text Subtitles in Multimedia Systems

Authors: Amarjit Singh

Abstract:

In this paper, a method for the extraction of text subtitles in large videos is proposed. Video data needs to be annotated for many multimedia applications. Text is incorporated in digital video in order to provide useful information about the video. The need therefore arises to detect text present in video, for video understanding and indexing. This is achieved in two steps: the first step is text localization, and the second step is text verification. The method of text detection can be extended to text recognition, which finds applications in automatic video indexing, video annotation, and content-based video retrieval. The method has been tested on various types of videos.
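A classic text-localization step of the kind described, finding horizontal bands whose density of bright pixels suggests subtitle text, can be sketched with a projection profile; the thresholds, array shapes, and function name here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def localize_text_rows(gray_frame, threshold=128, min_density=0.05):
    """Return (start, end) row bands likely to contain subtitle text.

    Binarizes the frame, computes the horizontal projection profile (mean
    fraction of bright pixels per row), and groups consecutive rows whose
    density exceeds `min_density` into candidate text bands.
    """
    binary = gray_frame > threshold
    profile = binary.mean(axis=1)          # horizontal projection profile
    rows = profile > min_density
    bands, start = [], None
    for i, flag in enumerate(rows):        # group consecutive candidate rows
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            bands.append((start, i - 1))
            start = None
    if start is not None:
        bands.append((start, len(rows) - 1))
    return bands

# Synthetic 100x100 frame with a bright "subtitle" band in rows 80-90:
frame = np.zeros((100, 100), dtype=np.uint8)
frame[80:91, ::2] = 255
bands = localize_text_rows(frame)  # → [(80, 90)]
```

A verification stage, as in the paper's second step, would then filter these candidate bands, e.g., by edge density or aspect ratio, before recognition.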

Keywords: video, subtitles, extraction, annotation, frames

Procedia PDF Downloads 596
7214 Use of Six-sigma Concept in Discrete Manufacturing Industry

Authors: Ignatio Madanhire, Charles Mbohwa

Abstract:

Efficiency in manufacturing is critical in raising the value of exports so as to trade gainfully on regional and international markets. Continuous improvement strategies available to manufacturing entities seem to be increasingly popular, but this research study established that a similar popularity has not been accorded to the Six Sigma methodology. This work was therefore conducted to investigate the applicability, effectiveness, usefulness, and suitability of the Six Sigma methodology as a competitiveness option for a discrete manufacturing entity. The development of a Six Sigma centre in the country, providing continuous improvement information, would go a long way towards benefiting the entire industry.
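The Six Sigma methodology centres on the defects-per-million-opportunities (DPMO) metric; as a brief illustration of the arithmetic behind it, the calculation and a common sigma-level approximation (Schmidt-Launsby) can be sketched as follows, with made-up inspection figures.

```python
import math

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Schmidt-Launsby approximation of the sigma level (incl. 1.5-sigma shift)."""
    return 0.8406 + math.sqrt(29.37 - 2.221 * math.log(dpmo_value))

# Hypothetical batch: 34 defects over 1000 units with 10 opportunities each.
d = dpmo(34, 1000, 10)   # 3400 DPMO
level = sigma_level(d)   # roughly 4.2 sigma
```

At 3.4 DPMO, the formula returns approximately 6.0, the "six sigma" level the methodology takes as its goal, which is why tracking DPMO is the natural first step for a discrete manufacturer adopting it.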

Keywords: discrete manufacturing, six-sigma, continuous improvement, efficiency, competitiveness

Procedia PDF Downloads 457
7213 The Condition Testing of Damaged Plates Using Acoustic Features and Machine Learning

Authors: Kyle Saltmarsh

Abstract:

Acoustic testing possesses many benefits due to its non-destructive nature and practicality. There hence exist many scenarios in which acoustic testing shows powerful feasibility for condition testing. A wealth of information is contained within the acoustic and vibration characteristics of structures, allowing the development of meaningful features for the classification of their condition. In this paper, methods, results, and discussions are presented on the use of non-destructive acoustic testing, coupled with acoustic feature extraction and machine learning techniques, for the condition testing of manufactured circular steel plates subjected to varied levels of damage.
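One simple acoustic feature of the kind the paper alludes to is the spectral centroid of a recorded response; the sketch below, with a synthetic signal in place of a real plate recording, shows how such a feature could be extracted before feeding a classifier (all signal parameters are illustrative).

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Amplitude-weighted mean frequency of a signal's magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# A synthetic 100 Hz "ring" sampled at 1 kHz; its centroid sits near 100 Hz.
fs = 1000
t = np.arange(fs) / fs
centroid = spectral_centroid(np.sin(2 * np.pi * 100 * t), fs)
```

A vector of such features (centroid, bandwidth, decay rate, etc.) computed per plate is the kind of input a machine learning classifier could map to damage level.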

Keywords: plates, deformation, acoustic features, machine learning

Procedia PDF Downloads 331
7212 Characterising Performative Technological Innovation: Developing a Strategic Framework That Incorporates the Social Mechanisms That Promote Change within a Technological Environment

Authors: Joan Edwards, J. Lawlor

Abstract:

Technological innovation is frequently defined in terms of bringing a new invention to market through a relatively straightforward process of diffusion. In reality, this process is complex and non-linear in nature and includes social and cognitive factors that influence the development of an emerging technology and its related market or environment. As recent studies contend that technological trajectories are part of technological paradigms, which arise from the expectations and desires of industry agents and result in co-evolution, it may be recognised that social factors play a major role in the development of a technology. It is conjectured that collective social behaviour is fuelled by individual motivations and expectations, which inform the possibilities and uses for a new technology. The individual outlook highlights the issues present at the micro level of developing a technology. Accordingly, this view may be zoomed out to realise how these embedded social structures influence activities and expectations at a macro level and can ultimately strategically shape the development and use of a technology. These social factors rely on communication to foster the innovation process. As innovation may be defined as the implementation of inventions, technological change results from the complex interactions and feedback occurring within an extended environment. The framework presented in this paper recognises that social mechanisms provide the basis for an iterative dialogue between an innovator, a new technology, and an environment, within which the social and cognitive 'identity-shaping' elements of the innovation process occur. Identity-shaping characteristics indicate that an emerging technology has a performative nature that transforms, alters, and ultimately configures the environment it joins. This identity-shaping quality is termed 'performative'. 
This paper examines how technologies evolve within a socio-technological sphere and how 'performativity' facilitates the process. A framework is proposed that incorporates the performative elements, which are identified as feedback, iteration, routine, expectations, and motivations. Additionally, the concept of affordances is employed to determine how the roles of the innovator and the technology change over time, constituting a more conducive environment for successful innovation.

Keywords: affordances, framework, performativity, strategic innovation

Procedia PDF Downloads 202
7211 Biodiesel Production from Edible Oil Wastewater Sludge with Bioethanol Using Nano-Magnetic Catalysis

Authors: Wighens Ngoie Ilunga, Pamela J. Welz, Olewaseun O. Oyekola, Daniel Ikhu-Omoregbe

Abstract:

Currently, most sludge from the wastewater treatment plants of edible oil factories is disposed of in landfills, but landfill sites are finite and are potential sources of environmental pollution. Production of biodiesel from wastewater sludge can contribute to energy production and waste minimization. However, conventional biodiesel production is energy- and waste-intensive. Generally, biodiesel is produced from the transesterification reaction of oils with an alcohol (i.e., methanol or ethanol) in the presence of a catalyst. Homogeneously catalysed transesterification is the conventional approach for large-scale production of biodiesel, as reaction times are relatively short. Nevertheless, homogeneous catalysis presents several challenges, such as a high probability of soap formation. The current study aimed to reuse wastewater sludge from the edible oil industry as a novel feedstock for both monounsaturated fats and bioethanol for the production of biodiesel. Preliminary results have shown that the fatty acid profile of the oilseed wastewater sludge is favourable for biodiesel production, with 48% (w/w) monounsaturated fats, and that the residue left after the extraction of fats from the sludge contains, after steam explosion followed by enzymatic hydrolysis, sufficient fermentable sugars for the successful production of bioethanol [29% (w/w)] using a commercial strain of Saccharomyces cerevisiae. A novel nano-magnetic catalyst was synthesised, using a modified sol-gel method, from alkaline mineral-processing tailings mainly containing dolomite and originating from cupriferous ores. The catalyst's elemental composition and structural properties were characterised by X-ray diffraction (XRD), scanning electron microscopy (SEM), and Fourier transform infrared spectroscopy (FTIR), while BET analysis gave a surface area of 14.3 m²/g and an average pore diameter of 34.1 nm. The mass magnetization of the nano-magnetic catalyst was 170 emu/g. Both the catalytic properties and the reusability of the catalyst were investigated. 
A maximum biodiesel yield of 78% was obtained, which dropped to 52% after the fourth transesterification reaction cycle. The proposed approach has the potential to reduce the material costs, energy consumption, and water usage associated with conventional biodiesel production technologies. It may also mitigate the impact of conventional biodiesel production on food and land security while simultaneously reducing waste.

Keywords: biodiesel, bioethanol, edible oil wastewater sludge, nano-magnetism

Procedia PDF Downloads 140
7210 Diagrid Structural System

Authors: K. Raghu, Sree Harsha

Abstract:

The interrelationship between the technology and architecture of tall buildings is investigated from the emergence of tall buildings in late 19th century to the present. In the late 19th century early designs of tall buildings recognized the effectiveness of diagonal bracing members in resisting lateral forces. Most of the structural systems deployed for early tall buildings were steel frames with diagonal bracings of various configurations such as X, K, and eccentric. Though the historical research a filtering concept is developed original and remedial technology- through which one can clearly understand inter-relationship between the technical evolution and architectural esthetic and further stylistic transition buildings. Diagonalized grid structures – “diagrids” - have emerged as one of the most innovative and adaptable approaches to structuring buildings in this millennium. Variations of the diagrid system have evolved to the point of making its use non-exclusive to the tall building. Diagrid construction is also to be found in a range of innovative mid-rise steel projects. Contemporary design practice of tall buildings is reviewed and design guidelines are provided for new design trends. Investigated in depths are the behavioral characteristics and design methodology for diagrids structures, which emerge as a new direction in the design of tall buildings with their powerful structural rationale and symbolic architectural expression. Moreover, new technologies for tall building structures and facades are developed for performance enhancement through design integration, and their architectural potentials are explored. By considering the above data the analysis and design of 40-100 storey diagrids steel buildings is carried out using E-TABS software with diagrids of various angle to be found for entire building which will be helpful to reduce the steel requirement for the structure. 
The project undertakes wind and seismic analysis for the lateral loads acting on the structure, together with gravity loads. All structural members are designed as per IS 800-2007, considering all load combinations. Results are compared in terms of time period, top-storey displacement, and inter-storey drift. Secondary effects such as temperature variations are not considered in the design, assuming small variation.
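The diagrid angle examined in the study follows directly from the module geometry. A minimal sketch of that relationship (storey height, bay width, and module sizes below are hypothetical, not values from the study; published diagrid studies often report near-optimal angles in the roughly 60-70 degree range for tall buildings):

```python
import math

def diagrid_angle(storey_height_m, module_width_m, storeys_per_module):
    """Angle of a diagrid diagonal, measured from the horizontal.

    One diagonal rises over `storeys_per_module` storeys across one
    module width, so the angle is atan(rise / width).
    """
    rise = storeys_per_module * storey_height_m
    return math.degrees(math.atan2(rise, module_width_m))

# Compare candidate module heights for a hypothetical 3.5 m storey, 6 m bay.
for n in (2, 4, 6, 8):
    print(f"{n}-storey module: {diagrid_angle(3.5, 6.0, n):.1f} deg")
```

Taller modules steepen the diagonals, which generally trades axial efficiency under gravity against shear stiffness under lateral load; sweeping the module size is one way to search for the angle that minimizes steel tonnage.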

Keywords: diagrid, bracings, structural, building

Procedia PDF Downloads 382
7209 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling

Authors: Amir Nazemi, Milad Ramezankhani, Marian Kӧrber, Abbas S. Milani

Abstract:

The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters driving their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to pose challenges for designers coping with manufacturing defects on site. Wrinkling, for example, is a common defect occurring during the forming process and handling of semi-finished textile composites. One of the main causes of this defect is the weak bending stiffness of fibers in the unconsolidated state, which allows excessive relative motion between them. Further challenges arise in the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composite forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles but also to determine the processing condition that yields optimized positioning of the fibers upon forming (or robot handling, in the case of automated processes). However, the need for small time steps in explicit FE codes, susceptibility to numerical instabilities, and large computational times are notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique based on the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence the ability to run significantly faster and more efficient simulations of fabric material handling and forming processes.
This method can therefore support the development of automated fiber handling and preform processes by computing the physical interactions between the MPM fiber models and rigid tool components. It enables designers to virtually develop, test, and optimize their processes using either algorithmic or machine learning applications. As a preliminary case study, the forming of a hemispherical plain weave is shown, and the results are compared to FE simulations as well as experiments.
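The core MPM cycle the abstract refers to can be sketched in one dimension: scatter particle mass and momentum to a background grid, update grid velocities, then gather velocities back to the particles. This is an illustrative toy (gravity only; a fabric forming solver would add stress, contact, and bending terms), not the authors' implementation:

```python
# Minimal 1-D material point method step: particle-to-grid (P2G) transfer
# with linear (tent) shape functions, a grid momentum update, and a
# grid-to-particle (G2P) velocity gather.

DX = 1.0          # grid spacing
N_NODES = 6       # grid nodes at x = 0, 1, ..., 5
GRAVITY = -9.81
DT = 1e-3

def weights(xp):
    """Linear shape-function weights for the two nodes bracketing xp."""
    i = int(xp // DX)
    frac = xp / DX - i
    return [(i, 1.0 - frac), (i + 1, frac)]

def mpm_step(particles):
    """particles: list of dicts with mass 'm', position 'x', velocity 'v'."""
    m_grid = [0.0] * N_NODES
    p_grid = [0.0] * N_NODES              # nodal momentum
    for p in particles:                   # P2G: scatter mass and momentum
        for i, w in weights(p["x"]):
            m_grid[i] += w * p["m"]
            p_grid[i] += w * p["m"] * p["v"]
    v_grid = [0.0] * N_NODES
    for i in range(N_NODES):              # grid update (gravity only here)
        if m_grid[i] > 0.0:
            v_grid[i] = p_grid[i] / m_grid[i] + GRAVITY * DT
    for p in particles:                   # G2P: gather velocity, advect
        p["v"] = sum(w * v_grid[i] for i, w in weights(p["x"]))
        p["x"] += p["v"] * DT
    return m_grid

parts = [{"m": 1.0, "x": 1.25, "v": 0.0}, {"m": 1.0, "x": 2.5, "v": 0.0}]
print(mpm_step(parts))
```

Because the grid is reset every step while state lives on the particles, MPM avoids the mesh distortion that forces small time steps in explicit FE codes, which is the advantage the paper exploits.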

Keywords: material point method, woven fabric composites, forming, material handling

Procedia PDF Downloads 179
7208 Support of Knowledge Sharing in Manufacturing Companies: A Case Study

Authors: Zuzana Crhová, Karel Kolman, Drahomíra Pavelková

Abstract:

Knowledge is considered an important asset that can help organizations create competitive advantage. Taking care of this asset is all the more important in times of turbulent change in the business environment, as knowledge can facilitate adaptation to constant change. The aim of this paper is to describe how knowledge sharing can be supported in manufacturing companies. The methods of case study and grounded theory were used to present information gained from semi-structured interviews. Results show that knowledge sharing is supported in very similar ways across the respondent companies.

Keywords: case study, human resource management, knowledge, knowledge sharing

Procedia PDF Downloads 438
7207 Factors Influencing Certification to ISO 9000:2008 among SMEs in Malaysia

Authors: Dolhadi Bin Zainudin

Abstract:

The study attempts to examine the relationship between influencing factors in the adoption of ISO 9000:2008 and to identify how these factors play the main role in achieving the ISO 9000 standard. A survey using a structured questionnaire was employed; a total of 255 respondents from 255 small and medium enterprises participated in the study. With regard to influencing factors, a discriminant analysis was conducted, and the results showed that three out of nine critical success factors differ statistically significantly between ISO 9000:2008 certified and non-certified companies: communication for quality, information and analysis, and organizational culture.
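The discriminating power of a single factor between certified and non-certified firms can be illustrated with a univariate Fisher-style ratio, the building block of the discriminant analysis the study reports. The scores below are hypothetical Likert data for illustration only, not the survey's data:

```python
from statistics import mean, variance

def fisher_ratio(group_a, group_b):
    """Squared between-group mean difference over pooled within-group
    variance. Larger values suggest the factor separates the two
    groups more strongly."""
    pooled = (variance(group_a) + variance(group_b)) / 2
    return (mean(group_a) - mean(group_b)) ** 2 / pooled

# Hypothetical 5-point Likert scores for one factor ("communication for
# quality") in certified vs. non-certified firms.
certified     = [4.2, 4.5, 4.1, 4.4, 4.3]
non_certified = [3.1, 3.4, 3.0, 3.3, 3.2]
print(round(fisher_ratio(certified, non_certified), 2))
```

A full discriminant analysis combines all nine factors into a weighted linear function, but the per-factor ratio conveys why only some factors end up statistically significant.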

Keywords: ISO 9000, quality management, factors, small and medium enterprise, Malaysia, influencing factors

Procedia PDF Downloads 331
7206 Dendrimer-Encapsulated N, Pt Co-Doped TiO₂ for the Photodegration of Contaminated Wastewater

Authors: S. K. M. Nzaba, H. H. Nyoni, B. Ntsendwana, B. B. Mamba, A. T. Kuvarega

Abstract:

Azo dye effluents released into water bodies are not only toxic to the ecosystem but also pose a serious threat to human health due to the carcinogenic and mutagenic effects of the compounds present in the dye discharge. Conventional water treatment methods such as adsorption, flocculation/coagulation, and biological processes are not effective in completely removing most of the dyes and their natural degradation by-products. Advanced oxidation processes (AOPs) have proven to be effective technologies for the complete mineralization of these recalcitrant pollutants. This study therefore examined the photocatalytic degradation of the azo dye brilliant black (BB) using non-metal/metal codoped TiO₂. N, Pt co-doped TiO₂ photocatalysts were prepared by a modified sol-gel method using amine-terminated polyamidoamine dendrimer generation 0 (PAMAM G0), amine-terminated polyamidoamine dendrimer generation 1 (PAMAM G1), and hyperbranched polyethyleneimine (HPEI) as templates and nitrogen sources. Structural, morphological, and textural properties were evaluated using scanning electron microscopy coupled to energy-dispersive X-ray spectroscopy (SEM/EDX), high-resolution transmission electron microscopy (HRTEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), thermogravimetric analysis (TGA), Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy (RS), photoluminescence (PL), and ultraviolet/visible spectroscopy (UV-Vis). The synthesized photocatalysts exhibited lower band gap energies than Degussa P-25, revealing a red shift of the band gap towards the visible light absorption region. The photocatalytic activity of N, Pt co-doped TiO₂ was assessed through the degradation of brilliant black (BB) dye. The N, metal codoped TiO₂ containing 0.5 wt.
% of the metal consisted mainly of the anatase phase, as confirmed by the XRD results of all three samples, with a particle size range of 13–30 nm. The particles were largely spherical and shifted the absorption edge well into the visible region. The band gap reduction was more pronounced for the N, Pt HPEI (Pt 0.5 wt. %) codoped TiO₂ than for the PAMAM G0 and PAMAM G1 analogues. Consequently, codoping led to an enhancement in the photocatalytic activity of the materials for the degradation of brilliant black (BB).
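The reported red shift maps directly onto band-gap energy through the Planck relation Eg (eV) ≈ 1239.84 / λ (nm). A minimal sketch of that conversion (the edge wavelengths below are illustrative values typical of anatase TiO₂ and a visible-shifted sample, not measurements from this study):

```python
# Band gap from the absorption edge via the Planck relation:
# Eg (eV) = h * c / lambda  ≈  1239.84 / lambda_nm.
def band_gap_ev(edge_nm):
    return 1239.84 / edge_nm

# Undoped anatase TiO2 absorbs near ~387 nm (Eg ≈ 3.2 eV); a red shift
# of the edge into the visible, e.g. to ~450 nm, means a lower gap.
print(round(band_gap_ev(387), 2), round(band_gap_ev(450), 2))
```

The longer the absorption edge wavelength, the smaller the gap, which is why a red-shifted edge lets the codoped catalysts harvest visible light that Degussa P-25 cannot.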

Keywords: codoped TiO₂, dendrimer, photodegradation, wastewater

Procedia PDF Downloads 164