Search results for: technology driven solutions
9419 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times
Authors: John Dimopoulos
Abstract:
This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented with the critical speculations of the post-war era, took shape. The new era in which we now live, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems that are progressively more difficult and complex to solve and impossible to ignore, climate change, data safety, cyber depression, and digital stress being some of the most prevalent. Humans often have no option other than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields, such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts.
These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as apparent in the cases of the Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the general public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.
Keywords: design, hypermodernity, object-oriented ontology, weapon-being
Procedia PDF Downloads 152
9418 Ethiopian Women in Science, Technology, Engineering, and Mathematics Higher Education: Insights Gained Through an Onsite Culturally Embedded Workshop
Authors: Araceli Martinez Ortiz, Gillian U Bayne, Solomon Abraham
Abstract:
This paper describes research led by faculty from three American universities and four Ethiopian universities on the delivery of professional leadership development for early-career female Ethiopian university instructors in the Science, Technology, Engineering, and Mathematics (STEM) fields. The objective was to carry out a case study focused on the impact of an innovative intervention program designed to assist in the empowerment and leadership development of female instructors, related to teaching effectiveness, scholarly activity participation, and professional service participation. This research was conducted utilizing a case study methodology for the weeklong intervention and a survey to capture the voices of the leadership program participants. The data regarding insights into the challenges and opportunities for women in these fields are presented. The project expands upon existing linkages between universities to support professional development and research efforts in this region of the world. Findings indicate the positive reception of this kind of professional development by the participating women. Survey data also reflect the educational technology and cultural challenges professional women in STEM education face in Ethiopia, as well as the global challenges of balancing family expectations with career development.
Keywords: women, STEM education, higher education, Ethiopia
Procedia PDF Downloads 67
9417 Nuclear Materials and Nuclear Security in India: A Brief Overview
Authors: Debalina Ghoshal
Abstract:
Nuclear security is the ‘prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.’ Ever since the end of the Cold War, nuclear materials security has remained a concern for global security, and with the increase in terrorist attacks, not just in India, the security of nuclear materials remains a priority. Therefore, India has made continued efforts to tighten security around its nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is different from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal. These include indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor, Light Water Reactors, and the indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts to take non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, the IAEA Additional Protocol has also been ratified by India in order to enhance the transparency of its nuclear material and strengthen nuclear security.
India is a party to the IAEA Conventions on Nuclear Safety and Security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the Code of Conduct on the Safety and Security of Radioactive Sources, 2006, which enable the country to provide for the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: Governance, Nuclear Security Practice and Culture, Institutions, Technology and International Cooperation. However, there is still scope for further improvement in nuclear materials and nuclear security. The NTI Report notes: ‘India’s improvement reflects its first contribution to the IAEA Nuclear Security Fund etc. In the future, India’s nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India’s nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.’ This paper briefly studies the progress made by India in nuclear and nuclear materials security and the steps ahead for India to strengthen it further.
Keywords: India, nuclear security, nuclear materials, non-proliferation
Procedia PDF Downloads 352
9416 Rheological Properties and Thermal Performance of Suspensions of Microcapsules Containing Phase Change Materials
Authors: Vinh Duy Cao, Carlos Salas-Bringas, Anna M. Szczotok, Marianne Hiorth, Anna-Lena Kjøniksen
Abstract:
The increasing cost of energy supply for the purposes of heating and cooling creates a demand for more energy-efficient buildings. Improved construction techniques and enhanced material technology can greatly reduce the energy consumption of buildings. Microencapsulated phase change material (MPCM) suspensions utilized as heat transfer fluids for energy storage and heat transfer applications provide promising potential solutions. A full understanding of the flow and thermal characteristics of microcapsule suspensions is needed to optimize the design of energy storage systems, in order to reduce the capital cost, system size, and energy consumption. The MPCM suspensions exhibited pseudoplastic and thixotropic behaviour, and significantly improved the thermal performance of the suspensions. Three different models were used to characterize the thixotropic behaviour of the MPCM suspensions: the second-order structural kinetic model was found to give a better fit to the experimental data than the Weltman and Figoni-Shoemaker models. For all samples, the initial shear stress increased, and the breakdown rate accelerated significantly with increasing concentration. The thermal performance and rheological properties, especially the selection of rheological models, will be useful for developing applications of microcapsules as heat transfer fluids in thermal energy storage systems, such as calculating an optimum MPCM concentration, pumping power requirement, and specific power consumption. The effect of temperature on the shear-thinning properties of the samples suggests that some of the phase change material is located outside the capsules and contributes to agglomeration of the samples.
Keywords: latent heat, microencapsulated phase change materials, pseudoplastic, suspension, thixotropic behaviour
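As a hedged illustration of how one of the simpler thixotropy models above is fitted, the Weltman relation (shear stress decaying logarithmically with shearing time at a constant shear rate) can be regressed onto stress-decay data with a standard nonlinear least-squares routine; all numerical values below are invented for the sketch, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Weltman model for thixotropic breakdown: shear stress decays
# logarithmically with shearing time at constant shear rate.
def weltman(t, A, B):
    return A - B * np.log(t)

# Synthetic stress-decay data (illustrative values only)
t = np.linspace(1.0, 300.0, 50)   # shearing time [s]
tau = weltman(t, 40.0, 3.5) + np.random.default_rng(0).normal(0.0, 0.2, t.size)

params, _ = curve_fit(weltman, t, tau, p0=(30.0, 1.0))
A_fit, B_fit = params
print(f"initial stress A ≈ {A_fit:.1f} Pa, breakdown coefficient B ≈ {B_fit:.2f}")
```

The same fitting call applies to the Figoni-Shoemaker and structural kinetic forms; model selection then reduces to comparing residuals across candidates.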
Procedia PDF Downloads 266
9415 Added Value of 3D Ultrasound Image Guided Hepatic Interventions by X Matrix Technology
Authors: Ahmed Abdel Sattar Khalil, Hazem Omar
Abstract:
Background: Image-guided hepatic interventions are integral to the management of infective and neoplastic liver lesions. Over the past decades, 2D ultrasound was used to guide hepatic interventions; with recent advances in ultrasound technology, 3D ultrasound is now also used for this purpose. The aim of this study was to illustrate the added value of 3D image-guided hepatic interventions using x matrix technology. Patients and Methods: This prospective study was performed on 100 patients who were divided into two groups; group A included 50 patients who were managed under 2D ultrasonography probe guidance, and group B included 50 patients who were managed under 3D x matrix ultrasonography probe guidance. Thermal ablation was done for 70 patients: 40 RFA (20 with the 2D probe and 20 with the 3D x matrix probe) and 30 MWA (15 with the 2D probe and 15 with the 3D x matrix probe). Chemical ablation (PEI) was done on 20 patients (10 with the 2D probe and 10 with the 3D x matrix probe). Drainage of hepatic collections and biopsy of undiagnosed hepatic focal lesions were done on 10 patients (5 with the 2D probe and 5 with the 3D x matrix probe). Results: The efficacy of ultrasonography-guided hepatic interventions with the 3D x matrix probe was higher than with the 2D probe, but not significantly so, with p-values of 0.705 and 0.5428 for RFA and MWA, respectively, 0.5312 for PEI, and 0.2918 for drainage of hepatic collections and biopsy. The complications related to the use of the 3D x matrix probe were significantly fewer than with the 2D probe, with a p-value of 0.003. Procedure times were shorter with the 3D x matrix probe than with the 2D probe (p-values of 0.08 and 0.34 for RFA and PEI, respectively), and significantly shorter for MWA and for drainage of hepatic collections and biopsy (p-values of 0.02 and 0.001, respectively).
Conclusions: 3D ultrasonography-guided hepatic interventions with the x matrix probe have better efficacy, fewer complications, and shorter procedure times than 2D ultrasonography-guided hepatic interventions.
Keywords: 3D, X matrix, 2D, ultrasonography, MWA, RFA, PEI, drainage of hepatic collections, biopsy
Procedia PDF Downloads 95
9414 One Pot Synthesis of Cu–Ni–S/Ni Foam for the Simultaneous Removal and Detection of Norfloxacin
Authors: Xincheng Jiang, Yanyan An, Yaoyao Huang, Wei Ding, Manli Sun, Hong Li, Huaili Zheng
Abstract:
The residual antibiotics in the environment pose a threat to the environment and human health. Thus, efficient removal and rapid detection of norfloxacin (NOR) in wastewater is very important. The main sources of NOR pollution are agricultural, pharmaceutical-industry, and hospital wastewater. The total consumption of NOR in China reaches 5440 tons per year. Neither animals nor humans can fully absorb and metabolize NOR, resulting in its excretion into the environment; consequently, residual NOR has been detected in water bodies. The hazards of NOR in wastewater lie in three aspects: (1) the capacity of wastewater treatment plants to remove NOR is limited (it is reported that the average removal efficiency of NOR in wastewater treatment plants is only 68%); (2) NOR entering the environment leads to the emergence of drug-resistant strains; (3) NOR is toxic to many aquatic species. At present, removal and detection technologies for NOR are applied separately, which makes the operation process cumbersome. Developing simultaneous adsorption-flocculation removal and FTIR detection of pollutants has three advantages: (1) adsorption-flocculation promotes detection (the enrichment effect on the material surface improves detection sensitivity); (2) integrating adsorption-flocculation and detection reduces material cost and simplifies operation; (3) FTIR detection endows the water treatment agent with molecular recognition and semi-quantitative detection abilities for pollutants. Thus, it is of great significance to develop a smart water treatment material with high removal capacity and detection ability for pollutants. This study explored the feasibility of combining a NOR removal method with a semi-quantitative detection method.
A magnetic Cu-Ni-S/Ni foam was synthesized by in-situ loading of Cu-Ni-S nanostructures on the surface of Ni foam. The novelty of this material is the combination of adsorption-flocculation technology and semi-quantitative detection technology. Batch experiments showed that Cu-Ni-S/Ni foam has a high removal rate for NOR (96.92%), wide pH adaptability (pH = 4.0-10.0), and strong resistance to ion interference (0.1-100 mmol/L). According to the Langmuir fitting model, the removal capacity can reach 417.4 mg/g at 25 °C, which is much higher than that of other water treatment agents reported in most studies. Characterization analysis indicated that the main removal mechanisms are surface complexation, cation bridging, electrostatic attraction, precipitation, and flocculation. Transmission FTIR detection experiments showed that NOR on Cu-Ni-S/Ni foam has easily recognizable FTIR fingerprints, and the intensity of the characteristic peaks roughly reflects the NOR concentration. This semi-quantitative detection method has a wide linear range (5-100 mg/L) and a low limit of detection (4.6 mg/L). These results show that Cu-Ni-S/Ni foam has excellent removal performance and semi-quantitative detection ability for NOR molecules. This paper provides a new idea for designing and preparing multi-functional water treatment materials to achieve simultaneous removal and semi-quantitative detection of organic pollutants in water.
Keywords: adsorption-flocculation, antibiotics detection, Cu-Ni-S/Ni foam, norfloxacin
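As a hedged sketch of the Langmuir fitting step behind a capacity figure like the one above, the isotherm q = q_max·K_L·C/(1 + K_L·C) can be regressed onto equilibrium data with nonlinear least squares; the data points below are invented for illustration, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm: adsorbed amount q as a function of equilibrium
# concentration C, with monolayer capacity q_max and affinity K_L.
def langmuir(C, q_max, K_L):
    return q_max * K_L * C / (1.0 + K_L * C)

# Illustrative equilibrium data (C in mg/L, q in mg/g)
C = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0, 400.0])
q = np.array([95.0, 160.0, 260.0, 330.0, 375.0, 400.0, 410.0])

(q_max, K_L), _ = curve_fit(langmuir, C, q, p0=(400.0, 0.01))
print(f"fitted capacity q_max ≈ {q_max:.0f} mg/g, K_L ≈ {K_L:.3f} L/mg")
```

The fitted q_max is the number usually quoted as the material's removal capacity; comparing it against a Freundlich fit on the same data is the standard model-selection check.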
Procedia PDF Downloads 76
9413 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method
Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez
Abstract:
Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. 
These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype across the different operating regimes.
Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics
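The structure of the inverse method can be sketched in miniature: integrate a lumped energy balance forward for a trial closure parameter, compare the predicted pressure history against measurements, and let a nonlinear least-squares step adjust the parameter. The single-volume model and every numerical value below are illustrative assumptions, far simpler than the paper's three-volume formulation:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Minimal single-volume sketch: ullage gas warmed by a wall heat leak,
# with the closure parameter h (wall heat-transfer coefficient, W/m^2/K)
# identified from a "measured" pressure history. Illustrative values only.
m, cv, R_gas = 0.5, 10200.0, 4124.0   # gas mass [kg], cv and R for H2 [J/kg/K]
V, A_w, T_env = 0.05, 0.8, 293.0      # ullage volume [m^3], wall area [m^2], ambient [K]

def pressure_history(h, t_eval):
    def rhs(t, y):
        return [h * A_w * (T_env - y[0]) / (m * cv)]  # lumped energy balance
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [25.0], t_eval=t_eval)
    return m * R_gas * sol.y[0] / V   # ideal-gas ullage pressure [Pa]

t = np.linspace(0.0, 600.0, 40)
p_meas = pressure_history(2.5, t)     # synthetic "experiment" (h_true = 2.5)

# Inverse step: find the h that best reproduces the measured pressures
res = least_squares(lambda h: pressure_history(h[0], t) - p_meas, x0=[1.0])
print(f"identified h ≈ {res.x[0]:.2f} W/m²/K")
```

In the full model the same optimization loop runs over several closure coefficients at once, with the residual assembled from all measured states across the different operating regimes.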
Procedia PDF Downloads 92
9412 Risk Assessment of Building Information Modelling Adoption in Construction Projects
Authors: Amirhossein Karamoozian, Desheng Wu, Behzad Abbasnejad
Abstract:
Building information modelling (BIM) is a new technology to enhance the efficiency of project management in the construction industry. In addition to the potential benefits of this useful technology, there are various risks and obstacles to applying it in construction projects. In this study, a decision-making approach is presented for risk assessment of BIM adoption in construction projects. Various risk factors of applying BIM during different phases of the project lifecycle are identified with the help of the Delphi method, experts’ opinions, and related literature. Afterward, Shannon’s entropy and Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) are applied to derive priorities of the identified risk factors. Results indicated that lack of knowledge among professional engineers about workflows in BIM and conflicts of opinion between different stakeholders are the risk factors with the highest priority.
Keywords: risk, BIM, fuzzy TOPSIS, construction projects
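As a rough sketch of the prioritization step, a crisp TOPSIS with Shannon-entropy weights can be written as follows; the fuzzy variant replaces the crisp scores with triangular fuzzy numbers, and the decision matrix here is invented for illustration (rows: candidate risk factors, columns: benefit-type criteria such as probability and impact):

```python
import numpy as np

# Crisp TOPSIS with Shannon-entropy criterion weights (simplified sketch;
# the paper's method uses fuzzy scores). Higher closeness = higher risk priority.
X = np.array([[7.0, 8.0, 6.0],
              [5.0, 6.0, 9.0],
              [8.0, 5.0, 7.0],
              [4.0, 7.0, 5.0]])

P = X / X.sum(axis=0)                                  # column-normalize scores
E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])  # entropy per criterion
w = (1 - E) / (1 - E).sum()                            # entropy weights

V = w * X / np.linalg.norm(X, axis=0)                  # weighted, vector-normalized
ideal, anti = V.max(axis=0), V.min(axis=0)             # positive/negative ideal
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
ranking = np.argsort(-closeness)                       # risk factors, riskiest first
print("priority order of risk factors:", ranking)
```

Entropy weighting rewards criteria on which the alternatives actually differ, which is why it pairs naturally with TOPSIS when no expert weights are available.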
Procedia PDF Downloads 229
9411 The Effect of Data Integration to the Smart City
Authors: Richard Byrne, Emma Mulliner
Abstract:
Smart cities are a vision for the future that is increasingly becoming a reality. While a key concept of the smart city is the ability to capture, communicate, and process data that has long been produced through the day-to-day activities of the city, many of the assessment models in place neglect this fact and focus on ‘smartness’ concepts. Although it is true that technology often provides the opportunity to capture and communicate data in more effective ways, there are also human processes involved that are just as important. The growing importance of the use and ownership of data in society can be seen by all, with companies such as Facebook and Google increasingly coming under the microscope; why, then, is the same scrutiny not applied to cities? The research area is therefore of great importance to the future of our cities here and now, while the findings will be of just as great importance to our children in the future. This research aims to understand the influence data is having on organisations operating throughout the smart cities sector and employs a mixed-method research approach in order to best answer the following question: Would a data-based evaluation model for smart cities be more appropriate than a smart-based model in assessing the development of the smart city? A comprehensive literature review concluded that there was a requirement for a data-driven assessment model for smart cities. This was followed by a documentary analysis to understand the root source of data integration in the smart city. A content analysis of city data platforms enquired into the alternative approaches employed by cities throughout the UK and draws on best practice from New York to compare and contrast.
Grounded in theory, the research findings to this point formulated a qualitative analysis framework comprising: the changing environment influenced by data, the value of data in the smart city, the data ecosystem of the smart city, and organisational responses to the data-orientated environment. The framework was applied to analyse primary data collected through interviews with both public and private organisations operating throughout the smart cities sector. The work to date represents the first stage of data collection and will be built upon by a quantitative research investigation into the feasibility of data network effects in the smart city. An analysis of the benefits of data interoperability supporting services to the smart city in the areas of health and transport will conclude the research, achieving the aim of inductively forming a framework that can be applied to future smart city policy. To conclude, the research recognises the influence of technological perspectives in the development of smart cities to date and highlights the challenge of introducing theory applied with a planning dimension. The primary researcher has utilised their experience working in the public sector throughout the investigation to reflect upon what is perceived as a gap in practice between where we are today and where we need to be tomorrow.
Keywords: data, planning, policy development, smart cities
Procedia PDF Downloads 311
9410 Understanding the Dynamics of Linker Histone Using Mathematical Modeling and FRAP Experiments
Authors: G. Carrero, C. Contreras, M. J. Hendzel
Abstract:
Linker histones, or histones H1, are highly mobile nuclear proteins that regulate the organization of chromatin and limit DNA accessibility by binding to the chromatin structure (DNA and associated proteins). It is known that this binding process is driven by both slow (strong binding) and rapid (weak binding) interactions. However, the exact binding mechanism has not been fully described. Moreover, the existing models only account for one type of bound population and do not distinguish explicitly between the weakly and strongly bound proteins. Thus, we propose different systems of reaction-diffusion equations to describe explicitly the rapid and slow interactions during a FRAP (Fluorescence Recovery After Photobleaching) experiment. We perform a model comparison analysis to characterize the binding mechanism of histone H1 and provide new, meaningful biophysical information on the kinetics of histone H1.
Keywords: FRAP (Fluorescence Recovery After Photobleaching), histone H1, histone H1 binding kinetics, linker histone, reaction-diffusion equation
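As a hedged sketch of what a two-bound-population model predicts, in the reaction-dominant limit of such reaction-diffusion systems the bleached-spot recovery reduces to a sum of two exponentials, one per bound population; the rate constants and fractions below are illustrative assumptions, not fitted values from the study:

```python
import numpy as np

# Reaction-dominant FRAP recovery with two bound populations — a common
# simplification when binding kinetics, not diffusion, limit recovery.
k_off_weak, k_off_strong = 0.5, 0.01   # off-rates [1/s]: rapid vs slow interactions
f_weak, f_strong = 0.4, 0.5            # bound fractions (free fraction = 0.1)

def frap_recovery(t):
    """Normalized fluorescence in the bleached spot over time."""
    return (1.0
            - f_weak * np.exp(-k_off_weak * t)
            - f_strong * np.exp(-k_off_strong * t))

t = np.array([0.0, 5.0, 60.0, 600.0])
print(frap_recovery(t))  # recovers toward 1 as both populations exchange
```

Fitting this curve to measured recoveries yields the off-rates and bound fractions; model comparison then asks whether two exponentials fit significantly better than one.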
Procedia PDF Downloads 441
9409 BlueVision: A Visual Tool for Exploring a Blockchain Network
Authors: Jett Black, Jordyn Godsey, Gaby G. Dagher, Steve Cutchin
Abstract:
Despite the growing interest in distributed ledger technology, many data visualizations of blockchain are limited to monotonous tabular displays or overly abstract graphical representations that fail to adequately educate individuals on blockchain components and their functionalities. To address these limitations, it is imperative to develop data visualizations that offer not only comprehensive insights into these domains but education as well. This research focuses on providing a conceptual understanding of the consensus process that underlies blockchain technology. This is accomplished through the implementation of a dynamic network visualization and an interactive educational tool called BlueVision. Further, a controlled user study is conducted to measure the effectiveness and usability of BlueVision. The findings demonstrate that the tool represents a significant advancement in the field of blockchain visualization, effectively catering to the educational needs of both novice and proficient users.
Keywords: blockchain, visualization, consensus, distributed network
Procedia PDF Downloads 62
9408 Smartphones as a Tool of Mobile Journalism in Saudi Arabia
Authors: Ahmed Deen
Abstract:
The introduction of mobile devices equipped with internet access, a camera, and messaging services has become a major driver of the growth of mobile news reporting. Mobile journalism (MOJO) is a creation of modern technology, especially the use of mobile technology for video journalism purposes. MOJO, thus, is the process by which information is collected and disseminated to society through the use of mobile technology, including tablets. This paper seeks to better understand the ethics of Saudi mobile journalists towards news coverage. This study also aims to explore the relationship between minimizing harm and truth-seeking efforts among Saudi mobile journalists. Three main ethics were targeted in this study: seek truth and report it, minimize harm, and be accountable. Diffusion of innovations theory was applied to reach this study’s goals. A non-probability sampling approach, ‘snowball sampling’, was used to recruit 124 participants for an online survey via SurveyMonkey, distributed through social media platforms as a web link. The code of ethics of the Society of Professional Journalists was applied as a scale in this study. This study found that the relationship between minimizing harm and truth-seeking efforts is significant and moderate among Saudi mobile journalists. It was also found that the level of journalistic experience and the use of smartphones to cover news are weakly and negatively related to perceptions of mobile journalism among Saudi journalists, while Saudi journalists who had used their smartphones to cover the news for between 1 and 3 years formed the majority of participants (55 participants, 51.4%).
Keywords: mobile journalism, Saudi journalism, smartphone, Saudi Arabia
Procedia PDF Downloads 176
9407 An Approach for Modeling CMOS Gates
Authors: Spyridon Nikolaidis
Abstract:
A modeling approach for CMOS gates is presented based on the use of the equivalent inverter. A new model for the inverter has been developed using a simplified transistor current model which incorporates the nanoscale effects of the planar technology. Parametric expressions for the output voltage are provided, as well as the values of the output and supply current, to be compatible with the CCS technology. The model is parametric with respect to the input signal slew, output load, transistor widths, supply voltage, temperature, and process. The transistor widths of the equivalent inverter are determined by HSPICE simulations, and parametric expressions are developed for them using a fitting procedure. Results for the NAND gate show that the proposed approach offers sufficient accuracy, with an average error in propagation delay of about 5%.
Keywords: CMOS gate modeling, inverter modeling, transistor current mode, timing model
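The fitting procedure mentioned above can be sketched as a simple least-squares regression of a parametric delay expression onto simulated samples; the linear delay form and the data points below are assumptions for illustration (a real CCS characterization uses full lookup tables from HSPICE sweeps over slew and load):

```python
import numpy as np

# Fit a parametric gate-delay expression t_pd ≈ a + b*C_load + c*t_slew
# to hypothetical HSPICE-style delay samples (synthetic, noise-free here).
C_load = np.array([1.0, 2.0, 4.0, 8.0, 1.0, 4.0, 8.0, 2.0])   # load capacitance [fF]
t_slew = np.array([10., 10., 10., 10., 40., 40., 40., 40.])   # input slew [ps]
t_pd   = 5.0 + 3.0 * C_load + 0.25 * t_slew                   # delay samples [ps]

# Linear least squares on the design matrix [1, C_load, t_slew]
A = np.column_stack([np.ones_like(C_load), C_load, t_slew])
coef, *_ = np.linalg.lstsq(A, t_pd, rcond=None)
a, b, c = coef
print(f"t_pd ≈ {a:.2f} + {b:.2f}*C_load + {c:.2f}*t_slew  (ps)")
```

With real simulation data the residual of such a fit directly reports the average delay error that the abstract quotes at about 5%.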
Procedia PDF Downloads 423
9406 Artificial Intelligence in Management Simulators
Authors: Nuno Biga
Abstract:
Artificial Intelligence (AI) has the potential to transform management in several impactful ways. It allows machines to interpret information, find patterns in big data, learn from context analysis, optimize operations, make predictions sensitive to each specific situation, and support data-driven decision making. The introduction of an 'artificial brain' in an organization also enables learning through the complex information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a context-sensitive "Virtual Assistant" (VA) that provides users with useful suggestions for subsequent operations, such as: a) relocating workstations in order to shorten travelled distances and minimize the stress of those involved; b) identifying in real time existing bottleneck(s) in the operations system so that it is possible to act upon them quickly; c) identifying resources that should be polyvalent so that the system can be more efficient; d) identifying the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assessing possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of different alternative strategic management actions.
The pilot project developed incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on the compilation of data that allows establishing causal relationships between decisions taken and results obtained. The systemic analysis and interpretation of data is powered in the Assisted-BIGAMES through a computer application called the "BIGAMES Virtual Assistant" (VA) that players can use during the game. Throughout the game, each participant continually considers which decisions to make in order to win the competition. To this end, the role of each team's VA consists in guiding the players to be more effective in their decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues over time, reflect on good practice, and rely on their own experience, capability, and knowledge to support their own decisions. Preliminary results show that the introduction of the VA provides faster learning of the decision-making process. The facilitator, designated the “Serious Game Controller” (SGC), is responsible for supporting the players with further analysis. The actions recommended by the SGC may differ from or be similar to the ones previously provided by the VA, ensuring a higher degree of robustness in decision-making. Additionally, all the information should be jointly analyzed and assessed by each player, who is expected to add “Emotional Intelligence”, an essential component absent from the machine learning process.
Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant
Procedia PDF Downloads 105
9405 The Effect of Precipitation on Weed Infestation of Spring Barley under Different Tillage Conditions
Authors: J. Winkler, S. Chovancová
Abstract:
The article deals with the relationship between rainfall in selected months and subsequent weed infestation of spring barley. The field experiment was performed at the Mendel University agricultural enterprise in Žabčice, Czech Republic. Weed infestation was measured in spring barley stands from 2004 to 2012. Barley was grown under three tillage variants: conventional tillage (CT), minimum tillage (MT), and no tillage (NT). Precipitation was recorded at one-day intervals, and monthly precipitation was calculated from the measured values for the months of October through April. Canonical correspondence analysis was applied for further statistical processing. 41 different weed species were found over the 9-year monitoring period. The results clearly show that precipitation affects the incidence of most weed species in the selected months, but acts differently under the monitored tillage variants. Keywords: weeds, precipitation, tillage, weed infestation forecast
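The data-preparation step described above (daily records aggregated into monthly totals, then related to weed abundance) can be sketched with synthetic data. The study itself used canonical correspondence analysis; this sketch shows only a univariate Pearson-correlation analogue on invented numbers.

```python
# Illustrative sketch on synthetic data: aggregate one-day precipitation
# records into a monthly total per year, then correlate the totals with
# the (synthetic) abundance of one weed species. The real study used CCA
# over 41 species; this is a simplified univariate analogue.
import numpy as np

rng = np.random.default_rng(0)
daily_mm = rng.gamma(shape=0.5, scale=4.0, size=(9, 31))  # 9 years x 31 daily records
monthly_total = daily_mm.sum(axis=1)                      # monthly precipitation per year
weed_density = 2.0 * monthly_total + rng.normal(0, 5, 9)  # synthetic weed counts

r = np.corrcoef(monthly_total, weed_density)[0, 1]
print(round(r, 2))  # strong positive correlation in this synthetic example
```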
Procedia PDF Downloads 499
9404 The Awareness of Computer Science Students Regarding the Security of Location Based Games
Authors: Jacques Barnard, Magda Huisman, Gunther R. Drevin
Abstract:
Rapid expansion and development in the mobile technology market has created an opportunity for users to participate in location based games. As a consequence of this fast-expanding market and new technology, it is important to be aware of the implications this has for security. This paper measures the security awareness of games' participants, as well as that of students at university level, with regard to their year of study and gamer classification. This provides insight into discernible differences in awareness of the security implications of these technologies. The data was accumulated via a web questionnaire completed yearly by students from the respective year groups. Results signify a meaningful disparity in security awareness among students across the various study years. This awareness, however, does not always carry over to gamers. Keywords: gamer classifications, location based games, location based data, security awareness
Procedia PDF Downloads 292
9403 A Generic Metamodel for Dependability Analysis
Authors: Moomen Chaari, Wolfgang Ecker, Thomas Kruse, Bogdan-Andrei Tabacaru
Abstract:
In our daily life, we frequently interact with complex systems which facilitate our mobility, enhance our access to information, and sometimes help us recover from illnesses or diseases. The reliance on these systems is motivated by the established evaluation and assessment procedures which are performed during the different phases of the design and manufacturing flow. Such procedures aim to qualify the system's delivered services with respect to their availability, reliability, safety, and other properties generally referred to as dependability attributes. In this paper, we propose a metamodel-based generic characterization of dependability concepts and describe an automation methodology to customize this characterization to different standards and contexts. When integrated into concrete design and verification environments, the proposed methodology promotes the reuse of already available dependability assessment tools and reduces the costs and efforts required to create consistent and efficient artefacts for fault injection or error simulation. Keywords: dependability analysis, model-driven development, metamodeling, code generation
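The metamodeling-plus-code-generation workflow named in the keywords can be sketched very compactly: a dependability concept is captured as structured data, and a template renders a customized artefact from it. The dictionary keys, template text, and generated pseudo-API are invented for illustration; the paper's actual metamodel and tooling are not shown here.

```python
# Minimal illustration of metamodel-driven artefact generation: a
# dependability "metamodel instance" held as a plain dictionary, and a
# template that renders a fault-injection stub from it. All names are
# illustrative assumptions, not the paper's schema.
from string import Template

metamodel = {
    "attribute": "reliability",     # dependability attribute under assessment
    "fault": "bit_flip",            # fault type to inject
    "target": "register_file",      # component targeted by the injection
}

template = Template(
    "// auto-generated fault-injection artefact\n"
    "inject(${fault}, target=${target});  // assesses ${attribute}"
)
print(template.substitute(metamodel))
```

Swapping in a different standard or context then amounts to changing the metamodel instance or the template, which is the reuse argument the abstract makes.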
Procedia PDF Downloads 486
9402 Viscoelastic Characterization of Gelatin/Cellulose Nanocrystals Aqueous Bionanocomposites
Authors: Liliane Samara Ferreira Leite, Francys Kley Vieira Moreira, Luiz Henrique Capparelli Mattoso
Abstract:
The increasing environmental concern regarding plastic pollution worldwide has stimulated the development of low-cost biodegradable materials. Proteins are renewable feedstocks that could be used to produce biodegradable plastics. Gelatin, for example, is a cheap film-forming protein extracted from animal skin and connective tissues of Brazilian livestock residues; thus, it has good potential for low-cost biodegradable plastic production. However, gelatin plastics are limited in terms of mechanical and barrier properties. Cellulose nanocrystals (CNC) are efficient nanofillers that have been used to extend the physical properties of polymers. This work was aimed at evaluating the reinforcing efficiency of CNC in gelatin films. Specifically, we employed continuous casting as the processing method for obtaining the gelatin/CNC bionanocomposites. This first required a rheological study assessing the effect of gelatin-CNC and CNC-CNC interactions on the colloidal state of the aqueous bionanocomposite formulations. CNC were isolated from eucalyptus pulp by sulfuric acid hydrolysis (65 wt%) at 55 °C for 30 min. Gelatin was solubilized in ultra-pure water at 85 °C for 20 min and then mixed with glycerol at 20 wt% and CNC at 0.5 wt%, 1.0 wt%, or 2.5 wt%. Rotational measurements were performed to determine the viscosity (η) of the bionanocomposite solutions, which increased with increasing CNC content. At 2.5 wt% CNC, η increased by 118% relative to the neat gelatin solution, which was ascribed to the formation of a percolated CNC network. The storage modulus (G′) and loss modulus (G″) further determined by oscillatory tests revealed that gel-like behavior was dominant in the bionanocomposite solutions (G′ > G″) over a broad range of temperatures (20-85 °C), particularly at 2.5 wt% CNC. These results confirm effective interactions in the aqueous gelatin-CNC bionanocomposites that could substantially improve the physical properties of gelatin plastics.
Tensile tests are underway to confirm this hypothesis. The authors would like to thank FAPESP (process no. 2016/03080-3) for support. Keywords: bionanocomposites, cellulose nanocrystals, gelatin, viscoelastic characterization
Procedia PDF Downloads 150
9401 Integration of Rapid Generation Technology in Pulse Crop Breeding
Authors: Saeid H. Mobini, Monika Lulsdorf, Thomas D. Warkentin
Abstract:
The length of the breeding cycle from seed to seed is a limiting factor in the development of improved homozygous lines for breeding or recombinant inbred lines (RILs) for genetic analysis. The objective of this research was to accelerate the production of field pea RILs through application of rapid generation technology (RGT). RGT is based on the principle of growing miniature plants in an artificial medium under controlled conditions and allowing them to produce a few flowers, which develop seeds that are harvested prior to normal seed maturity. We aimed to maintain population size and genetic diversity over regeneration cycles. The effects of flurprimidol (a gibberellin synthesis inhibitor), plant density, hydroponic system, scheduled fertilizer applications, artificial light spectrum, photoperiod, and light/dark temperature were evaluated in the development of RILs from a cross between cultivars CDC Dakota and CDC Amarillo. The main goal was to accelerate flowering while reducing maintenance and space costs. In addition, embryo rescue of immature seeds was tested for shortening the seed-fill period. Data collected over seven generations included plant height, the percentage of plant survival, flowering rate, seed setting rate, the number of seeds per plant, and time from seed to seed. Applying 0.6 µM flurprimidol reduced the internode length. Plant height was decreased to approximately 32 cm, allowing for higher plant density without a delay in flowering or seed setting. The three light systems evaluated (T5 fluorescent bulbs, LEDs, and high-pressure sodium + metal-halide lamps) did not differ significantly in terms of flowering time in field pea. Collectively, the combination of 0.6 µM flurprimidol, 217 plants·m⁻², a 20 h photoperiod, 21/16 °C light/dark temperature in a hydroponic system with vermiculite substrate, scheduled fertilizer application based on growth stage, and 500 µmol·m⁻²·s⁻¹ light intensity using T5 bulbs resulted in 100% of plants flowering within 34 ± 3 days and 96.5% of plants completing seed setting in 68.2 ± 3.6 days, i.e., 30-45 days/generation faster than conventional single seed descent (SSD) methods. These regeneration cycles were consistently reproducible. Hence, RGT could roughly double the number of generations per year (5.3), using only 3% of the space, compared to SSD (2-3 generations/year). Embryo rescue of immature seeds at the 7-8 mm stage, using a commercial fertilizer solution (Holland's Secret™), showed a seed setting rate of 95%, while younger embryos had a lower germination rate. Mature embryos had a seed setting rate of 96.5% without either hormones or sugar added. Considering the higher cost of embryo rescue, a procedure which requires skill, additional materials, and expenses, it could be removed from RGT for a further cost saving, and the process could be stopped between generations if required. Keywords: field pea, flowering, rapid regeneration, recombinant inbred lines, single seed descent
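The reported speed-up follows directly from the cycle time given above and can be checked with back-of-the-envelope arithmetic:

```python
# Sanity check of the reported generation rate: a 68.2-day seed-to-seed
# cycle gives about 5.35 generations per calendar year, consistent with
# the 5.3 cited above versus 2-3 generations/year for conventional SSD.
days_per_generation = 68.2
generations_per_year = 365 / days_per_generation
print(round(generations_per_year, 2))  # → 5.35
```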
Procedia PDF Downloads 362
9400 Adaptive Dehazing Using Fusion Strategy
Authors: M. Ramesh Kanthan, S. Naga Nandini Sujatha
Abstract:
The goal of haze removal algorithms is to enhance and recover scene details from a foggy image. For enhancement, the proposed method focuses on two main categories: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) image edge strengthening using a gradient model. In many circumstances, accurate haze removal algorithms are needed. The de-fog feature works through a complex algorithm which first determines the fog density of the scene, then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The output haze-free image is then reconstructed using the fusion methodology. In order to increase accuracy, an interpolation method is used in the output reconstruction. A promising retrieval performance is achieved, especially in particular examples. Keywords: single image, fusion, dehazing, multi-scale fusion, per-pixel, weight map
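The per-pixel weighted fusion at the core of the method can be sketched in a few lines. The two inputs stand in for the contrast-enhanced and edge-strengthened images, and the weight maps are simplified placeholders for the intrinsic-property-driven weights the abstract describes.

```python
# Minimal per-pixel fusion sketch: two enhanced versions of a hazy image
# are blended with weight maps normalized to sum to 1 at every pixel.
# The tiny 2x2 images and the weight definitions are illustrative only.
import numpy as np

def fuse(inputs, weights):
    """Blend input images with per-pixel weights normalized to sum to 1."""
    weights = np.stack(weights).astype(float)
    weights /= weights.sum(axis=0, keepdims=True)  # per-pixel normalization
    return (np.stack(inputs) * weights).sum(axis=0)

contrast_img = np.array([[0.2, 0.8], [0.4, 0.6]])  # e.g. AHE output
edge_img = np.array([[0.6, 0.4], [0.8, 0.2]])      # e.g. gradient-model output
w_contrast = np.array([[1.0, 3.0], [1.0, 1.0]])    # stand-in contrast weight
w_edge = np.array([[1.0, 1.0], [3.0, 1.0]])        # stand-in edge weight

fused = fuse([contrast_img, edge_img], [w_contrast, w_edge])
print(fused)  # each pixel is a convex combination of the two inputs
```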
Procedia PDF Downloads 465
9399 Profile of Cross-Reactivity Allergens Highlighted by Multiplex Technology “Alex Microchip Technique” in the Diagnosis of Type I Hypersensitivity
Authors: Gadiri Sabiha
Abstract:
Introduction: Current allergy diagnostic tools using multiplex technology have made it possible to increase the efficiency of the search for specific IgE. This opportunity is provided by the newly developed "Alex Biochip", consisting of a panel of 282 allergens in native and molecular form, a CCD inhibitor, and the potential for detecting cross-reactive allergens. We evaluated the performance of this technology in detecting cross-reactivity in previously explored patients. Material/Method: The sera of 39 patients presenting sensitization and polysensitization profiles were explored. The search for specific IgE was carried out with the Alex® IgE Biochip, and the results were analyzed by nature and by molecular family of allergens using dedicated software. Results/Discussion: The analysis gave a particular profile of cross-reactivity allergens: 33% for the Ole e1 family, 31% for NPC2, 26% for storage proteins, 20% for Tropomyosin, 10% for LTPs, 10% for Arginine Kinase, and 10% for Uteroglobin. CCDs were absent in all patients. The "Ole e1" allergen is responsible for a pollen-pollen cross allergy. The storage proteins found and the LTPs are not species-specific, causing cross pollen-food allergy. The nDer p2 of the NPC2 family is responsible for cross-reactivity between mite species. Conclusion: The cross-reactivities responsible for mixed syndromes at diagnosis in our patients were dominated by pollen-pollen and pollen-food syndromes. They allow the identification of severity factors linked to the prognosis and the best-adapted immunotherapy. Keywords: specific IgE, allergy, cross reactivity, molecular allergens
Procedia PDF Downloads 67
9398 The Effects of Mobile Communication on the Nigerian Populace
Authors: Chapman Eze Nnadozie
Abstract:
Communication, the activity of conveying information, remains a vital resource for the growth and development of any given society. Mobile communication, popularly known as the global system for mobile communication (GSM), is a globally accepted standard for digital cellular communication. GSM, a wireless technology, remains the fastest growing means of communication worldwide. Indeed, mobile phones have become a critical business tool and part of everyday life in both developed and developing countries. This study examines the effects of mobile communication on the Nigerian populace. The methodology used in this study is the survey research method, with questionnaires as the main data collection tool. The questionnaires were administered to a total of seventy respondents in five cities across the country, namely: Aba, Enugu, Bauchi, Makurdi, and Lagos. The result reveals that though there are some quality-of-service issues, mobile communication has very significant positive effects on the economic and social development of the Nigerian populace. Keywords: effect, mobile communication, populace, GSM, wireless technology, mobile phone
Procedia PDF Downloads 271
9397 Foundation of the Information Model for Connected-Cars
Authors: Hae-Won Seo, Yong-Gu Lee
Abstract:
Recent progress in the next generation of automobile technology is geared towards incorporating information technology into cars. Collectively called smart cars, these vehicles bring intelligence that provides comfort, convenience, and safety. One branch of smart cars is the connected-car system. The key concept in connected cars is the sharing of driving information among cars in a decentralized manner, enabling collective intelligence. This paper proposes a foundation for the information model necessary to define driving information for smart cars. Road conditions are modeled through a unique data structure that unambiguously represents the time-variant traffic in the streets. Additionally, the modeled data structure is exemplified in a navigational scenario, and its usage is illustrated using UML. Optimal driving route searching under dynamically changing road conditions is also discussed using the proposed data structure. Keywords: connected-car, data modeling, route planning, navigation system
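The kind of time-variant road model and route search suggested above can be sketched with a graph whose edge costs depend on departure time. The road names, travel times, and congestion window are invented for illustration and are not the paper's data structure.

```python
# Sketch of a time-variant road model: edge cost is a function of the
# minute at which the car enters the road, and a Dijkstra-style search
# evaluates each edge at the actual entry time. All values illustrative.
import heapq

graph = {
    "A": [("B", lambda t: 10 if 480 <= t < 540 else 4),  # congested 8-9 am
          ("C", lambda t: 6)],
    "B": [("D", lambda t: 3)],
    "C": [("D", lambda t: 5)],
    "D": [],
}

def quickest_route(start, goal, depart):
    """Return (arrival_time, path) under time-dependent edge costs."""
    heap = [(depart, start, [start])]
    best = {}
    while heap:
        t, node, path = heapq.heappop(heap)
        if node == goal:
            return t, path
        if best.get(node, float("inf")) <= t:
            continue  # already reached this node earlier
        best[node] = t
        for nxt, cost in graph[node]:
            heapq.heappush(heap, (t + cost(t), nxt, path + [nxt]))
    return None

print(quickest_route("A", "D", depart=500))  # rush hour: → (511, ['A', 'C', 'D'])
print(quickest_route("A", "D", depart=600))  # off-peak:  → (607, ['A', 'B', 'D'])
```

The same departure pair of queries shows why a static shortest-path model is insufficient for connected-car data: the optimal route flips once the congestion window passes.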
Procedia PDF Downloads 374
9396 Delivering User Context-Sensitive Service in M-Commerce: An Empirical Assessment of the Impact of Urgency on Mobile Service Design for Transactional Apps
Authors: Daniela Stephanie Kuenstle
Abstract:
Complex industries such as banking or insurance experience slow growth in mobile sales. While today's mobile applications are sophisticated and enable location-based and personalized services, consumers prefer online or even face-to-face services to complete complex transactions. A possible reason for this reluctance is that the service provided within transactional mobile applications (apps) does not adequately correspond to users' needs. Therefore, this paper examines the impact of the user context on mobile service (m-service) in m-commerce. Motivated by the potential which context-sensitive m-services hold for the future, the impact of temporal variations, as a dimension of user context, on m-service design is examined. In particular, the research question asks: Does consumer urgency function as a determinant of m-service composition in transactional apps by moderating the relation between m-service type and m-service success? Thus, the aim is to explore the moderating influence of urgency on m-service types, which include Technology Mediated Service and Technology Generated Service. While mobile applications generally comprise features of both service types, this thesis discusses whether unexpected urgency changes customer preferences for m-service types and how this consequently impacts overall m-service success, represented by purchase intention, loyalty intention, and service quality. An online experiment with a random sample of N=1311 participants was conducted. Participants were divided into four treatment groups varying in m-service type and urgency level. They were exposed to two different urgency scenarios (high/low) and two different app versions conveying either technology mediated or technology generated service. Subsequently, participants completed a questionnaire to measure the effectiveness of the manipulation as well as the dependent variables.
The research model was tested for direct and moderating effects of m-service type and urgency on m-service success. Three two-way analyses of variance confirmed the significance of the main effects but demonstrated no significant moderation of urgency on m-service types. The analysis of the gathered data did not confirm a moderating effect of urgency between m-service type and service success. Yet, the findings suggest an additive effects model, with the highest purchase and loyalty intention for Technology Generated Service under high urgency, while Technology Mediated Service under low urgency demonstrates the strongest effect on service quality. The results also indicate an antagonistic relation between service quality and purchase intention depending on the level of urgency. Although confirmation of the significance of this finding is required, it suggests that only service convenience, as one dimension of mobile service quality, delivers conditional value under high urgency. This suggests a curvilinear pattern of service quality in e-commerce. Overall, the paper illustrates the complex interplay of technology, user variables, and service design. With this, it contributes to a finer-grained understanding of the relation between m-service design and situation dependency. Moreover, the importance of delivering situational value with apps depending on user context is emphasized. Finally, the present study raises the demand to continue researching the impact of situational variables on m-service design in order to develop more sophisticated m-services. Keywords: mobile consumer behavior, mobile service design, mobile service success, self-service technology, situation dependency, user-context sensitivity
Procedia PDF Downloads 268
9395 Genetic Algorithm for In-Theatre Military Logistics Search-and-Delivery Path Planning
Authors: Jean Berger, Mohamed Barkaoui
Abstract:
Discrete search path planning in a time-constrained uncertain environment relying upon imperfect sensors is known to be hard, and current problem-solving techniques proposed so far to compute near real-time efficient path plans are mainly limited to providing few-move solutions. A new information-theoretic open-loop decision model, explicitly incorporating false-alarm sensor readings, to solve a single-agent military logistics search-and-delivery path planning problem with anticipated feedback is presented. The decision model consists in minimizing expected entropy considering anticipated possible observation outcomes over a given time horizon. The model captures uncertainty associated with observation events for all possible scenarios. Entropy represents a measure of uncertainty about the searched target location. Feedback information resulting from possible sensor observation outcomes along the projected path plan is exploited to update anticipated unit target occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to efficiently solve search path planning, providing near-optimal solutions for practical realistic problem instances. Given the run-time performance of the algorithm, a natural extension to a closed-loop environment that progressively integrates real visit outcomes on a rolling time horizon can easily be envisioned. Computational results show the value of the approach in comparison to alternate heuristics.Keywords: search path planning, false alarm, search-and-delivery, entropy, genetic algorithm
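The belief update with false positives and the entropy objective described above can be sketched for a single sensor reading. The detection probability `pd`, false-alarm probability `pf`, and cell names are illustrative assumptions, not the paper's parameters or formulation.

```python
# Hedged sketch of a Bayes belief update over candidate target cells
# after one sensor reading, with both detection probability (pd) and
# false-alarm probability (pf) in the observation model; residual
# uncertainty is measured by Shannon entropy. Numbers are illustrative.
import math

def update_belief(belief, observed_cell, detection, pd=0.8, pf=0.1):
    """Bayes update of the target-location belief after one reading."""
    posterior = {}
    for cell, p in belief.items():
        if detection:  # likelihood of a positive reading at observed_cell
            like = pd if cell == observed_cell else pf
        else:          # likelihood of a negative reading
            like = (1 - pd) if cell == observed_cell else (1 - pf)
        posterior[cell] = like * p
    z = sum(posterior.values())
    return {c: p / z for c, p in posterior.items()}

def entropy(belief):
    """Shannon entropy (bits) of the target-location belief."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

prior = {"c1": 0.25, "c2": 0.25, "c3": 0.25, "c4": 0.25}
post = update_belief(prior, "c2", detection=True)
print(round(entropy(prior), 3), round(entropy(post), 3))  # entropy drops after detection
```

A planner in this spirit would score candidate paths by the expected entropy over all anticipated observation outcomes; the paper searches that space with a genetic algorithm.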
Procedia PDF Downloads 360
9394 Technology Computer Aided Design Simulation of Space Charge Limited Conduction in Polycrystalline Thin Films
Authors: Kunj Parikh, S. Bhattacharya, V. Natarajan
Abstract:
TCAD numerical simulation is one of the most tried and tested powerful tools for designing devices in semiconductor foundries worldwide. It has also been used to explain conduction in organic thin films where the processing temperature is often enough to make homogeneous samples (often imperfect, but homogeneously imperfect). In this report, we have presented the results of TCAD simulation in multi-grain thin films. The work has addressed the inhomogeneity in one dimension, but can easily be extended to two and three dimensions. The effect of grain boundaries has mainly been approximated as barriers located at the junction between two adjacent grains. The effect of the value of the grain boundary barrier, the bulk traps, and the measurement temperature have been investigated. Keywords: polycrystalline thin films, space charge limited conduction, Technology Computer-Aided Design (TCAD) simulation, traps
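The qualitative effect of a grain-boundary barrier on space-charge-limited conduction can be illustrated with textbook formulas: the trap-free SCLC current follows the Mott-Gurney law J = (9/8)·ε·µ·V²/L³, and a barrier φ_b at the boundary can be approximated as a Boltzmann suppression factor exp(-φ_b/kT). The parameter values below are generic assumptions, not taken from the paper's simulations.

```python
# Illustrative Mott-Gurney SCLC estimate with a grain-boundary barrier
# modeled as a Boltzmann factor. Material parameters are generic.
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
K_B = 8.617e-5     # Boltzmann constant, eV/K

def sclc_current_density(V, L, eps_r, mu, phi_b=0.0, T=300.0):
    """Trap-free SCLC current density (A/m^2), suppressed by a barrier phi_b (eV)."""
    j_bulk = 9.0 / 8.0 * eps_r * EPS0 * mu * V**2 / L**3
    return j_bulk * math.exp(-phi_b / (K_B * T))

# homogeneous film vs. film with a 0.3 eV grain-boundary barrier
j_free = sclc_current_density(V=5.0, L=200e-9, eps_r=3.0, mu=1e-9)
j_gb = sclc_current_density(V=5.0, L=200e-9, eps_r=3.0, mu=1e-9, phi_b=0.3)
print(j_gb / j_free)  # barrier suppresses the current by ~5 orders of magnitude
```

This single-barrier picture is the zero-dimensional analogue of the one-dimensional barrier chain the TCAD study simulates.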
Procedia PDF Downloads 214
9393 Small Scale Stationary and Mobile Production of Biodiesel
Authors: Muhammad Yusuf Abduh, Robert Manurung, Hero Jan Heeres
Abstract:
Biodiesel can be produced in small-scale mobile units which are designed around local inputs and demand. Unlike typical biodiesel production plants, a mobile biodiesel unit consists of a biodiesel production facility placed inside a standard cargo container and mounted on a truck so that it can be transported to a region near the location of raw materials. In this paper, we review the existing concepts and units for the development of community-scale and mobile production of biodiesel. This includes the main reactor technology used to produce biodiesel as well as the pre-treatment prior to the reaction unit. The pre-treatment includes the oil-expeller unit to obtain oil from the oilseeds as well as the quality control of the oil before it enters the reaction unit. This paper also discusses the post-treatment after the production of biodiesel. It includes the refining and purification of biodiesel to meet the product specification set by the biodiesel industry. Keywords: biodiesel, community scale, mobile biodiesel unit, reactor technology
Procedia PDF Downloads 236
9392 An Investigation Enhancing E-Voting Application Performance
Authors: Aditya Verma
Abstract:
E-voting using blockchain provides a distributed system where data is replicated on every node in the network and is reliable and secure owing to its immutability. This work compares various blockchain consensus algorithms used for e-voting applications in the past, based on performance and node scalability, chooses the optimal one, and improves on a previous implementation by proposing solutions for the loopholes of the optimally working consensus algorithm in our chosen application, e-voting. Keywords: blockchain, parallel bft, consensus algorithms, performance
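The immutability property the abstract relies on can be sketched with a minimal hash chain: each block stores the hash of its predecessor, so altering any recorded vote is detectable from that point onward. This illustrates the data structure only; it implements no consensus algorithm, and the vote strings are invented.

```python
# Minimal hash-chain sketch of blockchain immutability for votes: each
# block commits to its predecessor's hash, so tampering with a past vote
# invalidates the chain. No consensus algorithm is modeled here.
import hashlib, json

def make_block(vote, prev_hash):
    block = {"vote": vote, "prev": prev_hash}
    payload = json.dumps({"vote": vote, "prev": prev_hash}, sort_keys=True)
    block["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash and check each prev-link."""
    for prev, block in zip(chain, chain[1:]):
        payload = json.dumps({"vote": block["vote"], "prev": block["prev"]},
                             sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if block["prev"] != prev["hash"] or block["hash"] != expected:
            return False
    return True

chain = [make_block("genesis", "0")]
for vote in ["alice:candidate_1", "bob:candidate_2"]:
    chain.append(make_block(vote, chain[-1]["hash"]))

print(chain_is_valid(chain))            # True
chain[1]["vote"] = "alice:candidate_2"  # tamper with a recorded vote
print(chain_is_valid(chain))            # False: tampering is detected
```

Consensus algorithms such as the parallel BFT variants compared in the paper then decide which of these chains the honest nodes agree on.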
Procedia PDF Downloads 167
9391 A Comparative Study of Secondary Education Curriculum of Iran with Some Developed Countries in the World
Authors: Seyyed Abdollah Hojjati
Abstract:
A review in the area of secondary education is a kind of comparative study that requires very careful scrutiny of the educational structures of different countries. This paper reviews the basic structure of the educational system of the Islamic Republic of Iran alongside those of some developed countries in the world, analyzing strengths and weaknesses in the main areas. A simple review using the above methods does not consider this particular community. The desired result can be expressed as a modified secondary school curriculum and academic guidance that orients students toward skill development and the growth of creativity. This not only improves the health and dynamism of this period and strengthens secondary teachers' authority, making the teacher-student relationship in this course meaningful and attractive, but also, by reducing the false prosperity of guaranteed-admission institutes and quizzes, provides students with a feeling of psychological comfort and the highest growth of creativity. Keywords: comparative, curriculum of secondary education, curriculum, Iran, developed countries
Procedia PDF Downloads 493
9390 Sludge Densification: Emerging and Efficient Way to Look at Biological Nutrient Removal Treatment
Authors: Raj Chavan
Abstract:
Currently, there are over 14,500 Water Resource Recovery Facilities (WRRFs) in the United States, with ~35% of them having some type of nutrient limits in place. These WRRFs account for about 1% of overall power demand and 2% of total greenhouse gas (GHG) emissions in the United States, and contribute 10 to 15% of the overall nutrient load to surface waters in the United States. The evolution of densification technologies toward more compact and energy-efficient nutrient removal processes has been shaped by a number of factors. Existing facilities that require capacity expansion or biomass densification for higher treatability within the same footprint are being subjected to more stringent requirements relating to nutrient removal prior to surface water discharge. Densification of activated sludge has received recent widespread interest as a means of achieving process intensification and nutrient removal at WRRFs. At the core of the technology are the aerobic sludge granules where the biological processes occur. There is considerable interest in the prospect of producing granular sludge in continuous (or traditional) activated sludge processes (CAS), or of densifying biomass by converting activated sludge flocs into denser aggregates of biomass, as a highly effective technique of intensification. This presentation will provide a fundamental understanding of densification by presenting insights and practical issues. The topics that will be discussed include: methods used to generate and retain densified granules; the mechanisms that allow biological flocs to densify; the role that physical selectors play in the densification of biological flocs; some viable ways of managing biological flocs that have become densified; the effects of physical selector design parameters on the retention of densified biological flocs; and, finally, some operational solutions for customizing the flocs and granules required to meet performance and capacity targets.
In addition, it will present some case studies where biological and physical parameters were used to generate aerobic granular sludge in continuous-flow systems. Keywords: densification, aerobic granular sludge, nutrient removal, intensification
Procedia PDF Downloads 186