Search results for: Kohat University of Science and Technology
8394 Information and Communication Technology Application in the Face of COVID-19 Pandemic in Effective Service Delivery in Schools
Authors: Odigie Veronica
Abstract:
The paper focuses on the application of Information and Communication Technology (ICT) in effective service delivery in view of the ongoing COVID-19 experience. It adopted the exploratory research method with three research objectives: to ascertain the meaning of online education, to understand the concept of COVID-19, and to determine the relevance of online education for effective service delivery in institutions of learning. The findings make it evident that, through ICT, an online mode of learning can be adopted in schools, which greatly helps to promote continuity of education. The online mode of education brings both the teacher and learners from different places together without any physical boundary/contact (at least 75%), and it has contributed greatly to human development in countries where it has been practiced. It is also a welcome development owing to its many benefits, such as exposure to digital learning; access to the works of great teachers and educationists such as Socrates, Plato, Dewey, R. S. Peters, J. J. Rousseau, Nnamdi Azikiwe, Carol Gilligan, J. I. Omoregbe, Jane Roland Martin, and Jean Piaget, among others; and the facilitation of uninterrupted learning for class promotion and graduation of students. Developing learners all round is part of the human development that helps to develop a nation. These and many more are some of the benefits online education offers, which make ICT very relevant in our contemporary society.
Keywords: online education, COVID-19 pandemic, effective service delivery, human development
Procedia PDF Downloads 100
8393 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting
Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade
Abstract:
The recent and fast development of the internet, wireless and telecommunication technologies, and low-power electronic devices has led to an expressive amount of electromagnetic energy being available in the environment and to the expansion of smart application technologies. These applications have been used in Internet of Things devices and in 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable power supply that avoids the traditional battery. Radio-frequency-based energy harvesting is especially suitable for powering wireless sensors by using a rectenna, since it can be completely integrated into the structure of the distributed host sensors, reducing their cost, maintenance, and environmental impact. The rectenna is a device composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into a direct current voltage. In this work, a set of rectennas, mounted on a paper substrate, which can be used for the inner coating of buildings and simultaneously harvest electromagnetic energy from the environment, is proposed. Each proposed individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that were designed and optimized using CST (Computer Simulation Technology) software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas, incorporating metamaterial cells, were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). In order to evaluate the EW performance, it was positioned at a variable distance from an internet router, and it fed a 27 kΩ resistive load. The results obtained showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very-low-consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive growth in the amount of electromagnetic energy harvested, which increased from 0.2 mW to 0.6 mW.
Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit
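As a hedged back-of-the-envelope illustration (not part of the authors' CST workflow), the short sketch below uses the free-space Friis equation to estimate the RF power a 2.45 GHz patch might see near a router, and the areal power density implied by the reported 0.6 mW harvested over 0.12 m²; the router EIRP, receive antenna gain, and distance are assumed values.

```python
import math

# Rough estimate of RF power available to a 2.45 GHz rectenna using the
# free-space Friis transmission equation. EIRP, gain and distance are assumed.
f = 2.45e9                      # operating frequency [Hz]
c = 3.0e8                       # speed of light [m/s]
wavelength = c / f              # ~0.122 m

eirp_w = 0.1                    # assumed router EIRP: 100 mW (20 dBm)
gain_rx = 10 ** (6.0 / 10)      # assumed patch antenna gain: 6 dBi
distance = 1.0                  # assumed distance to the router [m]

# Friis: Pr = EIRP * Gr * (lambda / (4*pi*d))^2
p_received = eirp_w * gain_rx * (wavelength / (4 * math.pi * distance)) ** 2
print(f"Received RF power at {distance} m: {p_received * 1e3:.3f} mW")

# Areal power density implied by the reported EW figures (0.6 mW over 0.12 m^2)
print(f"Reported EW harvest density: {0.6e-3 / 0.12 * 1e3:.1f} mW/m^2")
```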
Procedia PDF Downloads 167
8392 The Feminine Disruption of Speech and Refounding of Discourse: Kristeva’s Semiotic Chora and Psychoanalysis
Authors: Kevin Klein-Cardeña
Abstract:
For Julia Kristeva, contra Lacan, the instinctive body refuses to go away within discourse. Neither is the pre-Oedipal stage of maternal fusion vanquished by the emergence of language and, with it, the law of the father. On the contrary, Kristeva argues, the pre-symbolic ambivalently haunts the society of speech, simultaneously animating and threatening the very foundations of signification. Kristeva invents the term "the semiotic" to refer to this continual breaking-through of the material unconscious onto the scene of meaning. This presentation examines Kristeva's semiotic as a theoretical gesture that is itself a disruption of discourse, re-presenting the 'return of the repressed' body in theory: the breaking-through of the unconscious onto the science of meaning. Faced with linguistic theories concerned with abstract sign-systems as well as Lacanian doctrine privileging the linguistic sign unequivocally over the bodily drive, Kristeva's theoretical corpus issues the message of a psychic remainder that disrupts with a view toward replenishing theoretical accounts of language and sense. Reviewing the semiotic challenge across these two levels (the sense and the science of language), the presentation suggests that Kristeva's offerings constitute a coherent gestalt, providing an account of the feminist nature of her dual intervention. In contrast to other feminist critiques, Kristeva's gesture hinges on its restoration of the maternal contribution to subjectivity. Against the backdrop of 'phallogocentric' and 'necrophilic' theories that strip language of a subject and strip the subject of a body, Kristeva recasts linguistic study through a metaphor of life and birthing. Yet the semiotic fragments the subject it produces, dialoguing with an unconscious curtailed by but also exceeding the symbolic order of signification. Linguistics, too, becomes fragmented in the same measure as it is more meaningfully renewed by its confrontation with the semiotic body. It is Kristeva's own body that issues this challenge, on both sides of the boundary between the theory and the theorized. The semiotic becomes comprehensible as a project unified by its concern to disrupt and rehabilitate language, the subject, and the scholarly discourses that treat them.
Keywords: Julia Kristeva, the semiotic, French feminism, psychoanalytic theory, linguistics
Procedia PDF Downloads 75
8391 The Legal and Regulatory Gaps of Blockchain-Enabled Energy Prosumerism
Authors: Karisma Karisma, Pardis Moslemzadeh Tehrani
Abstract:
This study aims to conduct a high-level strategic dialogue on the lack of consensus, consistency, and legal certainty regarding blockchain-based energy prosumerism so that appropriate institutional and governance structures can be put in place to address the inadequacies and gaps in the legal and regulatory framework. The drive to achieve national and global decarbonization targets underpins the climate goals and policies under the Paris Agreement. In recent years, efforts to 'demonopolize' and 'decentralize' energy generation and distribution have driven the energy transition toward decentralized systems, invoking concepts such as ownership, sovereignty, and autonomy of RE sources. The emergence of individual and collective forms of prosumerism and the rapid diffusion of blockchain is expected to play a critical role in the decarbonization and democratization of energy systems. However, there is a 'regulatory void' relating to individual and collective forms of prosumerism that could prevent the rapid deployment of blockchain systems and potentially stagnate the operationalization of blockchain-enabled energy sharing and trading activities. The application of broad and facile regulatory fixes may be insufficient to address the major regulatory gaps. First, to the authors' best knowledge, the concepts and elements circumjacent to individual and collective forms of prosumerism have not been adequately described in the legal frameworks of many countries. Second, there is a lack of legal certainty regarding the creation and adaptation of business models in a highly regulated and centralized energy system, which inhibits the emergence of prosumer-driven niche markets. There are also current and prospective challenges relating to the legal status of blockchain-based platforms for facilitating energy transactions, anticipated with the diffusion of blockchain technology. With the rise of prosumerism in the energy sector, the areas of (a) network charges, (b) energy market access, (c) incentive schemes, (d) taxes and levies, and (e) licensing requirements are still uncharted territories in many countries. The uncertainties emanating from these areas pose a significant hurdle to the widespread adoption of blockchain technology, a complementary technology that offers added value and competitive advantages for energy systems. The authors undertake a conceptual and theoretical investigation to elucidate the lack of consensus, consistency, and legal certainty in the study of blockchain-based prosumerism. In addition, the authors set an exploratory tone to the discussion by taking an analytically eclectic approach that builds on multiple sources and theories to delve deeper into this topic. As an interdisciplinary study, this research accounts for the convergence of regulation, technology, and the energy sector. The study primarily adopts desk research, which examines regulatory frameworks and conceptual models for crucial policies at the international level to foster an all-inclusive discussion. With their reflections and insights into the interaction of blockchain and prosumerism in the energy sector, the authors do not aim to develop definitive regulatory models or instrument designs, but to contribute to the theoretical dialogue to navigate seminal issues and explore different nuances and pathways. Given the emergence of blockchain-based energy prosumerism, identifying the challenges, gaps, and fragmentation of governance regimes is key to facilitating global regulatory transitions.
Keywords: blockchain technology, energy sector, prosumer, legal and regulatory
Procedia PDF Downloads 181
8390 Hand Movements and the Effect of Using Smart Teaching Aids: Quality of Writing Styles Outcomes of Pupils with Dysgraphia
Authors: Sadeq Al Yaari, Muhammad Alkhunayn, Sajedah Al Yaari, Adham Al Yaari, Ayman Al Yaari, Montaha Al Yaari, Ayah Al Yaari, Fatehi Eissa
Abstract:
Dysgraphia is a neurological disorder of written expression that impairs writing ability and fine motor skills, resulting primarily in problems relating not only to handwriting but also to writing coherence and cohesion. We investigate the properties of smart writing technology to highlight some unique features of its effects on the academic performance of pupils with dysgraphia. In Amis, pupils with dysgraphia experience writing problems in expressing their ideas when ordinary writing aids are used as the default strategy. The Amis data suggest a possible connection between the available writing aids and pupils' writing improvement, and therefore the expression and comprehension of their texts. A group of thirteen pupils with dysgraphia was placed in a regular primary school classroom, with twenty-one pupils recruited into the study as a control group. To ensure the validity, reliability, and accountability of the research, both groups studied writing courses for two semesters, of which the first was equipped with smart writing aids while the second took place in an ordinary classroom. Two pre-tests were undertaken at the beginning of the first two semesters, and two post-tests were administered at the end of both semesters. The tests examined pupils' ability to write coherent, cohesive, and expressive texts. The dysgraphic group, which received the treatment of a writing course in classes with smart technology in the first semester, produced significantly greater increases in writing expression than in an ordinary classroom, and their performance was better than that of the control group in the second semester. The current study concludes that using smart teaching aids is a 'MUST', both for teaching and for learning in dysgraphia. Furthermore, it is demonstrated that for young pupils with dysgraphia, expressive tasks are more challenging than coherence and cohesion tasks. The study, therefore, supports the literature suggesting a role for smart educational aids in writing and suggests that smart writing techniques may be an efficient addition to regular educational practices, notably in special educational institutions and speech-language therapeutic facilities. However, further research is needed that prompts adults with dysgraphia more often than is done for older adults without dysgraphia, in order to get them to complete the other productive and/or written skills tasks.
Keywords: smart technology, writing aids, pupils with dysgraphia, hand movements
Procedia PDF Downloads 38
8389 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor
Authors: Feng Tao, Han Ye, Shaoyi Liao
Abstract:
City constructions collapse after an earthquake, and people are buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, considering the complicated environment, irregular aftershocks, and the fact that rescue allows of no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. RSSI-based target localization, with its features of low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the impact of the environment on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the original RSSI values in order to establish, for each scene, the signal propagation model with the minimum test error. The target location can then be calculated, through an improved centroid algorithm, from the distances estimated with the signal propagation model. Results show that RSSI-based localization is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (the average of mean, median, and Gaussian filtering) performs better than any single filtering algorithm, and by using the signal propagation model, the minimum distance error between the known nodes and the target node across the five scenes is about 3.06 m.
Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI
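As an illustrative sketch (not the authors' code), the snippet below combines the three elements the abstract names: smoothing raw RSSI with a mix of mean, median, and Gaussian filtering; converting RSSI to distance with a log-distance path-loss model; and estimating the target position with a weighted centroid over the anchor nodes. The path-loss exponent, reference RSSI, and sample data are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mixed_filter(rssi_samples, sigma=1.0):
    """Average of mean, median, and Gaussian-filtered estimates of RSSI."""
    samples = np.asarray(rssi_samples, float)
    mean_est = np.mean(samples)
    median_est = np.median(samples)
    gauss_est = np.mean(gaussian_filter1d(samples, sigma))
    return (mean_est + median_est + gauss_est) / 3.0

def rssi_to_distance(rssi, rssi_at_1m=-45.0, path_loss_exponent=2.5):
    """Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10*n*log10(d)."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exponent))

def weighted_centroid(anchors, distances):
    """Improved (weighted) centroid: closer anchors get larger weights."""
    weights = 1.0 / np.asarray(distances)
    weights /= weights.sum()
    return weights @ np.asarray(anchors)

# Assumed anchor positions [m] and raw RSSI readings [dBm] toward a buried target
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
raw_rssi = [
    [-62, -64, -61, -63, -65],
    [-70, -72, -69, -71, -70],
    [-68, -67, -69, -70, -68],
    [-75, -74, -76, -75, -77],
]

filtered = [mixed_filter(r) for r in raw_rssi]
dists = [rssi_to_distance(r) for r in filtered]
print("Estimated target position:", weighted_centroid(anchors, dists))
```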
Procedia PDF Downloads 300
8388 A Unified Approach for Digital Forensics Analysis
Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles
Abstract:
Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the subsequent digital footprints that exist, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT) devices, databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and a good deal of data sources have little to no specialist forensic tools. Increasingly, it also becomes essential to compare and correlate evidence across data sources, and to do so in an efficient and effective manner that enables an investigator to answer high-level questions of the data in a timely manner without having to trawl through the data and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies what forensic analyses could be used. For example, in a child abduction, an investigation team might have evidence from a range of sources including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.
Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool
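As a hedged illustration of the kind of cross-source correlation U-FAT is intended to automate (not the tool itself, whose design is not published here), the sketch below normalizes records from two hypothetical evidence sources into a common schema and joins events that fall within a time window; the field names and sample data are invented for illustration only.

```python
from datetime import datetime, timedelta

# Hypothetical, already-parsed evidence records from two different sources.
phone_records = [
    {"time": datetime(2024, 5, 1, 14, 2), "source": "mobile", "detail": "outgoing call"},
    {"time": datetime(2024, 5, 1, 14, 40), "source": "mobile", "detail": "GPS fix near park"},
]
cctv_records = [
    {"time": datetime(2024, 5, 1, 14, 5), "source": "cctv", "detail": "vehicle X at junction 3"},
    {"time": datetime(2024, 5, 1, 16, 0), "source": "cctv", "detail": "vehicle X leaves car park"},
]

def correlate(events_a, events_b, window=timedelta(minutes=10)):
    """Pair events from two sources whose timestamps fall within `window`."""
    matches = []
    for a in events_a:
        for b in events_b:
            if abs(a["time"] - b["time"]) <= window:
                matches.append((a, b))
    return matches

for a, b in correlate(phone_records, cctv_records):
    print(f"{a['time']}  {a['source']}: {a['detail']}  <->  {b['source']}: {b['detail']}")
```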
Procedia PDF Downloads 196
8387 IOT Based Process Model for Heart Monitoring Process
Authors: Dalyah Y. Al-Jamal, Maryam H. Eshtaiwi, Liyakathunisa Syed
Abstract:
Connecting health services with technology has a huge demand as people's health situations are becoming worse day by day. In fact, engaging new technologies such as the Internet of Things (IoT) in medical services can enhance patient care services. Specifically, patients suffering from chronic diseases, such as cardiac patients, need special care and monitoring. In reality, some efforts were previously made to automate and improve patient monitoring systems. However, the previous efforts have some limitations and lack the real-time feature needed for chronic diseases. In this paper, an improved process model for a patient monitoring system specialized for cardiac patients is presented. A survey was distributed and interviews were conducted to gather the requirements needed to improve the cardiac patient monitoring system. The Business Process Model and Notation (BPMN) language was used to model the proposed process. The proposed system uses IoT technology to assist doctors in remotely monitoring and following up with their heart patients in real time. In order to validate the effectiveness of the proposed solution, a simulation analysis was performed using the Bizagi Modeler tool. The analysis results show performance improvements in the heart monitoring process. For the future, the authors suggest enhancing the proposed system to cover all chronic diseases.
Keywords: IoT, process model, remote patient monitoring system, smart watch
Procedia PDF Downloads 332
8386 Shared Versus Pooled Automated Vehicles: Exploring Behavioral Intentions Towards On-Demand Automated Vehicles
Authors: Samira Hamiditehrani
Abstract:
Automated vehicles (AVs) are emerging technologies that could potentially offer a wide range of opportunities and challenges for the transportation sector. The advent of AV technology has also resulted in new business models in shared mobility services, where many ride-hailing and car-sharing companies are developing on-demand AVs, including shared automated vehicles (SAVs) and pooled automated vehicles (Pooled AVs). SAVs and Pooled AVs could provide alternative shared mobility services which encourage sustainable transport systems, mitigate traffic congestion, and reduce automobile dependency. However, the success of on-demand AVs in addressing major transportation policy issues depends on whether and how the public adopts them as regular travel modes. To identify the conditions under which individuals may adopt on-demand AVs, previous studies have applied human behavior and technology acceptance theories, among which the Theory of Planned Behavior (TPB) has been validated and is among the most tested in on-demand AV research. In this respect, this study has three objectives: (a) to propose and validate a theoretical model for the behavioral intention to use SAVs and Pooled AVs by extending the original TPB model; (b) to identify the characteristics of early adopters of SAVs, who prefer a shorter and private ride, versus prospective users of Pooled AVs, who choose more affordable but longer and shared trips; and (c) to investigate Canadians' intentions to adopt on-demand AVs for regular trips. Toward this end, this study uses data from an online survey (n = 3,622) of workers or adult students (18 to 75 years old) conducted in October and November 2021 for six major Canadian metropolitan areas: Toronto, Vancouver, Ottawa, Montreal, Calgary, and Hamilton. To accomplish the goals of this study, a base bivariate ordered probit model, in which both SAV and Pooled AV adoption are estimated as ordered dependent variables, and a full structural equation modeling (SEM) system are estimated. The findings of this study indicate that affective motivations, such as attitude towards AV technology, perceived privacy, and subjective norms, matter more than sociodemographic and travel behavior characteristics in adopting on-demand AVs. Also, the results for the second objective provide evidence that, although a few affective motivations, such as subjective norms and having ample knowledge, are common to early adopters of SAVs and Pooled AVs, many of the examined motivations differ between SAV and Pooled AV adoption. In other words, the motivations influencing the intention to use on-demand AVs differ among the service types. Likewise, depending on the type of on-demand AV, the sociodemographic characteristics of early adopters differ significantly. In general, the findings paint a complex picture with respect to the application of constructs from common technology adoption models to the study of on-demand AVs. The findings for the final objective suggest that policymakers, planners, the vehicle and technology industries, and the public at large should moderate their expectations that on-demand AVs may suddenly transform the entire transportation sector. Instead, this study suggests that SAVs and Pooled AVs (when they enter the Canadian market) are likely to be adopted as supplementary mobility tools rather than substitutes for current travel modes.
Keywords: automated vehicles, Canadian perception, theory of planned behavior, on-demand AVs
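As an illustrative sketch only (the paper estimates a bivariate ordered probit together with a full SEM, neither of which is reproduced here), the snippet below fits a single-equation ordered probit for a stated adoption-intention scale using statsmodels' OrderedModel; the predictor names, effect sizes, and simulated responses are assumptions made for the example.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(42)
n = 500

# Simulated respondents: attitude toward AV technology, subjective norms,
# and perceived privacy (standardized scores) -- assumed predictors.
X = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "subjective_norms": rng.normal(size=n),
    "perceived_privacy": rng.normal(size=n),
})

# Latent propensity and a 5-point ordered intention-to-use-SAV response.
latent = (0.8 * X["attitude"] + 0.5 * X["subjective_norms"]
          + 0.3 * X["perceived_privacy"] + rng.normal(size=n))
y = np.digitize(latent, bins=[-1.5, -0.5, 0.5, 1.5])  # ordered categories 0..4

# Ordered probit: thresholds are estimated, so no constant is added to X.
model = OrderedModel(y, X, distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```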
Procedia PDF Downloads 73
8385 War and Peace in the Hands of the Media: Review of Global Media Reports and Their Influencing Factors on the Foreign and Security Policy Opinions of the Population
Authors: Ismahane Emma Karima Bessi
Abstract:
Military sociology is largely avoided. Discussing the military as a societal phenomenon and the social dimensions of war and peace is now treated as a disregarded and neglected province of social science, even though it has a major impact on global populations. The first official press war began with William Howard Russell in the mid-19th century. The media are crucial to war and peace. Even Gaius Julius Caesar used his 'Commentarii de Bello Gallico' as a media tool to influence perceptions of his warfare. Napoleon Bonaparte also knew how important the press was for his actions. This shows how important history is for crisis and war journalism. The one-sided media coverage that every country is confronted with ultimately keeps people from taking a genuine interest in the truth and leaves gross knowledge gaps that prevent an accurate picture of reality. There is a need to examine the relationship between the military, war, and the media and to look at the modality in which the media are involved in military conflicts, in this case as an adjunct, i.e., war because of the media. These conflicts are promoted or initiated by the following factors: photos intended for the visual manipulation of the population; the pressure from politicians and parties who urge and exert their influence on the global media to share the same pattern of opinion; and, most importantly, the media profiting from the war by listening to popular reactions and passing them on, promoted with new visuals. These influence political elections. The media occupy a huge and ubiquitous part of people's lives. They have the ability to make a country that is in constant crisis and war mode appear in a brilliant light of peace. An article or photograph taken by one journalist has a tremendous impact, as it can shape the minds of millions of people. Most wars currently have state-political causes. The parties, therefore, want to have their (potential) voters on their side, and these voters are stirred up by the media. The military is loathed or loved. A way of thinking must be created in which a well-trained military, grounded in the natural sciences, history, and sociology, can save or protect the lives of many people. Theoretical methods for this are defined and evaluated in more detail in this paper.
Keywords: war, history, military, science, journalism, crisis
Procedia PDF Downloads 83
8384 The Concentration of Selected Cosmogenic and Anthropogenic Radionuclides in the Ground Layer of the Atmosphere (Polar and Mid-Latitudes Regions)
Authors: A. Burakowska, M. Piotrowski, M. Kubicki, H. Trzaskowska, R. Sosnowiec, B. Myslek-Laurikainen
Abstract:
The most important source of atmospheric radioactivity is the radionuclides generated by the interaction of primary and secondary cosmic radiation with the nuclei of nitrogen, oxygen, and carbon in the upper troposphere and lower stratosphere. This creates about thirty radioisotopes of more than twenty elements. For organisms, four of them are the most important: ³H, ⁷Be, ²²Na, and ¹⁴C. The natural radionuclides, which are present in the Earth's crust, also settle on dust and particles of water vapor. By this means, the derivatives of uranium and thorium and the long-lived ⁴⁰K get into the air. ¹³⁷Cs is the most widespread isotope introduced into the environment by humans. To determine the concentration of radionuclides in the atmosphere, high-volume air samplers were used, where the aerosol collection took place on a special filter fabric (Petrianov filter tissue FPP-15-1.5). In 2002, the high-volume air sampler AZA-1000, designed to operate in all weather conditions of the cold polar region, was installed at the Polish Polar Observatory of the Polish Academy of Sciences in Hornsund, Spitsbergen (77°00’N, 15°33’E). Since 1991 (with short breaks), the ASS-500 air sampler has been operating in Swider at the Kalinowski Geophysical Observatory of the Institute of Geophysics of the Polish Academy of Sciences (52°07’N, 21°15’E). The following radionuclide concentrations were obtained from both stations using gamma spectroscopy analysis: ⁷Be, ¹³⁷Cs, ¹³⁴Cs, ²¹⁰Pb, and ⁴⁰K. For the gamma spectroscopy analysis, HPGe (High-Purity Germanium) detectors were used. These data were compared with each other. The preliminary results gave evidence that the radioactivity measured in aerosols is not proportional to the amount of dust for either of the studied regions. Furthermore, the results indicate annual variability (seasonal fluctuations) as well as a decrease in the average activity of ⁷Be with increasing latitude. The content of ⁷Be in surface air also indicates a relationship with solar activity cycles.
Keywords: aerosols, air filters, atmospheric beryllium, environmental radionuclides, gamma spectroscopy, mid-latitude regions radionuclides, polar regions radionuclides, solar cycles
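As a hedged illustration of how an activity concentration is typically derived from an HPGe gamma spectrum (a generic textbook relation, not the stations' actual processing chain), the sketch below converts a net photopeak area into an activity concentration using detector efficiency, gamma emission probability, counting live time, and sampled air volume; all numerical values are assumed, and decay and self-absorption corrections are omitted for simplicity.

```python
def activity_concentration(net_counts, efficiency, emission_prob,
                           live_time_s, air_volume_m3):
    """Activity concentration [Bq/m^3] from a net gamma photopeak area.

    A = N_net / (eps * P_gamma * t_live * V)
    (decay and self-absorption corrections omitted for simplicity)
    """
    return net_counts / (efficiency * emission_prob * live_time_s * air_volume_m3)

# Assumed example for the 477.6 keV line of Be-7 (emission probability ~10.4%):
net_counts = 12500            # net counts in the photopeak (assumed)
efficiency = 0.025            # assumed full-energy peak efficiency at 477.6 keV
emission_prob = 0.104         # gamma emission probability of Be-7
live_time_s = 7 * 24 * 3600   # one week of counting (assumed)
air_volume_m3 = 50000.0       # assumed sampled air volume

a = activity_concentration(net_counts, efficiency, emission_prob,
                           live_time_s, air_volume_m3)
print(f"Be-7 activity concentration: {a * 1e6:.1f} uBq/m^3")
```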
Procedia PDF Downloads 141
8383 Ontology-Navigated Tutoring System for Flipped-Mastery Model
Authors: Masao Okabe
Abstract:
Nowadays, in Japan, a wide variety of students enter university, and one of the main roles of introductory courses for freshmen is to make such students well prepared for the subsequent intermediate courses. For that purpose, the flipped-mastery model is not enough, because the videos usually used in a flipped classroom are not adaptive and do not fit all freshmen with different academic performance. This paper proposes an ontology-navigated tutoring system called EduGraph. Using EduGraph, students can prepare for and review a class in a more flexibly personalizable way than with videos. By structuring learning materials through its ontology, EduGraph also helps students integrate what they learn as knowledge, and it makes learning materials sharable. EduGraph was used for an introductory course for freshmen. This application suggests that EduGraph is effective.
Keywords: adaptive e-learning, flipped classroom, mastery learning, ontology
Procedia PDF Downloads 280
8382 Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Pólya formed the first significant compilation in this field. That work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. They were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved via differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have then been extended and improved by various integral operators. These inequalities would be interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving the Copson and Hardy inequalities on time scales have appeared, yielding new special versions of them. A time scale is defined as a closed subset of the real numbers. The time-scale versions of these inequalities have received a lot of attention and have become a major field in both pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale inequalities of Copson type driven by the Steklov operator. They will be applied in the solution of the Cauchy problem for the wave equation. The proofs can be carried out by introducing restrictions on the operator in several cases. In addition, the obtained inequalities are derived by using some concepts of the time-scale setting, such as time-scales calculus, Fubini's theorem, and Hölder's inequality.
Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
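For reference, and as a standard statement rather than the paper's two-dimensional time-scale result, the classical integral Hardy inequality for p > 1 and a non-negative measurable function f on (0, ∞) reads as follows; the constant (p/(p-1))^p is known to be best possible, and the Copson and time-scale inequalities studied here generalize estimates of this type.

```latex
\int_{0}^{\infty} \left( \frac{1}{x}\int_{0}^{x} f(t)\,\mathrm{d}t \right)^{p} \mathrm{d}x
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_{0}^{\infty} f(x)^{p}\,\mathrm{d}x,
\qquad p > 1,\; f \ge 0.
```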
Procedia PDF Downloads 76
8381 Technoscience in the Information Society
Authors: A. P. Moiseeva, Z. S. Zavyalova
Abstract:
This paper focuses on the Technoscience phenomenon and its role in modern society. It gives a review of the latest research on Technoscience. Based on the works of Paul Forman, Bernadette Bensaude-Vincent, Bruno Latour, Maria Caramez Carlotto and others, the authors consider the concept of Technoscience, its specific character and prospects of its development.
Keywords: technoscience, information society, transdisciplinarity, European Technology Platforms
Procedia PDF Downloads 664
8380 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises Horizon 2050
Authors: Farzaneh Sasanpour, Saeed Amini Varaki
Abstract:
Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; therefore, given the characteristics of the risks, the resilience of cities depends on adopting an approach that responds to sensitive conditions in the risk management process. In the meantime, most resilience assessments have dealt with natural hazards, and less attention has been paid to pandemics. In the COVID-19 pandemic, the country of Iran, and especially the metropolis of Tehran, was not immune from the crisis caused by its effects and consequences and faced many challenges. One of the methods that can increase the resilience of the Tehran metropolis against possible crises in the future is futures studies. This research is applied in terms of type. The general pattern of the research is descriptive-analytical, and since it tries to relate the components, provide urban resilience indicators for pandemic crises, and explain the scenarios, its futures-studies method is exploratory. In order to extract and determine the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (COVID-19), the structural analysis of mutual effects and the MICMAC software were used. The primary factors and variables affecting the resilience of Tehran's urban areas were therefore organized into five main factors, including a physical-infrastructural factor (transportation, spatial and physical organization, streets and roads, multi-purpose development), with 39 variables in total, based on the mutual-effects analysis. Finally, the key factors and variables were categorized into five main areas: managerial-institutional with five variables, technology (smartness) with three variables, economic with two variables, socio-cultural with three variables, and physical-infrastructural with seven variables. These factors and variables have been used as the key factors and effective driving forces of the resilience of Tehran's urban areas against pandemic crises (COVID-19) in explaining and developing the scenarios. In order to develop the scenarios for the resilience of Tehran's urban areas against pandemic crises (COVID-19), intuitive logic, scenario planning (as one of the futures-research methods), and the Global Business Network (GBN) model were used. Finally, four scenarios were drawn up and selected with a creative method using the metaphor of weather conditions, which indicates the general outline of the conditions of the metropolis of Tehran in each situation. The scenarios for the Tehran metropolis were thus obtained in the form of four scenarios: 1) the solar scenario (optimal governance and management, leading in smart technology); 2) the cloud scenario (optimal governance and management, following in smart technology); 3) the dark scenario (unfavorable governance and management, leading in smart technology); and 4) the storm scenario (unfavorable governance and management, following in smart technology). The solar scenario shows the best situation and the storm scenario the worst situation for the Tehran metropolis. According to the findings obtained in this research, in order to achieve a better tomorrow for the metropolis of Tehran, city managers can use futures-research methods to form a coherent picture, with the long-term horizon of 2050, of all the factors and components of urban resilience against pandemic crises, and along this path provide the urban resilience movement and the platforms for upgrading and increasing the capacity to deal with crises, as well as the platforms necessary for the realization, development, and evolution of Tehran's urban areas in a way that guarantees long-term balance and stability in all dimensions and at all levels.
Keywords: future research, resilience, crisis, pandemic, COVID-19, Tehran
Procedia PDF Downloads 67
8379 Assessing Soft Skills In Accounting Programmes: Insights From South African University Lecturers
Authors: Dolly Nyaguthii Wanjau
Abstract:
This study contributes to our understanding of how lecturers assess soft skills in accounting programmes, with the intention of producing graduates that are better prepared for the world of work. Insights were obtained through semi-structured interviews at twelve South African universities that offer chartered accountant training and are accredited by SAICA. It was found that the lecturers assessed soft skills using traditional methods of assessment such as tests, assignments, and examinations. However, there were missed opportunities to embrace ICT tools in the assessment process, and this could be attributed to a lack of resources within the participating universities. Given the increasing use of digital tools for business activities, it is important that ICT tools be embraced as an inseparable part of soft skills, because employers are increasingly looking for accounting graduates with digital skills.
Keywords: accounting, assessment, ICT skills, SAICA, soft skills
Procedia PDF Downloads 129
8378 A Case Study of Clinicians’ Perceptions of Enterprise Content Management at Tygerberg Hospital
Authors: Temitope O. Tokosi
Abstract:
Healthcare is a human right. The sensitivity of health issues has necessitated the introduction of Enterprise Content Management (ECM) at district hospitals in the Western Cape Province of South Africa. The objective is to understand clinicians' perception of ECM at their workplace. The study is a descriptive case study design within a constructivist paradigm. It employed a phenomenological data analysis method using a pattern-matching, deductive-based analytical procedure. Purposive and snowball sampling techniques were applied in selecting participants. Clinicians expressed concerns and frustrations in using ECM, such as non-integration with other hospital systems, inadequate access points to ECM, incorrect labelling of notes and bar-coding that wastes time in finding information, system features and/or functions (such as search and edit) that are not possible, a lack of constant interaction and discussion between hospital management and clinicians, and unacceptably lengthy information turnaround times. Resolving these problems would involve a positive working relationship between hospital management and clinicians. In addition, prioritising the problems faced by clinicians according to relevance can ensure problem-solving that meets clinicians' expectations and the hospital's objectives. Clinicians' perceptions should draw attention from hospital management with regard to technology use. The study's results can be generalised across clinician groupings exposed to ECM at various district hospitals because of professional and hospital homogeneity.
Keywords: clinician, electronic content management, hospital, perception, technology
Procedia PDF Downloads 233
8377 Analysis of Engagement Methods in the College Classroom Post Pandemic
Authors: Marsha D. Loda
Abstract:
College enrollment is declining, and Generation Z, today's college students, are struggling. Before the pandemic, researchers characterized this generational cohort as unique. Gen Z has been called the most achievement-oriented generation, as they enjoy greater economic status, are more racially and ethnically diverse, and are better educated than any other generation. However, they are also the generation most likely to suffer from depression and anxiety. Gen Z has grown up largely with usually well-intentioned but overprotective parents who inadvertently kept them from learning life skills, likely impacting their ability to cope with and effectively manage challenges. The unprecedented challenges resulting from the pandemic upended their world and left them emotionally reeling. One of the ramifications of this for higher education is how to re-engage current Gen Z students in the classroom. This research presents qualitative findings from 24 single-spaced pages of verbatim comments from college students. The research questions concerned what helps them learn and what they abhor, as well as how to engage them with the university outside of the classroom to aid in retention. Students leave little doubt about what they want to experience in the classroom. In order of mention, students want discussion, to engage with questions, to hear how a topic relates to real life and the real world, to feel connections with the professor and fellow students, and to have an opportunity to give their opinions. They prefer a classroom that involves conversation, with interesting topics and active learning: "professor talks instead of lecturing"; "professor builds a connection with the classroom"; "I am engaged because it feels like a respectful conversation". Similarly, students are direct about what they dislike in a classroom. In order of frequency, students dislike teachers unenthusiastically reading word for word from notes or presentations, repeating the text without adding examples, or failing to address how to apply the information: "All lecture. I can read the book myself"; "Not taught how to apply the skill or lesson"; "Lectures the entire time. Lesson goes in one ear and out the other." Pertaining to engagement outside the classroom, Gen Z challenges higher education to step outside the box. They don't want to just hear from professionals in their field; they want to meet and interact with them. Perhaps because of their dependence on technology and pandemic isolation, they seem to reach out for assistance in forming social bonds: "I believe fun and social events are the best way to connect with students and get them involved. Cookouts, raffles, socials, or networking events would all most likely appeal to many students"; "Events… even if they aren't directly related to learning. Maybe like movie nights… doing meet ups at restaurants". Qualitative research suggests strategy. This research is rife with strategic implications to improve learning, increase engagement, and reduce drop-out rates among Generation Z higher education students. It also complements existing research on student engagement. With college enrollment declining by some 1.3 million students over the last two years, this research is both timely and important.
Keywords: college enrollment, generation Z, higher education, pandemic, student engagement
Procedia PDF Downloads 105
8376 Using Multiple Intelligences Theory to Develop Thai Language Skill
Authors: Bualak Naksongkaew
Abstract:
The purpose of this study was to compare pre- and post-test achievement in Thai language skills. The sample consisted of 40 tenth graders of the Secondary Demonstration School of Suan Sunandha Rajabhat University in the first semester of the academic year 2010. The researcher prepared the Thai lesson plans and the pre- and post-achievement tests, with the post-test administered at the end of the program. Data analyses were carried out using means, standard deviations, descriptive statistics, and an independent-samples t-test analysis for the comparison of the pre- and post-tests. The study showed that there was a statistically significant difference at α = 0.05; therefore, the use of multiple intelligences theory can develop Thai language skills. The results after using multiple intelligences theory in the Thai lessons were at a higher level than the standard.
Keywords: multiple intelligences theory, Thai language skills, development, pre- and post-test achievement
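As an illustrative sketch only (not the study's actual data), the snippet below shows a pre/post comparison of the kind described, computing means, standard deviations, and t-tests with SciPy on simulated scores for 40 pupils. An independent-samples test is shown because that is what the abstract names, alongside a paired test, which is the usual choice when the same pupils take both tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated Thai language skill scores for 40 pupils (assumed data).
pre = rng.normal(loc=60, scale=8, size=40)
post = pre + rng.normal(loc=7, scale=5, size=40)   # assumed improvement

print(f"pre : mean={pre.mean():.2f}, sd={pre.std(ddof=1):.2f}")
print(f"post: mean={post.mean():.2f}, sd={post.std(ddof=1):.2f}")

# Independent-samples t-test (as named in the abstract) ...
t_ind, p_ind = stats.ttest_ind(post, pre)
# ... and a paired t-test, appropriate when the same pupils take both tests.
t_rel, p_rel = stats.ttest_rel(post, pre)

alpha = 0.05
print(f"independent: t={t_ind:.2f}, p={p_ind:.4f}, significant={p_ind < alpha}")
print(f"paired     : t={t_rel:.2f}, p={p_rel:.4f}, significant={p_rel < alpha}")
```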
Procedia PDF Downloads 425
8375 Artificial Intelligence for Generative Modelling
Authors: Shryas Bhurat, Aryan Vashistha, Sampreet Dinakar Nayak, Ayush Gupta
Abstract:
As technology advances more towards high computational resources, there is a paradigm shift in the usage of these resources to optimize the design process. This paper discusses the usage of 'Generative Design using Artificial Intelligence' to build better models that adapt operations like selection, mutation, and crossover to generate results. The human mind thinks of the simplest approach while designing an object, but the artificial intelligence learns from the past and designs complex, optimized CAD models. Generative design takes the boundary conditions and comes up with multiple solutions, iterating to arrive at a sturdy design with the most optimal parameters for the given constraints, saving huge amounts of time and resources. The new production techniques at our disposal allow us to use additive manufacturing, 3D printing, and other innovative manufacturing techniques to save resources and design artistically engineered CAD models. This paper also discusses the genetic algorithm and the non-domination technique for choosing the right results, using biomimicry that has evolved over millions of years. The computer uses parametric models to generate newer models through an iterative approach and uses cloud computing to store these iterative designs. The later part of the paper compares topology optimization, which has previously been used to generate CAD models, with generative design. Finally, this paper shows the performance of the algorithms and how these algorithms help in designing resource-efficient models.
Keywords: genetic algorithm, biomimicry, generative modeling, non-dominant techniques
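As a minimal sketch of the selection, crossover, and mutation loop the abstract refers to (not the paper's generative-design system), the snippet below evolves a small vector of design parameters toward a toy objective; the objective function, bounds, and hyperparameters are assumed.

```python
import random

# Toy objective: minimize the squared distance to an assumed target design vector.
TARGET = [0.6, -0.3, 1.2, 0.0]

def fitness(individual):
    return -sum((x - t) ** 2 for x, t in zip(individual, TARGET))

def crossover(parent_a, parent_b):
    point = random.randrange(1, len(parent_a))   # one-point crossover
    return parent_a[:point] + parent_b[point:]

def mutate(individual, rate=0.2, scale=0.3):
    return [x + random.gauss(0, scale) if random.random() < rate else x
            for x in individual]

def evolve(pop_size=40, generations=100, n_params=len(TARGET)):
    population = [[random.uniform(-2, 2) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best design parameters:", [round(x, 3) for x in best])
```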
Procedia PDF Downloads 149
8374 Multi-Objective Optimization in Carbon Abatement Technology Cycles (CAT) and Related Areas: Survey, Developments and Prospects
Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele
Abstract:
An infinitesimal increase in performance can yield an immense reduction in operating and capital expenses in a power generation system. Therefore, constant studies are being carried out to improve both conventional and novel power cycles. Globally, power producers are constantly researching ways to minimize emissions and to collectively downsize the total cost rate of power plants. A substantial spurt of developmental technologies for low-carbon cycles has been suggested and studied; however, they all have their limitations and financial implications. In the area of carbon abatement in power plants, three major objectives conflict: the cost rate of the plant, the power output, and the environmental impact. An increase in one of these parameters directly affects the others, which poses a multi-objective problem. It is paramount to be able to discern the point where improving one objective affects another; hence the need for a Pareto-based optimization algorithm. A Pareto-based optimization algorithm helps to find those points where improving one objective begins to influence another objective negatively, and it stops there. The application of a Pareto-based optimization algorithm helps the user/operator/designer make an informed decision. This paper sheds more light on the areas in which multi-objective optimization has been applied to carbon abatement technologies in the last five years, along with developments and prospects.
Keywords: gas turbine, low carbon technology, pareto optimal, multi-objective optimization
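As a hedged sketch of the Pareto idea described above (not the survey's own tooling), the snippet below filters a set of candidate plant designs down to its non-dominated front for the three conflicting objectives, minimizing cost rate and environmental impact while maximizing power output; the candidate values are invented for illustration.

```python
# Each candidate: (cost_rate, power_output, environmental_impact) -- assumed values.
# We minimize cost rate and impact, and maximize power output.
candidates = [
    (100.0, 50.0, 30.0),
    (120.0, 65.0, 28.0),
    (110.0, 60.0, 35.0),
    ( 95.0, 45.0, 32.0),
    (130.0, 66.0, 27.0),
    (125.0, 60.0, 36.0),   # dominated by (120.0, 65.0, 28.0)
]

def dominates(a, b):
    """True if design `a` is at least as good as `b` in every objective and
    strictly better in at least one (cost and impact: lower is better;
    power: higher is better)."""
    no_worse = a[0] <= b[0] and a[1] >= b[1] and a[2] <= b[2]
    strictly_better = a[0] < b[0] or a[1] > b[1] or a[2] < b[2]
    return no_worse and strictly_better

pareto_front = [c for c in candidates
                if not any(dominates(other, c) for other in candidates)]
print("non-dominated designs:", pareto_front)
```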
Procedia PDF Downloads 791
8373 Combined Odd Pair Autoregressive Coefficients for Epileptic EEG Signals Classification by Radial Basis Function Neural Network
Authors: Boukari Nassim
Abstract:
This paper describes the use of odd pair autoregressive coefficients (Yule-Walker and Burg) for the feature extraction of electroencephalogram (EEG) signals. For the classification, the radial basis function neural network (RBFNN) is employed. The RBFNN is described by its architecture and its characteristics; the RBF is defined by the spread, which is modified to improve the results of the classification. Five types of EEG signals are defined for this work: Sets A and B for normal signals, Sets C and D for interictal signals, and Set E for ictal signals (these can be found in the Bonn University dataset). At the output, two-class problems are given (AC, AD, AE, BC, BD, BE, CE, DE); the best accuracy is calculated at 99% for the combined odd pair autoregressive coefficients. Our method is very effective for the diagnosis of epileptic EEG signals.
Keywords: epilepsy, EEG signals classification, combined odd pair autoregressive coefficients, radial basis function neural network
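As an illustrative sketch of the feature-extraction step only (not the paper's pipeline or its RBFNN), the snippet below estimates autoregressive coefficients for a simulated EEG-like segment with both the Yule-Walker and the Burg methods from statsmodels; the segment, sampling rate, and model order are assumed, and the resulting coefficient vectors would play the role of the features fed to a classifier.

```python
import numpy as np
from statsmodels.regression.linear_model import yule_walker, burg

rng = np.random.default_rng(1)

# Simulated 1-second "EEG" segment at 173.61 Hz (the Bonn dataset rate),
# generated here as a simple AR(2)-like process -- assumed data.
fs = 173.61
n = int(fs)
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2 * x[t - 1] - 0.6 * x[t - 2] + rng.normal()

order = 5  # assumed AR model order

rho_yw, sigma_yw = yule_walker(x, order=order)
rho_burg, sigma2_burg = burg(x, order=order)

# These coefficient vectors would form the feature vector for a classifier
# such as an RBF network in the setup the abstract describes.
print("Yule-Walker AR coefficients:", np.round(rho_yw, 3))
print("Burg AR coefficients       :", np.round(rho_burg, 3))
```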
Procedia PDF Downloads 346
8372 Uncovering the Relationship between EFL Students' Self-Concept and Their Willingness to Communicate in Language Classes
Authors: Seyedeh Khadijeh Amirian, Seyed Mohammad Reza Amirian, Narges Hekmati
Abstract:
The current study aims at examining the relationship between English as a foreign language (EFL) students' self-concept and their willingness to communicate (WTC) in EFL classes. To this effect, two questionnaires, namely the 'Willingness to Communicate' scale (MacIntyre et al., 2001) and the 'Self-Concept Scale' (Liu and Wang, 2005), were distributed among 174 (45 male and 129 female) Iranian EFL university students. Correlation and regression analyses were conducted to examine the relationship between the two variables. The results indicated that there was a significantly positive correlation between EFL students' self-concept and their WTC in EFL classes (p < 0.05). Moreover, the regression analyses indicated that self-concept has a significantly positive influence on students' WTC in language classes (B = .302, p < 0.05) and explains .302 of the variance in the dependent variable (WTC). The results are discussed with regard to individual differences in educational contexts, and implications are offered.
Keywords: EFL students, language classes, willingness to communicate, self-concept
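As a hedged illustration of the analysis described (run on simulated data, not the study's), the snippet below computes a Pearson correlation and a simple linear regression between a self-concept score and a WTC score with SciPy and statsmodels; the scales, sample values, and effect size are generated for the example.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 174  # sample size reported in the abstract

# Simulated questionnaire scores (assumed scales and effect size).
self_concept = rng.normal(loc=3.5, scale=0.6, size=n)
wtc = 0.3 * self_concept + rng.normal(loc=2.0, scale=0.5, size=n)

r, p = stats.pearsonr(self_concept, wtc)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

X = sm.add_constant(self_concept)        # intercept + predictor
model = sm.OLS(wtc, X).fit()
print(f"slope B = {model.params[1]:.3f}, R^2 = {model.rsquared:.3f}")
```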
Procedia PDF Downloads 126
8371 Comparison of the Thermal Behavior of Different Crystal Forms of Manganese(II) Oxalate
Authors: B. Donkova, M. Nedyalkova, D. Mehandjiev
Abstract:
Sparingly soluble manganese oxalate is an appropriate precursor for the preparation of nanosized manganese oxides, which have a wide range of technological applications. During the precipitation of manganese oxalate, three crystal forms can be obtained: α-MnC₂O₄.2H₂O (SG C2/c), γ-MnC₂O₄.2H₂O (SG P212121), and orthorhombic MnC₂O₄.3H₂O (SG Pcca). The thermolysis of α-MnC₂O₄.2H₂O has been extensively studied over the years, while the literature data for the other two forms are quite scarce. The aim of the present communication is to highlight the influence of the initial crystal structure on the decomposition mechanism of these three forms, their magnetic properties, the structure of the anhydrous oxalates, as well as the nature of the obtained oxides. For the characterization of the samples, XRD, SEM, DTA, TG, DSC, nitrogen adsorption, and in situ magnetic measurements were used. The dehydration proceeds in one step for α-MnC₂O₄.2H₂O and γ-MnC₂O₄.2H₂O, and in three steps for MnC₂O₄.3H₂O. The values of the dehydration enthalpy are 97, 149, and 132 kJ/mol, respectively, the last two being reported for the first time, to the best of our knowledge. The magnetic measurements show that at room temperature all samples are antiferromagnetic; however, during the dehydration of α-MnC₂O₄.2H₂O the exchange interaction is preserved, for MnC₂O₄.3H₂O it changes to ferromagnetic above 35°C, and for γ-MnC₂O₄.2H₂O it changes twice, from antiferromagnetic to ferromagnetic, above 70°C. The experimental results for the magnetic properties are in accordance with the computational results obtained with the WIEN2k code. The difference in the initial crystal structure of the forms used determines different changes in the specific surface area during dehydration and different extents of Mn(II) oxidation during decomposition in air, both being highest for α-MnC₂O₄.2H₂O. The isothermal decomposition of the different oxalate forms shows that the type and physicochemical properties of the oxides obtained at the same annealing temperature depend on the precursor used. Based on the results from the non-isothermal and isothermal experiments, and from the different methods used for characterization of the samples, a comparison of the nature, mechanism, and peculiarities of the thermolysis of the different crystal forms of manganese oxalate was made, which clearly reveals the influence of the initial crystal structure. Acknowledgment: 'Science and Education for Smart Growth', project BG05M2OP001-2.009-0028, COST Action MP1306 'Modern Tools for Spectroscopy on Advanced Materials', and project DCOST-01/18 (Bulgarian Science Fund).
Keywords: crystal structure, magnetic properties, manganese oxalate, thermal behavior
Procedia PDF Downloads 171
8370 Effect of Different Salt Concentrations and Temperatures on Seed Germination and Seedling Characters in Safflower (Carthamus tinctorius L.) Genotypes
Authors: Rahim Ada, Zamari Temory, Hasan Dalgic
Abstract:
The germination and seedling responses of seven safflower genotypes (the Dinçer, Remzibey, and Black Sun2 cultivars and the A19, F4, I1, and J19 lines) to different salinity concentrations (0, 5, 10, and 20 g l-1) and temperatures (10 and 20 °C) were evaluated in completely randomized factorial designs in the Department of Field Crops of Selcuk University, Konya, Turkey. Seeds in the control (distilled water) had, at 10 and 20 °C, the highest germination percentage (93.88 and 94.32%), shoot length (4.60 and 8.72 cm), root length (4.27 and 6.54 cm), shoot dry weight (22.37 mg and 25.99 mg), and root dry weight (2.22 and 2.47 mg). As the salt concentration increased, the values of all characters decreased. In this experiment, at the 20 g l-1 salt concentration, the germination percentage (21.28 and 26.66%), shoot length (1.32 and 1.35 cm), root length (1.04 and 1.10 cm), shoot dry weight (8.05 mg and 7.49 mg), and root dry weight (0.83 and 0.98 mg) were found at 10 and 20 °C, respectively.
Keywords: safflower, NaCl, temperature, shoot and root length, salt concentration
Procedia PDF Downloads 286
8369 Satisfaction of the Training at ASEAN Camp: E-Learning Knowledge and Application at Chantanaburi Province, Thailand
Authors: Sinchai Poolklai
Abstract:
The purpose of this research paper was to examine the level of satisfaction of the faculty members who participated in the ASEAN camp in Chantaburi, Thailand. The population of this study included all the faculty members of Suan Sunandha Rajabhat University who participated in the training and activities of the ASEAN camp during March 2014. The data from a total of 200 faculty members who answered the questionnaire were compiled using the SPSS program. Percentage, mean, and standard deviation were utilized in analyzing the data. The findings revealed that the mean satisfaction score was 4.37, with a standard deviation of 0.7810. Moreover, the mean scores can be used to rank the level of satisfaction with each of the following factors: lower cost, less time consumed, faster delivery, more effective learning, and lower environmental impact.
Keywords: ASEAN camp, e-learning, satisfaction, application
Procedia PDF Downloads 391
8368 Getting Out of the Box: Tangible Music Production in the Age of Virtual Technological Abundance
Authors: Tim Nikolsky
Abstract:
This paper seeks to explore the different ways in which music producers choose to embrace various levels of technology based on musical values, objectives, affordability, access, and workflow benefits. The current digital audio production workflow is questioned. Engineers and music producers of today are increasingly divorced from the tangibility of music production. Making music no longer requires you to reach over and turn a knob. Ideas of authenticity in music production are being redefined. Calculations from the mathematical algorithm with the pretty pictures are increasingly being chosen over hardware containing transformers and tubes. Are mouse clicks and movements equivalent or inferior to the master brush strokes we are seeking to conjure? We are making audio production decisions visually, by constantly looking at a screen rather than listening. Have we compromised our music objectives and values by removing the 'hands-on' nature of music making? DAW interfaces are making our musical decisions for us, not necessarily in our best interests. Technological innovation has presented opportunities as well as challenges for education. What do music production students actually need to learn in a formalised education environment, and to what extent do they need to know it? In this brave new world of omnipresent music creation tools, do we still need tangibility in music production? Interviews with prominent Australian music producers who work in a variety of fields are featured in this paper; they provide insight into answering these questions and move towards developing an understanding of how tangibility can be rediscovered in the next generation of music production.
Keywords: analogue, digital, digital audio workstation, music production, plugins, tangibility, technology, workflow
Procedia PDF Downloads 2718367 Adapting Cyber Physical Production Systems to Small and Mid-Size Manufacturing Companies
Authors: Yohannes Haile, Dipo Onipede, Jr., Omar Ashour
Abstract:
The main thrust of our research is to determine the Industry 4.0 readiness of small and mid-size manufacturing companies in our region and assist them in implementing Cyber Physical Production System (CPPS) capabilities. Adopting CPPS capabilities will help organizations realize improved quality, order delivery, throughput, new value creation, and reduced idle time of machines and work centers in their manufacturing operations. The key metrics for the assessment include the level of intelligence, internal and external connections, responsiveness to internal and external environmental changes, capabilities for customization of products with reference to cost, level of additive manufacturing, automation, and robotics integration, and capabilities to manufacture hybrid products in the near term, where near term is defined as 0 to 18 months. In our initial evaluation of several manufacturing firms that are profitable and successful in what they do, we found a low level of Physical-Digital-Physical (PDP) loop integration in their manufacturing operations, whereas 100% of the firms included in this research have specialized manufacturing core competencies that differentiate them from their competitors. The level of automation and robotics integration is in the low to medium range, where low is defined as less than 30% and medium as 30 to 70% of manufacturing operations including automation and robotics; however, there is a significant drive to include these capabilities at the present time. As for the intelligence and connection of manufacturing systems, both are observed to be low, with significant variance in tying manufacturing operations management to Enterprise Resource Planning (ERP). Furthermore, the integration of additive manufacturing in general, and 3D printing in particular, is observed to be low, but with significant upside for integrating it into manufacturing operations in the near future. To hasten the readiness of local and regional manufacturing companies for Industry 4.0 and their transition towards CPPS capabilities, our working group (ADMAR Working Group), in partnership with our university, has been engaging with local and regional manufacturing companies. The goal is to increase awareness, share know-how and capabilities, initiate joint projects, and investigate the possibility of establishing the Center for Cyber Physical Production Systems Innovation (C2P2SI). The center is intended to support local and regional university-industry research on implementing intelligent factories, enhance new value creation through disruptive innovations, support the development of hybrid and data-enhanced products, and foster the creation of digital manufacturing enterprises. All these efforts will enhance local and regional economic development and educate students with well-developed knowledge and applications of cyber physical manufacturing systems and Industry 4.0.Keywords: automation, cyber-physical production system, digital manufacturing enterprises, disruptive innovation, new value creation, physical-digital-physical loop
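The sketch below is a hypothetical illustration of the kind of readiness scoring the assessment metrics suggest; only the automation thresholds (low < 30%, medium 30-70%) come from the abstract, while the fields, flags, and example values are assumptions, not the working group's actual tool.

```python
# Hypothetical Industry 4.0 / CPPS readiness check. Thresholds for the
# automation level follow the abstract (low < 30%, medium 30-70%); all
# other fields and the example firm are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ReadinessAssessment:
    automation_pct: float         # share of operations with automation/robotics, 0-100
    pdp_loop: bool                # physical-digital-physical loop present
    erp_integration: bool         # operations management tied to ERP
    additive_manufacturing: bool  # 3D printing integrated in operations

    def automation_level(self) -> str:
        if self.automation_pct < 30:
            return "low"
        if self.automation_pct <= 70:
            return "medium"
        return "high"

    def cpps_gaps(self) -> list:
        gaps = []
        if not self.pdp_loop:
            gaps.append("establish a physical-digital-physical loop")
        if not self.erp_integration:
            gaps.append("tie operations management to ERP")
        if not self.additive_manufacturing:
            gaps.append("integrate additive manufacturing / 3D printing")
        return gaps

firm = ReadinessAssessment(automation_pct=25, pdp_loop=False,
                           erp_integration=True, additive_manufacturing=False)
print(firm.automation_level(), firm.cpps_gaps())
```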
Procedia PDF Downloads 1408366 Determination of Surface Deformations with Global Navigation Satellite System Time Series
Authors: Ibrahim Tiryakioglu, Mehmet Ali Ugur, Caglar Ozkaymak
Abstract:
The development of GNSS technology has led to increasingly widespread and successful applications of GNSS surveys for monitoring crustal movements. However, multi-period GPS survey solutions have not been applied to monitoring vertical surface deformation. This study uses the long-term GNSS time series that are required to determine vertical deformations. In recent years, surface deformations parallel and semi-parallel to the Bolvadin fault have occurred in Western Anatolia. These surface deformations have continued to occur in the Bolvadin settlement area, which is located mostly on alluvial ground. Due to these surface deformations, a number of cracks in buildings located in the residential areas and breaks in underground water and sewage systems have been observed. In order to determine the amount of vertical surface deformation, two continuous GNSS stations were established in the region; they have been operating since 2015 and 2017, respectively. In this study, GNSS observations from these two stations were processed with the GAMIT/GLOBK (GNSS Analysis Massachusetts Institute of Technology/GLOBal Kalman) program package to create coordinate time series. With the time series analyses, the behavior models of the GNSS stations (linear, periodic, etc.), the causes of these behaviors, and their mathematical models were determined. The results of the time series analysis of these two GNSS stations show approximately 50-80 mm/yr of vertical movement.Keywords: Bolvadin fault, GAMIT, GNSS time series, surface deformations
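As a rough sketch of the time series modelling step (not the GAMIT/GLOBK processing itself), the code below fits a linear trend plus an annual periodic term to a synthetic vertical coordinate series to estimate a rate in mm/yr; the data are placeholders, not the Bolvadin observations.

```python
# Minimal sketch: least-squares fit of a linear + annual periodic model to a
# daily vertical (up) coordinate series. Input data here are synthetic.
import numpy as np

def fit_vertical_velocity(t_years: np.ndarray, up_mm: np.ndarray) -> float:
    """Fit up(t) = a + b*t + c*sin(2*pi*t) + d*cos(2*pi*t); return rate b in mm/yr."""
    A = np.column_stack([
        np.ones_like(t_years),
        t_years,
        np.sin(2 * np.pi * t_years),
        np.cos(2 * np.pi * t_years),
    ])
    coeffs, *_ = np.linalg.lstsq(A, up_mm, rcond=None)
    return coeffs[1]

# Synthetic example: a -60 mm/yr subsidence signal with annual variation and noise.
t = np.arange(0, 3, 1 / 365.25)
rng = np.random.default_rng(1)
up = -60.0 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 2, t.size)
print(f"estimated vertical rate: {fit_vertical_velocity(t, up):.1f} mm/yr")
```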
Procedia PDF Downloads 1658365 Node Optimization in Wireless Sensor Network: An Energy Approach
Authors: Y. B. Kirankumar, J. D. Mallapur
Abstract:
Wireless Sensor Network (WSN) is an emerging technology with great potential for various low-cost applications, both for the mass public and for defence. Wireless sensor communication technology allows sensor nodes to join the network at random for particular applications, which can leave much of the simulation area uncovered, with only a few nodes located at far distances. The drawback of such a network is that additional energy is spent by nodes in densely populated locations, where more nodes than necessary serve a small communication distance, while regions with fewer nodes are adversely affected and the source node again spends additional energy transmitting a packet through its neighbours so that it reaches the destination. The proposed work develops an Energy Efficient Node Placement Algorithm (EENPA) to place the sensor nodes efficiently in the simulated area, with all nodes equally spaced along radial paths so that the maximum area is covered at equal distances. Compared to random placement, the total energy consumed by each node is lower, since the burden is shared equally rather than falling on a few far-located nodes and the nodes are distributed over the whole simulation area. The calculated network lifetime is also better than with random placement, hence increasing the network lifetime as well. Simulations were carried out in the QualNet simulator, and the results of the EENPA were compared with those obtained for random placement of nodes.Keywords: energy, WSN, wireless sensor network, energy approach
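The sketch below illustrates the equidistant radial placement idea behind EENPA against random placement, using a simplified hop-distance energy proxy; the geometry parameters and the proxy itself are assumptions, not the paper's QualNet model.

```python
# Minimal sketch of radial, equidistant node placement versus random placement.
# The energy proxy (sum of hop distances toward a central sink) is a
# simplification for illustration, not the paper's simulation model.
import numpy as np

def radial_placement(n_rings: int, nodes_per_ring: int, radius: float) -> np.ndarray:
    """Nodes equally spaced along radial paths, equidistant between rings."""
    points = []
    for ring in range(1, n_rings + 1):
        r = radius * ring / n_rings
        for k in range(nodes_per_ring):
            theta = 2 * np.pi * k / nodes_per_ring
            points.append((r * np.cos(theta), r * np.sin(theta)))
    return np.array(points)

def random_placement(n_nodes: int, radius: float, rng) -> np.ndarray:
    r = radius * np.sqrt(rng.uniform(size=n_nodes))  # uniform over the disc
    theta = rng.uniform(0, 2 * np.pi, size=n_nodes)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def hop_energy_proxy(points: np.ndarray) -> float:
    """Sum of distances from each node to its nearest neighbour that is closer to the sink."""
    total = 0.0
    for i, p in enumerate(points):
        others = np.vstack([points[np.arange(len(points)) != i], [[0.0, 0.0]]])
        inner = others[np.linalg.norm(others, axis=1) < np.linalg.norm(p)]
        total += np.linalg.norm(inner - p, axis=1).min() if len(inner) else np.linalg.norm(p)
    return total

rng = np.random.default_rng(42)
radial = radial_placement(n_rings=4, nodes_per_ring=8, radius=100.0)
random_pts = random_placement(n_nodes=len(radial), radius=100.0, rng=rng)
print("radial placement energy proxy:", round(hop_energy_proxy(radial), 1))
print("random placement energy proxy:", round(hop_energy_proxy(random_pts), 1))
```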
Procedia PDF Downloads 312