Search results for: disaster relief networks
2503 A Randomized Active Controlled Clinical Trial to Assess Clinical Efficacy and Safety of Tapentadol Nasal Spray in Moderate to Severe Post-Surgical Pain
Authors: Kamal Tolani, Sandeep Kumar, Rohit Luthra, Ankit Dadhania, Krishnaprasad K., Ram Gupta, Deepa Joshi
Abstract:
Background: Post-operative analgesia remains a clinical challenge, with central and peripheral sensitization playing a pivotal role in treatment-related complications and impaired quality of life. Centrally acting opioids offer poor risk benefit profile with increased intensity of gastrointestinal or central side effects and slow onset of clinical analgesia. The objective of this study was to assess the clinical feasibility of induction and maintenance therapy with Tapentadol Nasal Spray (NS) in moderate to severe acute post-operative pain. Methods: Phase III, randomized, active-controlled, non-inferiority clinical trial involving 294 cases who had undergone surgical procedures under general anesthesia or regional anesthesia. Post-surgery patients were randomized to receive either Tapentadol NS 45 mg or Tramadol 100mg IV as a bolus and subsequent 50 mg or 100 mg dose over 2-3 minutes. The frequency of administration of NS was at every 4-6 hours. At the end of 24 hrs, patients in the tramadol group who had a pain intensity score of ≥4 were switched to oral tramadol immediate release 100mg capsule until the pain intensity score reduced to <4. All patients who had achieved pain intensity ≤ 4 were shifted to a lower dose of either Tapentadol NS 22.5 mg or oral Tramadol immediate release 50mg capsule. The statistical analysis plan was envisaged as a non-inferiority trial involving comparison with Tramadol for Pain intensity difference at 60 minutes (PID60min), Sum of Pain intensity difference at 60 minutes (SPID60min), and Physician Global Assessment at 24 hrs (PGA24 hrs). Results: The per-protocol analyses involved 255 hospitalized cases undergoing surgical procedures. The median age of patients was 38.0 years. For the primary efficacy variables, Tapentadol NS was non-inferior to Inj/Oral Tramadol in relief of moderate to severe post-operative pain. On the basis of SPID60min, no clinically significant difference was observed between Tapentadol NS and Tramadol IV (1.73±2.24 vs. 1.64± 1.92, -0.09 [95% CI, -0.43, 0.60]). In the co-primary endpoint PGA24hrs, Tapentadol NS was non–inferior to Tramadol IV (2.12 ± 0.707 vs. 2.02 ±0.704, - 0.11[95% CI, -0.07, 0.28). However, on further assessment at 48hr, 72 hrs, and 120hrs, clinically superior pain relief was observed with the Tapentadol NS formulation that was statistically significant (p <0.05) at each of the time intervals. Secondary efficacy measures, including the onset of clinical analgesia and TOTPAR, showed non-inferiority to Tramadol. The safety profile and need for rescue medication were also similar in both the groups during the treatment period. The most common concomitant medications were anti-bacterial (98.3%). Conclusion: Tapentadol NS is a clinically feasible option for improved compliance as induction and maintenance therapy while offering a sustained and persistent patient response that is clinically meaningful in post-surgical settings.Keywords: tapentadol nasal spray, acute pain, tramadol, post-operative pain
Procedia PDF Downloads 248
2502 Knowledge Representation Based on Interval Type-2 CFCM Clustering
Authors: Lee Myung-Won, Kwak Keun-Chang
Abstract:
This paper is concerned with knowledge representation and the extraction of fuzzy if-then rules using Interval Type-2 Context-based Fuzzy C-Means (IT2-CFCM) clustering with the aid of fuzzy granulation. The proposed clustering algorithm is based on information granulation in the form of Interval Type-2 Fuzzy C-Means (IT2-FCM) clustering and estimates the cluster centers by preserving the homogeneity between the clustered patterns from the IT2 contexts produced in the output space. Furthermore, automatic knowledge representation can be obtained in the design of Radial Basis Function Networks (RBFN), Linguistic Models (LM), and Adaptive Neuro-Fuzzy Networks (ANFN) from numerical input-output data pairs; this paper focuses on the design of an ANFN. Experimental results on an energy-performance estimation problem show that the proposed method provides good knowledge representation and performance in comparison with previous works. Keywords: IT2-FCM, IT2-CFCM, context-based fuzzy clustering, adaptive neuro-fuzzy network, knowledge representation
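IT2-CFCM builds on fuzzy C-means. The minimal sketch below is plain type-1 FCM on synthetic data, i.e. the baseline the abstract's method extends, not the interval type-2, context-based variant itself; cluster count, fuzzifier, and data are assumptions for illustration.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard (type-1) fuzzy C-means; IT2-CFCM extends this basic scheme."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

if __name__ == "__main__":
    # Two synthetic blobs stand in for input-output data pairs.
    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + [5, 5]])
    centers, U = fuzzy_c_means(X, n_clusters=2)
    print(centers)
```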
Procedia PDF Downloads 322
2501 Modeling and Stability Analysis of Viral Propagation in Wireless Mesh Networking
Authors: Haowei Chen, Kaiqi Xiong
Abstract:
This paper aims to answer how malware propagates in Wireless Mesh Networks (WMNs) and how the communication radius and the distribution density of nodes affect the spreading process. This analysis is essential for devising network-wide strategies to counter malware. We answer these questions by developing an improved dynamical system that models malware propagation in an area where nodes are uniformly distributed. The proposed model captures both the spatial and temporal dynamics of the malware spreading process. Equilibrium and stability are also discussed based on the threshold of the system: if the threshold is less than one, the infected nodes disappear, and if the threshold is greater than one, the infected nodes asymptotically stabilize at the endemic equilibrium. Numerical simulations examine the effects of communication radius and node density in WMNs, which allows us to draw various insights that can be used to guide security defense. Keywords: Bluetooth security, malware propagation, wireless mesh networks, stability analysis
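The threshold behaviour described above can be illustrated with a minimal SIR-style sketch. The link between communication radius, node density, and the contact rate below is an assumption for illustration only; it is not the paper's spatio-temporal model.

```python
import numpy as np

def simulate_malware(beta, gamma, n_nodes=500, i0=5, t_max=200, dt=0.1):
    """SIR-style sketch: S susceptible, I infected, R recovered/patched nodes.
    beta = effective infection rate, gamma = recovery (patch) rate."""
    S, I, R = n_nodes - i0, i0, 0
    history = []
    for _ in range(int(t_max / dt)):
        new_inf = beta * S * I / n_nodes * dt
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append(I)
    return np.array(history)

# Illustrative (assumed) coupling of deployment parameters to the contact rate:
# a larger communication radius r or node density rho raises beta.
r, rho, contact_scale = 30.0, 0.002, 1e-3        # assumed units, illustration only
beta = contact_scale * rho * np.pi * r ** 2
gamma = 0.05
R0 = beta / gamma                                 # threshold: die-out if R0 < 1
print(f"R0 = {R0:.2f}", "-> endemic" if R0 > 1 else "-> dies out")
print("peak infected:", simulate_malware(beta, gamma).max())
```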
Procedia PDF Downloads 98
2500 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks
Authors: Tugba Bayoglu
Abstract:
Aerodynamic parameter estimation is very crucial in missile design phase, since accurate high fidelity aerodynamic model is required for designing high performance and robust control system, developing high fidelity flight simulations and verification of computational and wind tunnel test results. However, in literature, there is not enough missile aerodynamic parameter identification study for three main reasons: (1) most air to air missiles cannot fly with constant speed, (2) missile flight test number and flight duration are much less than that of fixed wing aircraft, (3) variation of the missile aerodynamic parameters with respect to Mach number is higher than that of fixed wing aircraft. In addition to these challenges, identification of aerodynamic parameters for high wind angles by using classical estimation techniques brings another difficulty in the estimation process. The reason for this, most of the estimation techniques require employing polynomials or splines to model the behavior of the aerodynamics. However, for the missiles with a large variation of aerodynamic parameters with respect to flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, in this study, it is aimed to solve nonlinear aerodynamic parameter identification problem for a supersonic air to air missile by using Artificial Neural Networks. The method proposed will be tested by using simulated data which will be generated with a six degree of freedom missile model, involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, by using the flight variables and measurements, the parameters will be estimated. Finally, the prediction accuracy will be investigated.Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification
Procedia PDF Downloads 279
2499 Embedded Hw-Sw Reconfigurable Techniques For Wireless Sensor Network Applications
Authors: B. Kirubakaran, C. Rajasekaran
Abstract:
Reconfigurable techniques are used in many engineering and industrial applications for efficient data transmission through wireless sensor networks. Nowadays, most industrial applications aim to minimize size and cost, and runtime reconfiguration avoids unwanted hangs and delays in system performance. The Field Programmable Gate Array (FPGA) is one of the most efficient reconfigurable devices and is widely used for hardware and software reconfiguration. The main objective of this work is that any changes made to the hardware and software at runtime should not affect the currently running process: the changes are applied in parallel, while also addressing cost and power-transmission problems during data transmission and reception. An analog temperature sensor serves as the input to a PIC controller, through which the FPGA-based digital sensors are controlled in a generalized manner. Keywords: field programmable gate array, peripheral interrupt controller, runtime reconfigurable techniques, wireless sensor networks
Procedia PDF Downloads 407
2498 Comparative Analysis of Data Gathering Protocols with Multiple Mobile Elements for Wireless Sensor Network
Authors: Bhat Geetalaxmi Jairam, D. V. Ashoka
Abstract:
Wireless Sensor Networks are used in many applications to collect sensed data from different sources. Sensed data has to be delivered towards the sink over the sensors' wireless interface using multi-hop communication. Data collection in wireless sensor networks consumes energy, and energy consumption is the major constraint in a WSN; reducing energy consumption while increasing the amount of generated data is a great challenge. In this paper, we have implemented two data-gathering protocols with multiple mobile sinks/elements to collect data from sensor nodes. The first is Energy-Efficient Data Gathering with Tour Length-Constrained Mobile Elements in Wireless Sensor Networks (EEDG), in which the mobile sinks use a vehicle-routing protocol to collect data. The second is An Intelligent Agent-based Routing Structure for Mobile Sinks in WSNs (IAR), in which the mobile sinks use Prim's algorithm to collect data. We have implemented the concepts common to both protocols, such as deployment of mobile sinks, generation of the visiting schedule, and collection of data from the cluster members, and compared the performance of both protocols using statistics on parameters such as delay, packet drop, packet delivery ratio, energy available, and control overhead. We conclude that EEDG is more efficient than the IAR protocol, but with a few limitations, including unaddressed issues such as redundancy removal, idle listening, and the mobile sink's pause/wait state at the node. In future work, we plan to concentrate on these limitations to arrive at a new energy-efficient protocol that will help improve the lifetime of the WSN. Keywords: aggregation, consumption, data gathering, efficiency
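The IAR protocol in this comparison builds the mobile sink's routing structure with Prim's algorithm. A self-contained sketch over illustrative cluster-head coordinates (the coordinates and units are assumptions, not data from the study):

```python
import heapq
import math

def prim_mst(coords):
    """Prim's algorithm on the complete graph of node coordinates (Euclidean weights)."""
    n = len(coords)
    dist = lambda a, b: math.dist(coords[a], coords[b])
    visited = [False] * n
    edges = []
    heap = [(0.0, 0, 0)]                       # (weight, parent, node), start at node 0
    while heap and len(edges) < n - 1:
        w, parent, u = heapq.heappop(heap)
        if visited[u]:
            continue
        visited[u] = True
        if u != parent:
            edges.append((parent, u, w))       # accepted tree edge
        for v in range(n):
            if not visited[v]:
                heapq.heappush(heap, (dist(u, v), u, v))
    return edges

# Illustrative cluster-head coordinates (metres) for a mobile sink to service.
cluster_heads = [(0, 0), (40, 10), (15, 35), (60, 50), (80, 20)]
for a, b, w in prim_mst(cluster_heads):
    print(f"link {a} -> {b}: {w:.1f} m")
```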
Procedia PDF Downloads 497
2497 Mobile Network Users Amidst Ultra-Dense Networks in 5G Using an Improved Coordinated Multipoint (CoMP) Technology
Authors: Johnson O. Adeogo, Ayodele S. Oluwole, O. Akinsanmi, Olawale J. Olaluyi
Abstract:
In 5G networks, supporting very high traffic density, especially in densely populated areas, is one of the key requirements. Radiation reduction becomes one of the major concerns for securing the future life of mobile network users in ultra-dense network areas using an improved coordinated multipoint technology. Coordinated Multi-Point (CoMP) is based on transmission and/or reception at multiple separated points with improved coordination among them to actively manage the interference for the users. Small cells have two major objectives. First, they provide good coverage and/or performance: network users can maintain a good-quality signal by connecting directly to the cell. Second, CoMP involves the use of multiple base stations (MBS) that cooperate by transmitting and/or receiving at the same time in order to reduce the possibility of increased electromagnetic radiation. Therefore, the influence of a screen guard and rubber cover on the mobile transceiver, as one major piece of equipment emitting electromagnetic radiation to mobile network users amidst ultra-dense 5G networks, was investigated. The results were compared with the same mobile transceivers without screen guards and rubber covers under the same network conditions. A 5 cm distance from the mobile transceivers was measured with the help of a ruler, and the intensity of Radio Frequency (RF) radiation was measured using an RF meter. The results show that the intensity of radiation from the various mobile transceivers without screen guards and rubber covers was higher than from the mobile transceivers with screen guards and rubber covers when a call conversation was active at both ends. Keywords: ultra-dense networks, mobile network users, 5G, coordinated multi-point
Procedia PDF Downloads 103
2496 Instant Fire Risk Assessment Using Artificial Neural Networks
Authors: Tolga Barisik, Ali Fuat Guneri, K. Dastan
Abstract:
Major industrial facilities have a high potential for fire risk. In particular, the indices used for the detection of hidden fire are very effective in preventing a fire from becoming dangerous in its initial stage. These indices provide the opportunity for prevention or early intervention by determining the stage of the fire, the potential hazard, and the type of combustion agent from the percentage values of the ambient air components. In this system, a multi-layer perceptron (supervised-learning) artificial neural network is modeled with the determined input data and trained using the Levenberg-Marquardt algorithm, following the modeling methods in the literature. The actual values produced by the indices are compared with the outputs produced by the network. Using the neural network and the curves created from the resulting values, the feasibility of performance determination is investigated. Keywords: artificial neural networks, fire, Graham Index, Levenberg-Marquardt algorithm, oxygen decrease percentage index, risk assessment, Trickett Index
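A minimal sketch of the kind of gas-percentage-to-index mapping a perceptron can learn. scikit-learn has no Levenberg-Marquardt solver, so L-BFGS is used here as a stand-in, and both the gas data and the target are synthetic placeholders rather than the paper's fire indices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: columns are ambient O2, CO, CO2, N2 percentages.
X = rng.uniform([15.0, 0.0, 0.0, 75.0], [21.0, 2.0, 5.0, 80.0], size=(500, 4))
# Placeholder target standing in for a fire-status index computed from the gases.
y = 10 * X[:, 1] + 2 * X[:, 2] - 0.5 * (21.0 - X[:, 0]) + rng.normal(0, 0.1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# L-BFGS as a stand-in for Levenberg-Marquardt (not available in scikit-learn).
model = MLPRegressor(hidden_layer_sizes=(16, 8), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```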
Procedia PDF Downloads 137
2495 Effectiveness of New Digital Tools on Implementing Quality Management System: An Exploratory Study of French Companies
Authors: Takwa Belwakess
Abstract:
With the wave of digitization sweeping the modern world, communication tools have taken their place in the world of business. For organizations, being part of the digital era necessarily involves an evolution of management style, mainly in process management, also known as the quality management system (QMS). For more than 50 years, quality management standards have been adopted by organizations to demonstrate their operational and financial performance. We believe that achieving a high level of communication can lead to better quality management and greater customer satisfaction, which is essential to ensure long-term competitiveness. In this paper, a questionnaire survey was developed to investigate the use of collaboration tools such as Content Management Systems and Social Networks. Data from more than 100 companies based in France was analyzed; the results show that adopting new digital communication tools while applying quality management practices over a reasonable period contributed to a better implementation of the QMS and better business performance. Keywords: communication tools, content management system, digital, effectiveness, French companies, quality management system, quality management practices, social networks
Procedia PDF Downloads 266
2494 NFResNet: Multi-Scale and U-Shaped Networks for Deblurring
Authors: Tanish Mittal, Preyansh Agrawal, Esha Pahwa, Aarya Makwana
Abstract:
Multi-scale and U-shaped networks are widely used in various image restoration problems, including deblurring. Keeping in mind the wide range of applications, we present a comparison of these architectures and their effects on image deblurring. We also introduce a new block called NFResblock, which consists of a Fast Fourier Transform layer and a series of modified Non-Linear Activation Free Blocks. Based on these architectures and additions, we introduce NFResnet and NFResnet+, which are modified multi-scale and U-Net architectures, respectively. We use three different loss functions to train these architectures: Charbonnier Loss, Edge Loss, and Frequency Reconstruction Loss. Extensive experiments on the Deep Video Deblurring dataset, along with ablation studies for each component, are presented in this paper. The proposed architectures achieve a considerable increase in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) values. Keywords: multi-scale, U-Net, deblurring, FFT, resblock, NAF-block, NFResnet, Charbonnier, edge, frequency reconstruction
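A minimal PyTorch sketch of the Charbonnier loss named above; the epsilon value is assumed, and the Edge and Frequency Reconstruction losses are not shown.

```python
import torch
import torch.nn as nn

class CharbonnierLoss(nn.Module):
    """Charbonnier (smooth, L1-like) loss: mean of sqrt((x - y)^2 + eps^2)."""
    def __init__(self, eps: float = 1e-3):   # eps value is an assumption
        super().__init__()
        self.eps = eps

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        diff = pred - target
        return torch.mean(torch.sqrt(diff * diff + self.eps ** 2))

# Usage on dummy restored / ground-truth batches.
loss_fn = CharbonnierLoss()
restored = torch.rand(2, 3, 64, 64, requires_grad=True)
ground_truth = torch.rand(2, 3, 64, 64)
loss = loss_fn(restored, ground_truth)
loss.backward()
print(float(loss))
```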
Procedia PDF Downloads 136
2493 Adaptive Energy-Aware Routing (AEAR) for Optimized Performance in Resource-Constrained Wireless Sensor Networks
Authors: Innocent Uzougbo Onwuegbuzie
Abstract:
Wireless Sensor Networks (WSNs) are crucial for numerous applications, yet they face significant challenges due to resource constraints such as limited power and memory. Traditional routing algorithms like Dijkstra, Ad hoc On-Demand Distance Vector (AODV), and Bellman-Ford, while effective in path establishment and discovery, are not optimized for the unique demands of WSNs due to their large memory footprint and power consumption. This paper introduces the Adaptive Energy-Aware Routing (AEAR) model, a solution designed to address these limitations. AEAR integrates reactive route discovery, localized decision-making using geographic information, energy-aware metrics, and dynamic adaptation to provide a robust and efficient routing strategy. We present a detailed comparative analysis using a dataset of 50 sensor nodes, evaluating power consumption, memory footprint, and path cost across AEAR, Dijkstra, AODV, and Bellman-Ford algorithms. Our results demonstrate that AEAR significantly reduces power consumption and memory usage while optimizing path weight. This improvement is achieved through adaptive mechanisms that balance energy efficiency and link quality, ensuring prolonged network lifespan and reliable communication. The AEAR model's superior performance underlines its potential as a viable routing solution for energy-constrained WSN environments, paving the way for more sustainable and resilient sensor network deployments.Keywords: wireless sensor networks (WSNs), adaptive energy-aware routing (AEAR), routing algorithms, energy, efficiency, network lifespan
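The abstract describes AEAR only at a high level. The sketch below illustrates the general idea of folding an energy term into the link cost of a shortest-path search (plain Dijkstra with a composite weight); the weighting and topology are assumptions, not the authors' metric.

```python
import heapq

def energy_aware_path(graph, residual_energy, src, dst, alpha=0.5):
    """Dijkstra over a composite cost: link cost plus a penalty for relaying
    through nodes with low residual energy. The alpha-weighted penalty is an
    illustrative assumption, not the AEAR metric itself."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, link_cost in graph.get(u, []):
            penalty = alpha / max(residual_energy[v], 1e-6)   # scarce energy -> costly
            nd = d + link_cost + penalty
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

# Illustrative 5-node topology: graph[u] = [(neighbour, link cost), ...]
graph = {0: [(1, 1.0), (2, 1.2)], 1: [(3, 1.0)], 2: [(3, 0.8)], 3: [(4, 1.0)]}
residual = {0: 1.0, 1: 0.1, 2: 0.9, 3: 0.8, 4: 1.0}   # joules remaining (illustrative)
print(energy_aware_path(graph, residual, 0, 4))       # avoids the low-energy node 1
```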
Procedia PDF Downloads 36
2492 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks
Authors: Afnan Al-Romi, Iman Al-Momani
Abstract:
The urge of applying the Software Engineering (SE) processes is both of vital importance and a key feature in critical, complex large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with this are risks, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and in Wireless Sensor Networks (WSNs) in particular. WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short-ranges. The distribution of sensor nodes in an open environment that could be unattended in addition to the resource constraints in terms of processing, storage and power, make such networks in stringent limitations such as lifetime (i.e. period of operation) and security. The importance of WSN applications that could be found in many militaries and civilian aspects has drawn the attention of many researchers to consider its security. To address this important issue and overcome one of the main challenges of WSNs, security solution systems have been developed by researchers. Those solutions are software-based network Intrusion Detection Systems (IDSs). However, it has been witnessed, that those developed IDSs are neither secure enough nor accurate to detect all malicious behaviours of attacks. Thus, the problem is the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or even worse, leading to detection failure, as illustrated in the previous studies. Also, another problem is energy consumption in WSNs caused by IDS. So, in other words, not all requirements are implemented then traced. Moreover, neither all requirements are identified nor satisfied, as for some requirements have been compromised. The drawbacks in the current IDS are due to not following structured software development processes by researches and developers when developing IDS. Consequently, they resulted in inadequate requirement management, process, validation, and verification of requirements quality. Unfortunately, WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will be expanded as technology evolves and spreads in industrial applications. Therefore, this paper will study the importance of Requirement Engineering when developing IDSs. Also, it will study a set of existed IDSs and illustrate the absence of Requirement Engineering and its effect. Then conclusions are drawn in regard of applying requirement engineering to systems to deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy and reliability.Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN
Procedia PDF Downloads 322
2491 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks, which aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying the adversarial attacks and defenses on DNNs for image classification. There are two types of adversarial attacks studied which are fast gradient sign method (FGSM) attack and projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. The adversarial attack slightly alters the image to move over the decision boundary, causing the DNN to misclassify the image. FGSM attack obtains the gradient with respect to the image and updates the image once based on the gradients to cross the decision boundary. PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the target attack. This adversarial attack is designed to make the machine classify an image to a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples, we can explicitly let the neural network learn from the adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use a stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that both FGSM and PGD training do not affect the accuracy of clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defence are overall significantly more effective than FGSM methods.Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
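A minimal PyTorch sketch of the FGSM step described above (one signed-gradient step on the input). The model, epsilon, and data here are illustrative, not the MNIST classifier used in the experiments.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, eps=0.1):
    """One-step FGSM: perturb inputs along the sign of the input gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()      # push inputs across the decision boundary
    return adv.clamp(0.0, 1.0).detach()          # keep pixels in the valid range

# Toy usage with an untrained classifier on MNIST-shaped inputs (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x, y, eps=0.1)
print("max per-pixel change:", float((x_adv - x).abs().max()))
```

PGD follows the same pattern but repeats a smaller step several times and projects back into the epsilon-ball around the original image after each update.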
Procedia PDF Downloads 195
2490 Analysis of Nanoscale Materials and Devices for Future Communication and Telecom Networks in the Gas Refinery
Authors: Mohamad Bagher Heidari, Hefzollah Mohammadian
Abstract:
New discoveries in materials on the nanometer length scale are expected to play an important role in addressing ongoing and future challenges in the field of communication. Devices and systems for ultra-high-speed short- and long-range communication links, portable and power-efficient computing devices, high-density memory and logic, ultra-fast interconnects, and autonomous and robust energy-scavenging devices for accessing ambient intelligence and needed information will critically depend on the success of next-generation emerging nanomaterials and devices. This article presents some exciting recent developments in nanomaterials that have the potential to play a critical role in the development and transformation of future intelligent communication and telecom networks in the gas refinery. The industry is benefiting from nanotechnology advances with numerous applications, including smarter sensors, logic elements, computer chips, memory storage devices, and optoelectronics. Keywords: nanomaterial, intelligent communication, nanoscale, nanophotonic, telecom
Procedia PDF Downloads 333
2489 In situ Polymerization and Properties of Biobased Polyurethane/Epoxy Interpenetrating Network Nanocomposites
Authors: Aiswarea Mathew, Smita Mohanty, Jr., S. K. Nayak
Abstract:
Polyurethane networks based on castor oil (CO) as a renewable-resource polyol were synthesized. Polyurethane/epoxy resin interpenetrating network nanocomposites containing modified montmorillonite organoclay (C30B-PU/EP nanocomposites) were prepared by an in situ intercalation method. Conventional spectroscopic characterization of the synthesized samples using FT-IR confirms the existence of the proposed castor oil based PU structure and also shows that strong interactions exist between C30B and the EP/PU matrix. The degree of dispersion of C30B in the EP/PU matrix was characterized by the X-ray diffraction (XRD) method. Scanning electron microscopy analysis showed that the interpenetrating process of PU and EP increases the exfoliation degree of C30B and improves the compatibility and phase structure of the polyurethane/epoxy resin interpenetrating polymer networks (PU/EP IPNs). The thermal stability improves compared to the polyurethane alone when the PU/EP IPN is formed. Mechanical properties, including Young's modulus and tensile strength, showed marked improvement with the addition of C30B. Keywords: castor oil, epoxy, montmorillonite, polyurethane
Procedia PDF Downloads 400
2488 Geographic Information System and Ecotourism Sites Identification of Jamui District, Bihar, India
Authors: Anshu Anshu
Abstract:
In the red corridor famed for the Left Wing Extremism, lies small district of Jamui in Bihar, India. The district lies at 24º20´ N latitude and 86º13´ E longitude, covering an area of 3,122.8 km2 The undulating topography, with widespread forests provides pristine environment for invigorating experience of tourists. Natural landscape in form of forests, wildlife, rivers, and cultural landscape dotted with historical and religious places is highly purposive for tourism. The study is primarily related to the identification of potential ecotourism sites, using Geographic Information System. Data preparation, analysis and finally identification of ecotourism sites is done. Secondary data used is Survey of India Topographical Sheets with R.F.1:50,000 covering the area of Jamui district. District Census Handbook, Census of India, 2011; ERDAS Imagine and Arc View is used for digitization and the creation of DEM’s (Digital Elevation Model) of the district, depicting the relief and topography and generate thematic maps. The thematic maps have been refined using the geo-processing tools. Buffer technique has been used for the accessibility analysis. Finally, all the maps, including the Buffer maps were overlaid to find out the areas which have potential for the development of ecotourism sites in the Jamui district. Spatial data - relief, slopes, settlements, transport network and forests of Jamui District were marked and identified, followed by Buffer Analysis that was used to find out the accessibility of features like roads, railway stations to the sites available for the development of ecotourism destinations. Buffer analysis is also carried out to get the spatial proximity of major river banks, lakes, and dam sites to be selected for promoting sustainable ecotourism. Overlay Analysis is conducted using the geo-processing tools. Digital Terrain Model (DEM) generated and relevant themes like roads, forest areas and settlements were draped on the DEM to make an assessment of the topography and other land uses of district to delineate potential zones of ecotourism development. Development of ecotourism in Jamui faces several challenges. The district lies in the portion of Bihar that is part of ‘red corridor’ of India. The hills and dense forests are the prominent hideouts and training ground for the extremists. It is well known that any kind of political instability, war, acts of violence directly influence the travel propensity and hinders all kind of non-essential travels to these areas. The development of ecotourism in the district can bring change and overall growth in this area with communities getting more involved in economically sustainable activities. It is a known fact that poverty and social exclusion are the main force that pushes people, resorting towards violence. All over the world tourism has been used as a tool to eradicate poverty and generate good will among people. Tourism, in sustainable form should be promoted in the district to integrate local communities in the development process and to distribute fruits of development with equity.Keywords: buffer analysis, digital elevation model, ecotourism, red corridor
Procedia PDF Downloads 259
2487 Frequency Distribution and Assertive Object Theory: An Exploration of the Late Bronze Age Italian Ceramic Landscape
Authors: Sara Fioretti
Abstract:
In the 2nd millennium BCE, maritime networks became essential to the Mediterranean lifestyle, creating an interconnected world. This phenomenon of interconnected cultures has often been misinterpreted as an “effect” of the Mycenaean “influence” without considering the complexity and role of regional and cross-cultural exchanges. This paper explores the socio-economic relationships, in both cross-cultural and potentially inter-regional settings, present within the archaeological repertoire of the southern Italian Late Bronze Age (LBA 1600 -1140 BCE). The emergence of economic relations within the connectivity of the regional settlements is explored through ceramic contexts found in the case studies Punta di Zambrone, Broglio di Trebisacce, and Nuraghe Antigori. This work-in-progress research is situated in the shifting theoretical views of the last ten years that discuss the Late Bronze Age’s connectivity through Social Networks, Entanglement, and Assertive Objects combined with a comparative statistical study of ceramic frequency distribution. Applying these theoretical frameworks with a quantitative approach demonstrates the specific regional economic relationships that shaped the cultural interactions of the Late Bronze Age. Through this intersection of theory and statistical analysis, the case studies establish a small percentage of pottery as imported, whilst assertive productions have a relatively higher quantity. Overall, the majority still adheres to regional Italian traditions. Therefore, we can dissect the rhizomatic relationships cultivated by the Italian coasts and Mycenaeans and their roles within their networks through the intersection of theoretical and statistical analysis. This research offers a new perspective on the connectivity of the Late Bronze Age relational structures.Keywords: late bronze age, mediterranean archaeology, exchanges and trade, frequency distribution of ceramic assemblages
Procedia PDF Downloads 41
2486 Developing Improvements to Multi-Hazard Risk Assessments
Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson
Abstract:
This paper outlines the approaches taken to assess multi-hazard assessments. There is currently confusion in assessing multi-hazard impacts, and so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, and analysis of current multi-hazard assessments and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to undertake a straightforward approach. The paper is significant as it helps to interpret the various approaches and concludes with the preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes it is often compounded by additional cascading hazards, thus people would confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) or cascading human-made hazards (for example, Natural Hazard Triggering Technological disasters (Natech) such as fire, explosion, toxic release). Multi-hazards have a more destructive impact on urban areas than one hazard alone. In addition, climate change is creating links between different disasters such as causing landslide dams and debris flows leading to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time. However, recently sophisticated multi-hazard assessments have started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment under consideration. This paper aims to review the multi-hazard assessment methods through articles published to date and categorize the strengths and disadvantages of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of using multi-hazard risk assessments. In order to assess multi-hazard risk assessments, first, the current multi-hazard risk assessment methods were described. Next, the drawbacks of these multi-hazard risk assessments were outlined. Finally, the improvements to current multi-hazard risk assessments to date were summarised. Generally, the main problem of multi-hazard risk assessment is to make a valid assumption of risk from the interactions of different hazards. Currently, risk assessment studies have started to assess multi-hazard situations, but drawbacks such as uncertainty and lack of data show the necessity for more precise risk assessment. It should be noted that ignoring or partial considering multi-hazards in risk assessment will lead to an overestimate or overlook in resilient and recovery action managements.Keywords: cascading hazards, disaster assessment, mullti-hazards, risk assessment
Procedia PDF Downloads 112
2485 Neural Network Approach For Clustering Host Community: Based on Perceptions Toward Tourism, Their Satisfaction Level and Demographic Attributes in Iran (Lahijan)
Authors: Nasibeh Mohammadpour, Ali Rajabzadeh, Adel Azar, Hamid Zargham Borujeni
Abstract:
Generally, the development of an industry depends on the support of its stakeholders and beneficiaries. One of the most important stakeholders in the tourism industry (which has become one of the most lucrative and employment-generating activities at the international level) is the host community at the tourist destination, which both affects and is affected by the industry's development. Recognizing the host community and its segments is important for gaining their support for future decisions and policy making. In order to identify these segments, this study clusters residents using tools designed to cope with human complexity and capable of modeling and generalizing complex systems without needing initial cluster seeds, unlike classic methods; neural networks can meet these expectations. The research was planned to design a neural-network-based mathematical model for clustering the host community effectively according to multiple criteria and to identify differences among segments. To achieve this goal, the residents were segmented by demographic characteristics, their attitude towards tourism development, their level of satisfaction, and the type of support they give in this field. The applied method is the self-organizing neural network, and the results were compared with K-means. As the results show, the Self-Organizing Map (SOM) method provides much better results in terms of the cophenetic correlation and between-cluster variance coefficients. Based on these criteria, the host community is divided into five segments with unique and distinctive features, which represent the best configuration (compared with the other options), with a cophenetic correlation coefficient of 0.8769 and a between-cluster variance of 0.1412. Keywords: artificial neural network, clustering, resident, SOM, tourism
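A minimal self-organizing map sketch on synthetic respondent features. The grid size, learning-rate schedule, and feature set are assumptions for illustration, not the study's settings.

```python
import numpy as np

def train_som(X, grid=(5, 5), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal rectangular-grid SOM: winner search plus Gaussian neighbourhood update."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.random((rows, cols, X.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        lr = lr0 * np.exp(-t / n_iter)           # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)     # shrinking neighbourhood
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(axis=2)), (rows, cols))
        d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]   # neighbourhood function
        W += lr * h * (x - W)
    return W

# Synthetic respondents: e.g. scaled age, attitude score, satisfaction, support type.
X = np.random.rand(300, 4)
W = train_som(X)
winners = [np.unravel_index(np.argmin(((W - x) ** 2).sum(axis=2)), W.shape[:2]) for x in X]
print("respondents mapped to", len(set(winners)), "grid cells")
```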
Procedia PDF Downloads 183
2484 Trajectories of PTSD from 2-3 Years to 5-6 Years among Asian Americans after the World Trade Center Attack
Authors: Winnie Kung, Xinhua Liu, Debbie Huang, Patricia Kim, Keon Kim, Xiaoran Wang, Lawrence Yang
Abstract:
Considerable Asian Americans were exposed to the World Trade Center attack due to the proximity of the site to Chinatown and a sizeable number of South Asians working in the collapsed and damaged buildings nearby. Few studies focused on Asians in examining the disaster’s mental health impact, and even less longitudinal studies were reported beyond the first couple of years after the event. Based on the World Trade Center Health Registry, this study examined the trajectory of PTSD of individuals directly exposed to the attack from 2-3 to 5-6 years after the attack, comparing Asians against the non-Hispanic White group. Participants included 2,431 Asians and 31,455 Whites. Trajectories were delineated into the resilient, chronic, delayed-onset and remitted groups using PTSD checklist cut-off score at 44 at the 2 waves. Logistic regression analyses were conducted to compare the poorer trajectories against the resilient as a reference group, using predictors of baseline sociodemographic, exposure to the disaster, lower respiratory symptoms and previous depression/anxiety disorder diagnosis, and recruitment source as the control variable. Asians had significant lower socioeconomic status in terms of income, education and employment status compared to Whites. Over 3/4 of participants from both races were resilient, though slightly less for Asians than Whites (76.5% vs 79.8%). Asians had a higher proportion with chronic PTSD (8.6% vs 7.4%) and remission (5.9% vs 3.4%) than Whites. A considerable proportion of participants had delayed-onset in both races (9.1% Asians vs 9.4% Whites). The distribution of trajectories differed significantly by race (p<0.0001) with Asians faring poorer. For Asians, in the chronic vs resilient group, significant protective factors included age >65, annual household income >$50,000, and never married vs married/cohabiting; risk factors were direct disaster exposure, job loss due to 9/11, lost someone, and tangible loss; lower respiratory symptoms and previous mental disorder diagnoses. Similar protective and risk factors were noted for the delayed-onset group, except education being protective; and being an immigrant a risk. Between the 2 comparisons, the chronic group was more vulnerable than the delayed-onset as expected. It should also be noted that in both comparisons, Asians’ current employment status had no significant impact on their PTSD trajectory. Comparing between Asians against Whites, the direction of the relationships between the predictors and the PTSD trajectories were mostly the same, although more factors were significant for Whites than for Asians. A few factors showed significant racial difference: Higher risk for lower respiratory symptoms for Whites than Asians, higher risk for pre-9/11 mental disorder diagnosis for Asians than Whites, and immigrant a risk factor for the remitted vs resilient groups for Whites but not for Asians. Over 17% Asians still suffered from PTSD 5-6 years after the WTC attack signified its persistent impact which incurred substantial human, social and economic costs. The more disadvantaged socioeconomic status of Asians rendered them more vulnerable in their mental health trajectories relative to Whites. Together with their well-documented low tendency to seek mental health help, outreach effort to this population is needed to ensure follow-up treatment and prevention.Keywords: PTSD, Asian Americans, World Trade Center Attack, racial differences
Procedia PDF Downloads 264
2483 High-Capacity Image Steganography using Wavelet-based Fusion on Deep Convolutional Neural Networks
Authors: Amal Khalifa, Nicolas Vana Santos
Abstract:
Steganography has been known for centuries as an efficient approach for covert communication. Due to its popularity and ease of access, image steganography has attracted researchers to find secure techniques for hiding information within an innocent-looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNNs) to hide a secret image inside a cover image of the same size. Two CNNs are trained back-to-back to merge the Discrete Wavelet Transforms (DWT) of the two colored images and eventually to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process and maximize the quality of the retrieved secret image while maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion, which outperformed similar deep-learning-based methods. Keywords: deep learning, steganography, image, discrete wavelet transform, fusion
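DeepWaveletFusion learns the wavelet-domain merge with two CNNs and extracts the secret blindly. The toy sketch below shows only the non-learned DWT step: embedding the secret's approximation band into the cover's detail band at an assumed strength and extracting non-blindly, which is not the paper's method.

```python
import numpy as np
import pywt

def embed_wavelet(cover, secret, alpha=0.05):
    """Toy DWT embedding: hide the secret's approximation band inside the
    cover's diagonal detail band (alpha is an assumed embedding strength)."""
    cA, (cH, cV, cD) = pywt.dwt2(cover, "haar")
    sA, _ = pywt.dwt2(secret, "haar")
    return pywt.idwt2((cA, (cH, cV, cD + alpha * sA)), "haar")

def extract_wavelet(stego, cover, alpha=0.05):
    """Non-blind extraction for the toy scheme (the paper extracts blindly)."""
    _, (_, _, cD_stego) = pywt.dwt2(stego, "haar")
    _, (_, _, cD_cover) = pywt.dwt2(cover, "haar")
    sA = (cD_stego - cD_cover) / alpha
    return pywt.idwt2((sA, (np.zeros_like(sA),) * 3), "haar")

cover = np.random.rand(128, 128)
secret = np.random.rand(128, 128)
stego = embed_wavelet(cover, secret)
recovered = extract_wavelet(stego, cover)     # low-pass approximation of the secret
print("cover distortion (max abs):", np.abs(stego - cover).max())
```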
Procedia PDF Downloads 90
2482 Design an Intelligent Fire Detection System Based on Neural Network and Particle Swarm Optimization
Authors: Majid Arvan, Peyman Beygi, Sina Rokhsati
Abstract:
In-time detection of fire in buildings is of great importance. Employing intelligent methods in data processing in fire detection systems leads to a significant reduction of fire damage at lowest cost. In this paper, the raw data obtained from the fire detection sensor networks in buildings is processed by using intelligent methods based on neural networks and the likelihood of fire happening is predicted. In order to enhance the quality of system, the noise in the sensor data is reduced by analyzing wavelets and applying SVD technique. Meanwhile, the proposed neural network is trained using particle swarm optimization (PSO). In the simulation work, the data is collected from sensor network inside the room and applied to the proposed network. Then the outputs are compared with conventional MLP network. The simulation results represent the superiority of the proposed method over the conventional one.Keywords: intelligent fire detection, neural network, particle swarm optimization, fire sensor network
Procedia PDF Downloads 380
2481 Functional Connectivity Signatures of Polygenic Depression Risk in Youth
Authors: Louise Moles, Steve Riley, Sarah D. Lichenstein, Marzieh Babaeianjelodar, Robert Kohler, Annie Cheng, Corey Horien, Abigail Greene, Wenjing Luo, Jonathan Ahern, Bohan Xu, Yize Zhao, Chun Chieh Fan, R. Todd Constable, Sarah W. Yip
Abstract:
Background: Risks for depression are myriad and include both genetic and brain-based factors. However, relationships between these systems are poorly understood, limiting understanding of disease etiology, particularly at the developmental level. Methods: We use a data-driven machine learning approach connectome-based predictive modeling (CPM) to identify functional connectivity signatures associated with polygenic risk scores for depression (DEP-PRS) among youth from the Adolescent Brain and Cognitive Development (ABCD) study across diverse brain states, i.e., during resting state, during affective working memory, during response inhibition, during reward processing. Results: Using 10-fold cross-validation with 100 iterations and permutation testing, CPM identified connectivity signatures of DEP-PRS across all examined brain states (rho’s=0.20-0.27, p’s<.001). Across brain states, DEP-PRS was positively predicted by increased connectivity between frontoparietal and salience networks, increased motor-sensory network connectivity, decreased salience to subcortical connectivity, and decreased subcortical to motor-sensory connectivity. Subsampling analyses demonstrated that model accuracies were robust across random subsamples of N’s=1,000, N’s=500, and N’s=250 but became unstable at N’s=100. Conclusions: These data, for the first time, identify neural networks of polygenic depression risk in a large sample of youth before the onset of significant clinical impairment. Identified networks may be considered potential treatment targets or vulnerability markers for depression risk.Keywords: genetics, functional connectivity, pre-adolescents, depression
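A simplified sketch of the core CPM loop the abstract refers to: correlate each connectivity edge with the phenotype, keep edges past a significance threshold, sum them per subject, and fit a linear model. The data are synthetic, the p-threshold is assumed, and only the positive-edge variant is shown; the study's actual pipeline adds 10-fold cross-validation and permutation testing.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_edges = 300, 1000
connectivity = rng.normal(size=(n_subj, n_edges))     # synthetic edge strengths
prs = connectivity[:, :20].sum(axis=1) * 0.1 + rng.normal(size=n_subj)  # synthetic PRS

def cpm_fit(conn, y, p_thresh=0.01):
    """Connectome-based predictive modeling, positive-edge flavour (simplified)."""
    r = np.empty(conn.shape[1])
    p = np.empty(conn.shape[1])
    for j in range(conn.shape[1]):
        r[j], p[j] = stats.pearsonr(conn[:, j], y)
    mask = (p < p_thresh) & (r > 0)                    # edges positively tied to the PRS
    strength = conn[:, mask].sum(axis=1)               # one summary score per subject
    slope, intercept = np.polyfit(strength, y, 1)
    return mask, slope, intercept

mask, slope, intercept = cpm_fit(connectivity, prs)
pred = connectivity[:, mask].sum(axis=1) * slope + intercept
print("edges kept:", int(mask.sum()),
      "in-sample rho:", round(stats.spearmanr(pred, prs)[0], 2))
```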
Procedia PDF Downloads 58
2480 A Survey on Traditional Mac Layer Protocols in Cognitive Wireless Mesh Networks
Authors: Anusha M., V. Srikanth
Abstract:
The need to maximize spectrum usage and the numerous applications of wireless communication networks have driven high interest in the available spectrum. A Cognitive Radio controls its receiver and transmitter features precisely so that it can utilize the vacant licensed spectrum without impacting the functionality of the primary licensed users. The use of various channels helps address interference and thereby improves overall network efficiency. The MAC protocol in a cognitive radio network governs spectrum usage by coordinating multiple channels among the users. In this paper, we study the architecture of the cognitive wireless mesh network and the traditional TDMA-based MAC method for allocating channels dynamically. The majority of the MAC protocols suggested in the research operate on a Common Control Channel (CCC) to handle services between Cognitive Radio secondary users. An extensive study of multi-channel multi-radio (or frequency-range) channel allotment and continually synchronized TDMA scheduling is presented in summarized form. Keywords: TDMA, MAC, multi-channel, multi-radio, WMNs, cognitive radios
Procedia PDF Downloads 561
2479 Artificial Neural Networks Face to Sudden Load Change for Shunt Active Power Filter
Authors: Dehini Rachid, Ferdi Brahim
Abstract:
The shunt active power filter (SAPF) is intended not only to improve the power factor but also to compensate the unwanted harmonic currents produced by nonlinear loads. This paper presents a SAPF with an identification and control method based on an artificial neural network (ANN). Many techniques are used to identify harmonics, among them the conventional p-q theory and the relatively recent artificial neural network method. It is difficult to obtain satisfactory identification and control characteristics using an ordinary ANN because of the nonlinearity of the system (SAPF plus fast nonlinear load variations). This work is an attempt to undertake a systematic study of the problem and to equip the SAPF with ANN-based harmonic identification and DC-link voltage control, applied to the SAPF under fast nonlinear load variations. Results of computer simulations and experiments are given, which confirm the feasibility of the proposed active power filter. Keywords: artificial neural networks (ANN), p-q theory, harmonics, total harmonic distortion
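A sketch of the conventional p-q theory reference-current computation that the abstract cites as the baseline alternative to the ANN. Sign conventions for q and for the compensation terms vary between texts, and the usual high-pass filtering of p is replaced here by simple mean removal, so this is an illustrative approximation only.

```python
import numpy as np

def clarke(a, b, c):
    """Power-invariant Clarke (abc -> alpha-beta) transform."""
    alpha = np.sqrt(2 / 3) * (a - 0.5 * b - 0.5 * c)
    beta = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (b - c)
    return alpha, beta

def pq_reference_currents(v_abc, i_abc):
    """Classic p-q theory compensation sketch (one common sign convention)."""
    va, vb = clarke(*v_abc)
    ia, ib = clarke(*i_abc)
    p = va * ia + vb * ib                       # instantaneous real power
    q = vb * ia - va * ib                       # instantaneous imaginary power
    p_osc = p - p.mean()                        # oscillating (harmonic) real power
    det = va ** 2 + vb ** 2
    ic_a = (va * p_osc + vb * q) / det          # compensation references (alpha-beta)
    ic_b = (vb * p_osc - va * q) / det
    return ic_a, ic_b

t = np.linspace(0, 0.04, 2000)
w = 2 * np.pi * 50
v_abc = np.array([np.sin(w * t),
                  np.sin(w * t - 2 * np.pi / 3),
                  np.sin(w * t + 2 * np.pi / 3)])
# Nonlinear load: fundamental plus a 5th-harmonic component on each phase current.
i_abc = v_abc + 0.2 * np.array([np.sin(5 * w * t),
                                np.sin(5 * (w * t - 2 * np.pi / 3)),
                                np.sin(5 * (w * t + 2 * np.pi / 3))])
ica, icb = pq_reference_currents(v_abc, i_abc)
print("RMS of alpha-axis compensation reference:",
      round(float(np.sqrt((ica ** 2).mean())), 3))
```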
Procedia PDF Downloads 386
2478 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning
Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho
Abstract:
Image super-resolution is a common computer vision problem with many important applications. Generative adversarial networks (GANs) have driven remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR (mainly the generators) lead to performance degradation and increased energy consumption, making it difficult to deploy on resource-constrained devices. To relieve this problem, in this paper we introduce an optimized and highly efficient architecture for the SR-GAN generator by utilizing model compression techniques, namely Knowledge Distillation and pruning, which work together to reduce the storage requirements of the model while also improving performance. Our method begins by distilling the knowledge from a large pre-trained model to a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge Distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for x4 super-resolution tasks. Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning
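A minimal sketch of magnitude-based weight pruning with torch.nn.utils.prune. The tiny conv stack, the 20% amount, and the single pruning pass are illustrative stand-ins, not the paper's SRGAN generator or its iterative schedule.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in "generator": a tiny conv stack, not the paper's SRGAN generator.
generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)

# Global magnitude pruning: zero out the 20% of conv weights with smallest |w|.
to_prune = [(m, "weight") for m in generator.modules() if isinstance(m, nn.Conv2d)]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.20)

# Make pruning permanent (folds the masks into the weight tensors).
for module, name in to_prune:
    prune.remove(module, name)

total = sum(p.numel() for p in generator.parameters())
zeros = sum(int((p == 0).sum()) for p in generator.parameters())
print(f"sparsity after pruning: {zeros / total:.1%}")
```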
Procedia PDF Downloads 96
2477 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different Machine Learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters and 1,281 are temporarily defaulters, meaning that the clients are overdue on their payments for up 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and finally Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results when the best of each technique was compared. Initially the data were coded in thermometer code (numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, on the classification of temporarily defaulters, this technique, in terms of false positives, was surpassed by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed considering the results found, and an overview of what was presented is shown in the conclusion of this study.Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine Learning, support vector machines
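A sketch of the kind of classifier comparison described, on synthetic data (the bank's records are not public). It covers three of the four families; scikit-learn has no true RBF network, and an SVM with RBF kernel is used as the kernel-based model instead.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Synthetic stand-in for the 5,432-client dataset with 15 attributes and 3 classes.
X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "LR":      make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM":     make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "ANN-MLP": make_pipeline(StandardScaler(), MLPClassifier((32, 16), max_iter=500,
                                                             random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, "accuracy:", round(accuracy_score(y_te, pred), 3))
    print(confusion_matrix(y_te, pred))     # rows: true class, columns: predicted class
```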
Procedia PDF Downloads 103
2476 Neural Networks Underlying the Generation of Neural Sequences in the HVC
Authors: Zeina Bou Diab, Arij Daou
Abstract:
The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each has its own cellular, electrophysiological and functional properties. During singing, a large subset of motor cortex analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song, a large subset of basal ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time locked to vocalizations, while HVCINT neurons fire tonically at average high frequency throughout song with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and specific glutaminergic and gabaergic pharmacology) via different architecture patterning scenarios with the aim to replicate the in vivo firing patterning behaviors. We are able, through these networks, to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might be contributing to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing. Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) mono-synaptic HVCX to HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, and others. Replication is only a preliminary step that must be followed by model prediction and testing.Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird
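A minimal single-compartment Hodgkin-Huxley sketch with the textbook squid-axon parameters, as the starting scaffold for conductance-based models of this kind; the HVC models described add the cell-type-specific currents identified in vitro and the synaptic connectivity, none of which appear here.

```python
import numpy as np

# Classic Hodgkin-Huxley squid-axon parameters (HVC models add further currents).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4                  # mV

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

def simulate(I_inj=10.0, t_max=100.0, dt=0.01):
    """Forward-Euler integration of the membrane and gating equations."""
    steps = int(t_max / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = np.empty(steps)
    for i in range(steps):
        I_Na = g_Na * m ** 3 * h * (V - E_Na)
        I_K = g_K * n ** 4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_inj - I_Na - I_K - I_L) / C_m
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace[i] = V
    return trace

trace = simulate()
spikes = np.sum((trace[1:] >= 0) & (trace[:-1] < 0))   # upward zero crossings
print("spikes in 100 ms at 10 uA/cm^2:", int(spikes))
```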
Procedia PDF Downloads 70
2475 A Framework for Strategy Development in Small Companies: A Case Study of a Telecommunication Firm
Authors: Maryam Goodarzi, Mahdieh Sheikhi, Mehdi Goodarzi
Abstract:
This study intends to offer an appropriate strategy development framework for a telecommunication firm (as a case study) which works on Information and Communication Technology (ICT) projects, development of telecommunication networks, and maintenance of local networks, according to its dominant condition. In this approach, first, the objectives were set and the mission was defined. Then, the capability was assessed by SWOT matrix. Using SPACE matrix, the strategy of the company was determined. The strategic direction is set and an appropriate and superior strategy was developed and offered employing QSPM matrix. The theoretical framework or conceptual model of the present study first involves 4 stages of framework development and then from stage 3 (assessing capability) onward, a strategic management model by Fred R. David. In this respect, the tools and methods offered in the framework are appropriate for all kinds of organizations, particularly small firms, and help strategists identify, evaluate, and select strategies.Keywords: strategy formulation, firm mission, strategic direction, space diagram, quantitative strategic planning matrix, SWOT matrix
Procedia PDF Downloads 374
2474 Successful Public-Private Partnership Through the Impact of Environmental Education: A Case Study on Transforming Community Conflict into Harmony in the Dongpian Community
Authors: Men An Pan, Ho Hsiung Huang, Jui Chuan Lin, Tsui Hsun Wu, Hsing Yuan Yen
Abstract:
Pingtung County, located in the southernmost region of Taiwan, has the largest number of pig farms in the country. In the past, livestock operators in Dongpian Village discharged their wastewater into nearby water bodies, causing water pollution in the local rivers and polluting the air with the stench of pig excrement. This resulted in many complaints from local residents. In response to the community's long-standing conflict with the livestock farms, the County Government's Environmental Protection Bureau (PTEPB) examined potential ways out in addition to heavy fines for the perpetrators. Through helping the livestock farms upgrade their pollution prevention equipment, promoting the reuse of biogas residue and slurry from the pig excrement, and environmental education, the conflict was successfully resolved. The properly treated wastewater from the livestock farms has been provided free of charge to the neighboring farmlands via pipelines and tankers. Thus, extensive cultivation of bananas, papaya, red dragon fruit, Inca nut, and cocoa has resulted in 34% resource utilization of biogas residue as a fertilizer. This has encouraged farmers to reduce chemical fertilizers and, after banning herbicides, to use microbial materials such as photosynthetic bacteria, while simultaneously lowering the cost of wastewater treatment in the livestock farms and alleviating environmental pollution. In this way, the livestock farms fully demonstrate their determination to fulfill their corporate social responsibility (CSR). Due to this success, eight farms jointly established a social enterprise, "Dongpian Gemstone Village Co., Ltd.", to promote organic farming through a "shared farm." The company returns 5% of its total revenue to the community through caregiving services for the elderly and a fund for young local farmers. The community adopted the Satoyama Initiative in accordance with the CBD COP10 conference. Through the positive impact of environmental education, the community seeks to realize coexistence between society and nature while maintaining and developing socio-economic activities (including agriculture) with respect for nature and building a harmonious relationship between humans and nature. By way of sustainable management of resources and ensuring biodiversity, the community is transforming into a socio-ecological production landscape. Apart from nature conservation and watercourse ecology, preserving local culture is also a key focus of the environmental education. To mitigate the impact of global warming and climate change, the community and the government have worked together to develop a disaster prevention and relief system, strive to establish a low-carbon homeland, and become a model for resilient communities. Through the power of environmental education, this community has turned its residents' hearts and minds into concrete action, fulfilled its social responsibility, and moved towards realizing the UN SDGs. Even though it is not the only community to integrate government agencies, research institutions, and NGOs for environmental education, it is a prime example of a low-carbon sustainable community that achieves more than nine SDGs, including responsible consumption and production, climate change action, and diverse partnerships. The community is also leveraging environmental education to become a net-zero carbon community in line with COP26 targets. Keywords: environmental education, biogas residue, biogas slurry, CSR, SDGs, climate change, net-zero carbon emissions
Procedia PDF Downloads 143