Search results for: spatial integration degree
2444 A Consumption-Based Hybrid Life Cycle Assessment of Carbon Footprints in California: High Footprints in Small Urban Households
Authors: Jukka Heinonen
Abstract:
Higher density reduces distances and private car dependency and thus reduces greenhouse gas emissions (GHGs). As a result, increased density has been given a central role among urban development targets. However, it is not just travel behavior that changes along with density. Rather, the consumption patterns, or overall lifestyles, change along with changing urban structure, particularly with changing housing types and consumption opportunities. Furthermore, elevated consumption of services, more frequent flying and less intra-household sharing have been shown to potentially outweigh the gains from reduced driving in more dense urban settlements. In this study, the geography of carbon footprints (CFs) in California is analyzed, paying close attention to household size differences and the resulting economies-of-scale advantages and disadvantages. A hybrid life cycle assessment (LCA) framework is employed together with consumer expenditure data to assess the CFs. According to the study, small urban households have the highest CFs in California. Their transport-related emissions are significantly lower than those of the residents of less urbanized areas, but higher emissions from other consumption categories, together with the low degree of sharing of goods, outweigh the gains. Two functional units, per capita and per household, are used to analyze the CFs and to demonstrate the importance of household size. The lifestyle impacts visible through the consumption data are also discussed. The study suggests that there are still significant gaps in our understanding of the premises of low-carbon human settlements. Keywords: carbon footprint, life cycle assessment, lifestyle, household size, consumption, economies-of-scale
Procedia PDF Downloads 355
2443 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique
Authors: S. Jalaja, A. M. Vijaya Prakash
Abstract:
Recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication, implemented up to n-bit. As a result, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm. The mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly lower area-delay product (ADP) than the existing block implementation. By adopting the retiming technique, hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and is implemented using the Cadence EDA tool. The synthesized result shows better performance for different word lengths and block sizes. The design achieves switching activity reduction and low power consumption, with and without retiming, for different combinations of the circuit. The proposed structure achieves more than half of the power reduction, with and without retiming, compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and 70% less power by applying retiming, and the CSA method achieves 57% and 77% less power by applying retiming, compared to the previously proposed design. Keywords: carry save adder Karatsuba multiplication, mid range Karatsuba multiplication, modified FFA and transposed filter, retiming
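As an illustration of the multiplication scheme this filter builds on, the sketch below shows plain recursive Karatsuba multiplication of two non-negative integers in Python. It is not the authors' VLSI architecture (the mid-range and carry-save-adder variants, the retiming, and the 90 nm synthesis are hardware-level concerns); it only shows the algorithmic idea of replacing four half-width products with three.

```python
def karatsuba(x: int, y: int, threshold: int = 64) -> int:
    """Recursive Karatsuba multiplication of two non-negative integers.

    Each operand is split into high and low halves; the four half-size
    products of schoolbook multiplication are replaced with three, which
    is the source of the savings exploited by Karatsuba-based multipliers.
    """
    if x < threshold or y < threshold:        # small operands: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask          # x = x_hi * 2^half + x_lo
    y_hi, y_lo = y >> half, y & mask
    p_hi = karatsuba(x_hi, y_hi, threshold)   # high product
    p_lo = karatsuba(x_lo, y_lo, threshold)   # low product
    p_mid = karatsuba(x_hi + x_lo, y_hi + y_lo, threshold) - p_hi - p_lo
    return (p_hi << (2 * half)) + (p_mid << half) + p_lo


assert karatsuba(12345678, 87654321) == 12345678 * 87654321
```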
Procedia PDF Downloads 235
2442 Review of the Road Crash Data Availability in Iraq
Authors: Abeer K. Jameel, Harry Evdorides
Abstract:
Iraq is a middle-income country where road crashes are considered one of the leading causes of death. To control the road risk issue, the Iraqi Ministry of Planning's General Statistical Organization has established a system for collecting traffic accident data with details related to their causes and severity. These data are published as an annual report. In this paper, a review of the available crash data in Iraq is presented. The available data represent the rate of accidents at an aggregated level, classified according to their types, road users' details, crash severity, type of vehicles, causes and number of casualties. The review is structured according to the types of models used in road safety studies and research, and according to the road safety data required for road construction tasks. The available data are also compared with the road safety dataset published in the United Kingdom as an example of a developed country. It is concluded that the data in Iraq are suitable for descriptive and exploratory models, aggregated-level comparative analysis, and evaluation and monitoring of the overall traffic safety performance. However, important traffic safety studies require disaggregated data and details related to the factors affecting the likelihood of traffic crashes. Some studies require spatial geographic details, such as the location of the accidents, which are essential for ranking roads according to their level of safety and for naming the most dangerous roads in Iraq, which require a tactical plan to control this issue. Global road safety agencies interested in solving this problem in low- and middle-income countries have designed road safety assessment methodologies that are based on road attribute data only. Therefore, this research recommends using one of these methodologies. Keywords: road safety, Iraq, crash data, road risk assessment, The International Road Assessment Program (iRAP)
Procedia PDF Downloads 256
2441 The Culex Pipiens Niche: Assessment with Climatic and Physiographic Variables via a Geographic Information System
Authors: Maria C. Proença, Maria T. Rebelo, Marília Antunes, Maria J. Alves, Hugo Osório, Sofia Cunha, João Casaca
Abstract:
Using a geographic information system (GIS), the relations between a georeferenced data set of Culex pipiens sl. mosquitoes collected in mainland Portugal during seven years (2006-2012) and meteorological and physiographic parameters such as air relative humidity, air temperature (minimum, maximum and mean daily temperatures), daily total rainfall, altitude, land use/land cover and proximity to water bodies are evaluated. The focus is on the mosquito females; the characterization of their habitat is the key for the planning of targeted, non-aggressive prophylactic countermeasures that avoid environmental degradation. The GIS allows for the spatial determination of the zones where the mean mosquito captures have been above average; using the meteorological values at these coordinates, the limits of each parameter are identified/computed. The meteorological parameters measured at the network of weather stations all over the country are averaged by month and interpolated to produce raster maps that can be segmented according to the thresholds obtained for each parameter. The intersection of the maps obtained for each month shows the evolution of the area favorable to the species through the mosquito season, which runs from May to October at these latitudes. In parallel, mean and above-average captures were related to the physiographic parameters. Three levels of risk could be identified for each parameter, using above-average captures as an index. The results were applied to the meteorological suitability maps of each month. The Culex pipiens critical niche is delimited, reflecting the critical areas and the level of risk for transmission of the pathogens for which they are competent vectors (West Nile virus, iridoviruses, reoviruses and parvoviruses). Keywords: Culex pipiens, ecological niche, risk assessment, risk management
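The map-segmentation step described above can be thought of as a conjunction of boolean masks, one per interpolated parameter raster, thresholded at the limits derived from above-average capture sites. A minimal numpy sketch follows; the threshold values are placeholders, not the study's computed limits.

```python
# Sketch of the monthly suitability map: each interpolated raster is thresholded
# at the parameter limits, and the intersection of the masks gives the area
# favourable to Culex pipiens for that month. Thresholds here are placeholders.
import numpy as np

def favourable_area(temp_mean, rel_humidity, rainfall,
                    t_range=(15.0, 30.0), rh_min=40.0, rain_max=5.0):
    """Boolean raster of cells meeting all parameter limits for one month."""
    mask = (temp_mean >= t_range[0]) & (temp_mean <= t_range[1])
    mask &= rel_humidity >= rh_min
    mask &= rainfall <= rain_max
    return mask

# Placeholder 3x3 monthly rasters
temp = np.array([[14.0, 22.0, 28.0], [31.0, 25.0, 20.0], [18.0, 16.0, 24.0]])
rh   = np.array([[55.0, 60.0, 35.0], [70.0, 45.0, 50.0], [65.0, 42.0, 48.0]])
rain = np.array([[2.0, 1.0, 0.5],   [3.0, 8.0, 4.0],   [1.5, 2.5, 0.0]])
print(favourable_area(temp, rh, rain))
```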
Procedia PDF Downloads 544
2440 Development of a Web-Based Application for Intelligent Fertilizer Management in Rice Cultivation
Authors: Hao-Wei Fu, Chung-Feng Kao
Abstract:
In the era of rapid technological advancement, information technology (IT) has become integral to modern life, exerting significant influence across diverse sectors and serving as a catalyst for development in various industries. Within agriculture, the integration of IT offers substantial benefits, notably enhancing operational efficiency. Real-time monitoring systems, for instance, have been widely embraced in agriculture, effectively improving crop management practices. This study specifically addresses the management of rice panicle fertilizer, presenting the development of a web application tailored to handle data associated with rice panicle fertilizer management. Leveraging the normalized difference red edge index, this application optimizes the quantity of rice panicle fertilizer used, providing recommendations to agricultural stakeholders and service providers in the agricultural information sector. The overarching objective is to minimize costs while maximizing yields. Furthermore, a robust database system has been established to store and manage relevant data for future reference in rice cultivation management. Additionally, the study utilizes the Representational State Transfer software architectural style to construct an application programming interface (API), facilitating data creation, retrieval, updating, and deletion for users via the HyperText Transfer Protocol methods. Future plans involve integrating this API with third-party services to incorporate it into larger frameworks, thus catering to the diverse requirements of various third-party services.Keywords: application programming interface, HyperText Transfer Protocol, nitrogen fertilizer intelligent management, web-based application
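A minimal sketch of one CRUD endpoint of the kind described is shown below, written with Flask for brevity. The abstract names REST and the HTTP methods but not the framework, route, or field names, so those are assumptions, and the NDRE-to-fertilizer rule is purely illustrative.

```python
# Sketch of a REST endpoint storing NDRE readings and returning a fertilizer
# recommendation; route, field names and the in-memory store are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)
records = {}          # stand-in for the rice-cultivation database
next_id = 1

@app.route("/fields", methods=["POST"])
def create_record():
    """Create a field record with an NDRE reading and return a fertilizer hint."""
    global next_id
    data = request.get_json()
    ndre = float(data["ndre"])                    # normalized difference red edge index
    # Illustrative rule only: lower NDRE -> more panicle fertilizer recommended.
    recommendation_kg_ha = max(0.0, (0.6 - ndre) * 100.0)
    record = {"id": next_id, "ndre": ndre, "recommended_n_kg_ha": recommendation_kg_ha}
    records[next_id] = record
    next_id += 1
    return jsonify(record), 201

@app.route("/fields/<int:record_id>", methods=["GET"])
def read_record(record_id):
    """Retrieve a previously stored field record."""
    return jsonify(records.get(record_id, {"error": "not found"}))

if __name__ == "__main__":
    app.run(debug=True)
```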
Procedia PDF Downloads 61
2439 Accurate Binding Energy of Ytterbium Dimer from Ab Initio Calculations and Ultracold Photoassociation Spectroscopy
Authors: Giorgio Visentin, Alexei A. Buchachenko
Abstract:
Recent proposals to use Yb dimer as an optical clock and as a sensor for non-Newtonian gravity imply the knowledge of its interaction potential. Here, the ground-state Born-Oppenheimer Yb₂ potential energy curve is represented by a semi-analytical function, consisting of short- and long-range contributions. For the former, the systematic ab initio all-electron exact 2-component scalar-relativistic CCSD(T) calculations are carried out. Special care is taken to saturate diffuse basis set component with the atom- and bond-centered primitives and reach the complete basis set limit through n = D, T, Q sequence of the correlation-consistent polarized n-zeta basis sets. Similar approaches are used to the long-range dipole and quadrupole dispersion terms by implementing the CCSD(3) polarization propagator method for dynamic polarizabilities. Dispersion coefficients are then computed through Casimir-Polder integration. The semiclassical constraint on the number of the bound vibrational levels known for the ¹⁷⁴Yb isotope is used to scale the potential function. The scaling, based on the most accurate ab initio results, bounds the interaction energy of two Yb atoms within the narrow 734 ± 4 cm⁻¹ range, in reasonable agreement with the previous ab initio-based estimations. The resulting potentials can be used as the reference for more sophisticated models that go beyond the Born-Oppenheimer approximation and provide the means of their uncertainty estimations. The work is supported by Russian Science Foundation grant # 17-13-01466.Keywords: ab initio coupled cluster methods, interaction potential, semi-analytical function, ytterbium dimer
Procedia PDF Downloads 154
2438 The Importance of Visual Communication in Artificial Intelligence
Authors: Manjitsingh Rajput
Abstract:
Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI and highlights its role in applications such as computer vision, object recognition, image classification and autonomous systems. Going deeper, it considers the deep learning techniques and neural networks that underpin visual understanding and discusses the challenges facing visual interfaces for AI, such as data scarcity, domain optimization, and interpretability. The integration of visual communication with other modalities, such as natural language processing and speech recognition, is also explored. Overall, this abstract highlights the critical role that visual communication plays in advancing AI capabilities and enabling machines to perceive and understand the world around them. The methodology examines the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and provides a comprehensive approach to integrating visual elements into AI systems, making them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges like data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems. Keywords: visual communication AI, computer vision, visual aid in communication, essence of visual communication.
Procedia PDF Downloads 95
2437 Identification of Spam Keywords Using Hierarchical Category in C2C E-Commerce
Authors: Shao Bo Cheng, Yong-Jin Han, Se Young Park, Seong-Bae Park
Abstract:
Consumer-to-Consumer (C2C) e-commerce has been growing at a very high speed in recent years. Since identical or nearly identical kinds of products compete with one another by relying on keyword search in C2C e-commerce, some sellers describe their products with spam keywords that are popular but are not related to their products. Though such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead the consumers and waste their time. This problem has been reported in many commercial services like eBay and Taobao, but there has been little research on solving it. As a solution to this problem, this paper proposes a method to classify whether keywords of a product are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is observed commonly in the specifications of products that are the same as, or of the same kind as, the given product. This is because the hierarchical category of a product is, in general, determined precisely by the seller of the product, and so is the specification of the product. Since higher layers of the hierarchical category represent more general kinds of products, a reliability degree is determined differently according to the layers. Hence, reliability degrees from different layers of the hierarchical category become features for keywords, and they are used together with features derived only from specifications for classification of the keywords. A Support Vector Machine is adopted as the basic classifier using these features, since it is powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated with a gold-standard dataset from Yi-han-wang, a Chinese C2C e-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which proves that spam keywords are effectively identified using a hierarchical category in C2C e-commerce. Keywords: spam keyword, e-commerce, keyword features, spam filtering
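A sketch of how per-layer reliability degrees could be turned into keyword features for an SVM is given below. The paper does not state its exact reliability definition, so the document-frequency-based formulation here, the field names, and the category depth are assumptions for illustration only.

```python
# Sketch: per-layer keyword reliability features fed to an SVM, assuming
# reliability is the fraction of products sharing the category prefix up to
# a given layer whose specification mentions the keyword.
from sklearn.svm import SVC

def reliability(keyword, product, corpus, layer):
    """Fraction of products sharing the category prefix up to `layer`
    whose specification contains `keyword`."""
    prefix = product["category_path"][:layer]          # e.g. ["Electronics", "Phones"]
    peers = [p for p in corpus if p["category_path"][:layer] == prefix]
    if not peers:
        return 0.0
    return sum(keyword in p["specification"] for p in peers) / len(peers)

def keyword_features(keyword, product, corpus, depth=3):
    # one reliability degree per category layer, from general to specific
    return [reliability(keyword, product, corpus, layer) for layer in range(1, depth + 1)]

# X: feature vectors for (keyword, product) pairs; y: 1 = legitimate, 0 = spam
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# y_pred = clf.predict(X_test)
```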
Procedia PDF Downloads 294
2436 Applying Laser Scanning and Digital Photogrammetry for Developing an Archaeological Model Structure for Old Castle in Germany
Authors: Bara' Al-Mistarehi
Abstract:
Documentation and assessment of the conservation state of an archaeological structure is a significant procedure in any management plan. However, it has always been a challenge to do this with a low-cost and safe methodology, and it is also a time-demanding procedure. Therefore, a low-cost, efficient methodology for documenting the state of a structure is needed. Within the scope of this research, this paper applies digital photogrammetry and laser scanning to one of the most significant structures in Germany, the Old Castle (German: Altes Schloss). The site is well known for its unique features. However, the castle suffers from serious deterioration threats because of the environmental conditions and the absence of continuous monitoring, maintenance and repair plans. Digital photogrammetry is a generally accepted technique for the collection of 3D representations of the environment. For this reason, this image-based technique has been extensively used to produce high-quality 3D models of heritage sites and historical buildings for documentation and presentation purposes. Additionally, terrestrial laser scanners are used, which directly measure 3D surface coordinates based on the run-time of reflected light pulses. These systems feature high data acquisition rates, good accuracy and high spatial data density. Despite the potential of each single approach, in this research work maximum benefit is expected from a combination of data from both digital cameras and terrestrial laser scanners. Within the paper, the usage, application and advantages of the technique are investigated in terms of building a highly realistic 3D textured model of some parts of the Old Castle. The model will be used as a diagnostic tool for the conservation state of the castle and as a means of monitoring future changes. Keywords: digital photogrammetry, terrestrial laser scanners, 3D textured model, archaeological structure
Procedia PDF Downloads 178
2435 Cybersecurity Assessment of Decentralized Autonomous Organizations in Smart Cities
Authors: Claire Biasco, Thaier Hayajneh
Abstract:
A smart city is the integration of digital technologies in urban environments to enhance the quality of life. Smart cities capture real-time information from devices, sensors, and network data to analyze and improve city functions such as traffic analysis, public safety, and environmental impacts. Current smart cities face controversy due to their reliance on real-time data tracking and surveillance. Internet of Things (IoT) devices and blockchain technology are converging to reshape smart city infrastructure away from its centralized model. Connecting IoT data to blockchain applications would create a peer-to-peer, decentralized model. Furthermore, blockchain technology powers the ability for IoT device data to shift from the ownership and control of centralized entities to individuals or communities with Decentralized Autonomous Organizations (DAOs). In the context of smart cities, DAOs can govern cyber-physical systems to have a greater influence over how urban services are being provided. This paper will explore how the core components of a smart city now apply to DAOs. We will also analyze different definitions of DAOs to determine their most important aspects in relation to smart cities. Both categorizations will provide a solid foundation to conduct a cybersecurity assessment of DAOs in smart cities. It will identify the benefits and risks of adopting DAOs as they currently operate. The paper will then provide several mitigation methods to combat cybersecurity risks of DAO integrations. Finally, we will give several insights into what challenges will be faced by DAO and blockchain spaces in the coming years before achieving a higher level of maturity.Keywords: blockchain, IoT, smart city, DAO
Procedia PDF Downloads 121
2434 Genome Editing in Sorghum: Advancements and Future Possibilities: A Review
Authors: Micheale Yifter Weldemichael, Hailay Mehari Gebremedhn, Teklehaimanot Hailesslasie
Abstract:
The advancement of target-specific genome editing tools, including clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein9 (Cas9), mega-nucleases, base editing (BE), prime editing (PE), transcription activator-like endonucleases (TALENs), and zinc-finger nucleases (ZFNs), have paved the way for a modern era of gene editing. CRISPR/Cas9, as a versatile, simple, cost-effective and robust system for genome editing, has dominated the genome manipulation field over the last few years. The application of CRISPR/Cas9 in sorghum improvement is particularly vital in the context of ecological, environmental and agricultural challenges, as well as global climate change. In this context, gene editing using CRISPR/Cas9 can improve nutritional value, yield, resistance to pests and disease and tolerance to different abiotic stress. Moreover, CRISPR/Cas9 can potentially perform complex editing to reshape already available elite varieties and new genetic variations. However, existing research is targeted at improving even further the effectiveness of the CRISPR/Cas9 genome editing techniques to fruitfully edit endogenous sorghum genes. These findings suggest that genome editing is a feasible and successful venture in sorghum. Newer improvements and developments of CRISPR/Cas9 techniques have further qualified researchers to modify extra genes in sorghum with improved efficiency. The fruitful application and development of CRISPR techniques for genome editing in sorghum will not only help in gene discovery, creating new, improved traits in sorghum regulating gene expression sorghum functional genomics, but also in making site-specific integration events.Keywords: CRISPR/Cas9, genome editing, quality, sorghum, stress, yield
Procedia PDF Downloads 59
2433 Characterizing Surface Machining-Induced Local Deformation Using Electron Backscatter Diffraction
Authors: Wenqian Zhang, Xuelin Wang, Yujin Hu, Siyang Wang
Abstract:
The subsurface layer of a component plays a significant role in its service performance. Any surface mechanical process during fabrication can introduce a deformed layer near the surface, which can be related to the microstructure alteration and strain hardening, and affects the mechanical properties and corrosion resistance of the material. However, there exists a great difficulty in determining the subsurface deformation induced by surface machining. In this study, electron backscatter diffraction (EBSD) was used to study the deformed layer of surface milled 316 stainless steel. The microstructure change was displayed by the EBSD maps and characterized by misorientation variation. The results revealed that the surface milling resulted in heavily nonuniform deformations in the subsurface layer and even in individual grains. The direction of the predominant grain deformation was about 30-60 deg to the machined surface. Moreover, a local deformation rate (LDR) was proposed to quantitatively evaluate the local deformation degree. Both of the average and maximum LDRs were utilized to characterize the deformation trend along the depth direction. It was revealed that the LDR had a strong correlation with the development of grain and sub-grain boundaries. In this work, a scan step size of 1.2 μm was chosen for the EBSD measurement. A LDR higher than 18 deg/μm indicated a newly developed grain boundary, while a LDR ranged from 2.4 to 18 deg/μm implied the generation of a sub-grain boundary. And a lower LDR than 2.4 deg/μm could only introduce a slighter deformation and no sub-grain boundary was produced. According to the LDR analysis with the evolution of grain or sub grain boundaries, the deformed layer could be classified into four zones: grain broken layer, seriously deformed layer, slightly deformed layer and non-deformed layer.Keywords: surface machining, EBSD, subsurface layer, local deformation
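The local deformation rate (LDR) thresholds quoted above, which hold for the 1.2 μm EBSD scan step used in the study, can be wrapped in a small helper; the function and label names below are ours, not the authors'.

```python
def classify_ldr(ldr_deg_per_um: float) -> str:
    """Map a local deformation rate (deg/μm) to the boundary type reported
    for a 1.2 μm EBSD scan step, using the thresholds quoted in the text."""
    if ldr_deg_per_um >= 18.0:
        return "newly developed grain boundary"
    if ldr_deg_per_um >= 2.4:
        return "sub-grain boundary"
    return "slight deformation, no sub-grain boundary"

print(classify_ldr(25.0))   # newly developed grain boundary
print(classify_ldr(6.0))    # sub-grain boundary
print(classify_ldr(1.0))    # slight deformation, no sub-grain boundary
```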
Procedia PDF Downloads 331
2432 Evaluation of Pile Performance in Different Layers of Soil
Authors: Orod Zarrin, Mohesn Ramezan Shirazi, Hassan Moniri
Abstract:
The pile foundation technique is used to support structures and buildings on soft soil. The most important dynamic load that can affect a pile structure is earthquake vibration. Observations of pile foundations during earthquake excitation indicate that piles are subject to damage that affects superstructure integrity and serviceability. During an earthquake, two types of stresses can damage the pile head: the inertial load caused by the superstructure and the deformation caused by the surrounding soil. Soil deformation and inertial load are associated with the acceleration developed in an earthquake. The acceleration amplitude at the ground surface depends on the magnitude of the earthquake, soil properties and the distance to the seismic source. According to the investigation, the damage occurs at the interfaces between liquefiable and non-liquefiable layers and also between soft and stiff layers. This damage crushes the pile head by increasing the inertial load applied by the superstructure. On the other hand, the cracks in the piles due to the surrounding soil are directly related to the soil profile and range from small to large. The causes of large cracks include liquefaction, lateral spreading, and inertial load. In design, the elastic response of piles in liquefiable soil is always a challenge for the designer, since deflection is allowed at the top of the piles. Moreover, the absence of plastic hinges in piles should be ensured, because damage in the piles is not observed directly. In this study, the performance and behavior of pile foundations during liquefaction and lateral spreading are investigated. In addition, emphasis is placed on the soil behavior in the liquefiable and non-liquefiable layers, and different aspects of pile damage, such as the ranking, location and degree of damage, are discussed. Keywords: pile, earthquake, liquefaction, non-liquefiable, damage
Procedia PDF Downloads 301
2431 The Test of Memory Malingering and Offence Severity
Authors: Kenji Gwee
Abstract:
In Singapore, the death penalty remains in active use for murder and drug trafficking of controlled drugs such as heroin. As such, the psychological assessment of defendants can often be of high stakes. The Test of Memory Malingering (TOMM) is employed by government psychologists to determine the degree of effort invested by defendants, which in turn inform on the veracity of overall psychological findings that can invariably determine the life and death of defendants. The purpose of this study was to find out if defendants facing the death penalty were more likely to invest less effort during psychological assessment (to fake bad in hopes of escaping the death sentence) compared to defendants facing lesser penalties. An archival search of all forensic cases assessed in 2012-2013 by Singapore’s designated forensic psychiatric facility yielded 186 defendants’ TOMM scores. Offence severity, coded into 6 rank-ordered categories, was analyzed in a one-way ANOVA with TOMM score as the dependent variable. There was a statistically significant difference (F(5,87) = 2.473, p = 0.038). A Tukey post-hoc test with Bonferroni correction revealed that defendants facing lower charges (Theft, shoplifting, criminal breach of trust) invested less test-taking effort (TOMM = 37.4±12.3, p = 0.033) compared to those facing the death penalty (TOMM = 46.2±8.1). The surprising finding that those facing death penalties actually invested more test taking effort than those facing relatively minor charges could be due to higher levels of cooperation when faced with death. Alternatively, other legal avenues to escape the death sentence may have been preferred over the mitigatory chance of a psychiatric defence.Keywords: capital sentencing, offence severity, Singapore, Test of Memory Malingering
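The analysis reported above (a one-way ANOVA of TOMM scores across six offence-severity categories, followed by a post-hoc comparison) can be sketched as follows; the file name, column names and data frame are placeholders, not the study's actual records.

```python
# Sketch of the reported analysis: one-way ANOVA of TOMM scores across
# offence-severity categories, followed by a pairwise post-hoc comparison.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("tomm_cases.csv")        # assumed columns: tomm_score, offence_severity (1-6)

groups = [g["tomm_score"].values for _, g in df.groupby("offence_severity")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# Pairwise post-hoc comparison of severity categories (Tukey HSD)
posthoc = pairwise_tukeyhsd(df["tomm_score"], df["offence_severity"])
print(posthoc.summary())
```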
Procedia PDF Downloads 434
2430 Synthesis and Characterization of Cellulose-Based Halloysite-Carbon Adsorbent
Authors: Laura Frydel, Piotr M. Slomkiewicz, Beata Szczepanik
Abstract:
Triclosan has been used as a disinfectant in many medical products, such as: hand disinfectant soaps, creams, mouthwashes, pastes and household cleaners. Due to its strong antimicrobial activity, triclosan is becoming more and more popular and the consumption of disinfectants with triclosan in it is increasing. As a result, this compound increasingly finds its way into waters and soils in an unchanged form, pollutes the environment and may have a negative effect on organisms. The aim of this study was to investigate the synthesis of cellulose-based halloysite-carbon adsorbent and perform its characterization. The template in the halloysite-carbon adsorbent was halloysite nanotubes and the carbon precursor was microcrystalline cellulose. Scanning electron microscope (SEM) images were obtained and the elementary composition (qualitative and quantitative) of the sample was determined by energy dispersion spectroscopy (EDS). The identification of the crystallographic composition of the halloysite nanotubes and the sample of the halloysite-carbon composite was carried out using the X-ray powder diffraction (XRPD) method. The FTIR spectra were acquired before and after the adsorption process in order to determine the functional groups on the adsorbent surface and confirm the interactions between adsorbent and adsorbate molecules. The parameters of the porous structure of the adsorbent, such as the specific surface area (Brunauer-Emmett-Teller method), the total pore volume and the volume of mesopores and micropores were determined. Total carbon and total organic carbon were also determined in the samples. A cellulose-based halloysite-carbon adsorbent was used to remove triclosan from water. The degree of removal of triclosan from water was approximately 90%. The results indicate that the halloysite-carbon composite can be successfully used as an effective adsorbent for removing triclosan from water.Keywords: Adsorption, cellulose, halloysite, triclosan
Procedia PDF Downloads 128
2429 Evaluation of NASA POWER and CRU Precipitation and Temperature Datasets over a Desert-prone Yobe River Basin: An Investigation of the Impact of Drought in the North-East Arid Zone of Nigeria
Authors: Yusuf Dawa Sidi, Abdulrahman Bulama Bizi
Abstract:
The most dependable and precise source of climate data is often gauge observation. However, long-term records of gauge observations, on the other hand, are unavailable in many regions around the world. In recent years, a number of gridded climate datasets with high spatial and temporal resolutions have emerged as viable alternatives to gauge-based measurements. However, it is crucial to thoroughly evaluate their performance prior to utilising them in hydroclimatic applications. Therefore, this study aims to assess the effectiveness of NASA Prediction of Worldwide Energy Resources (NASA POWER) and Climate Research Unit (CRU) datasets in accurately estimating precipitation and temperature patterns within the dry region of Nigeria from 1990 to 2020. The study employs widely used statistical metrics and the Standardised Precipitation Index (SPI) to effectively capture the monthly variability of precipitation and temperature and inter-annual anomalies in rainfall. The findings suggest that CRU exhibited superior performance compared to NASA POWER in terms of monthly precipitation and minimum and maximum temperatures, demonstrating a high correlation and much lower error values for both RMSE and MAE. Nevertheless, NASA POWER has exhibited a moderate agreement with gauge observations in accurately replicating monthly precipitation. The analysis of the SPI reveals that the CRU product exhibits superior performance compared to NASA POWER in accurately reflecting inter-annual variations in rainfall anomalies. The findings of this study indicate that the CRU gridded product is often regarded as the most favourable gridded precipitation product.Keywords: CRU, climate change, precipitation, SPI, temperature
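The agreement statistics named in the abstract (correlation, RMSE and MAE between a gridded product and gauge observations) reduce to a few array operations; the monthly series below are placeholders used only to show the calculation.

```python
import numpy as np

def agreement_stats(gridded: np.ndarray, gauge: np.ndarray) -> dict:
    """Correlation, RMSE and MAE between a gridded product (e.g. CRU or
    NASA POWER monthly precipitation) and gauge observations."""
    error = gridded - gauge
    return {
        "r": float(np.corrcoef(gridded, gauge)[0, 1]),
        "rmse": float(np.sqrt(np.mean(error ** 2))),
        "mae": float(np.mean(np.abs(error))),
    }

# Example with placeholder monthly precipitation series (mm)
gauge = np.array([12.0, 0.0, 35.5, 120.3, 210.8, 95.1])
cru = np.array([10.5, 1.2, 30.0, 115.0, 220.4, 90.0])
print(agreement_stats(cru, gauge))
```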
Procedia PDF Downloads 89
2428 Evaluation of Easy-to-Use Energy Building Design Tools for Solar Access Analysis in Urban Contexts: Comparison of Friendly Simulation Design Tools for Architectural Practice in the Early Design Stage
Abstract:
The current building sector is focused on reducing energy requirements, generating renewable energy and regenerating existing urban areas. These targets need to be addressed with a systemic approach, considering several aspects simultaneously, such as climate conditions, lighting conditions, solar radiation, PV potential, etc. Solar access analysis is a well-known method for analyzing solar potential, but in recent years simulation tools have provided more effective opportunities to perform this type of analysis, in particular in the early design stage. Nowadays, the study of solar access depends on how easily and rapidly simulation tools can be used during the design process. This study presents a comparison of three simulation tools, from the point of view of the user, with the aim of highlighting differences in their ease of use. Using a real urban context as a case study, three tools, Ecotect, Townscope and Heliodon, are tested by building models, performing simulations and examining the capabilities and output results of the solar access analysis. The evaluation of the ease of use of these tools is based on a set of parameters and features, such as the types of simulation, the input data requirements, the types of results, etc. As a result, a framework is provided in which the features and capabilities of each tool are shown, highlighting the differences among the tools in terms of functions, features and capabilities. The aim of this study is to support users and to improve the integration of solar access simulation tools with the design process. Keywords: energy building design tools, solar access analysis, solar potential, urban planning
Procedia PDF Downloads 340
2427 Determines of Professional Competencies among Newly Registered Nurses in Teaching Hospital in Kingdom of Saudi Arabia
Authors: Rana Alkattan
Abstract:
Aim: This study aims to identify and analyze the factors predicting professional clinical competency among newly recruited registered nurses. In addition, it aims to explore factors significantly correlated with high and low professional clinical competency scores. Method: A descriptive, analytical, cross-sectional design was applied in this study, which was conducted between June 2012 and June 2013 at King Abdulaziz University Hospital, one of the largest governmental university tertiary hospitals in Saudi Arabia. A survey questionnaire was designed to collect data, which were then analyzed using SPSS. Results: A total of 86 nurses provided valid responses; 69 were female and 17 were male. The majority of the participants in this study were married, from the Philippines, and between 20 and 29 years old. The majority held a certified university bachelor's degree in nursing and had between 1 and 5 years of prior nursing experience. Two categories emerged from the data that significantly correlated with nurses' professional competence and development. The first was the newly employed registered nurses' demographic characteristics (correlation coefficients 0.154 to 0.470, P < 0.05), while the second was the set of studied environmental factors except the 'job rotation' factor (correlation coefficients 0.122 to 0.540, P < 0.01). However, nurses' attitudes, including motivation and confidence, were not associated with professional competency. Conclusion: Nurses' professional competence development is a process affected by certain personal demographic and environmental factors; supporting it will enable newly graduated nurses to provide safe, effective patient care and maintain their career responsibilities. Keywords: clinical, competence, development nurses professional, registered
Procedia PDF Downloads 355
2426 Barriers to Public Innovation in Colombia: Case Study in Central Administrative Region
Authors: Yessenia Parrado, Ana Barbosa, Daniela Mahe, Sebastian Toro, Jhon Garcia
Abstract:
Public innovation has gained strength in recent years in response to the need to find new strategies or mechanisms to interact between government entities and citizens. In this way, the Colombian government has been promoting policies aimed at strengthening innovation as a fundamental aspect in the work of public entities. However, in order to potentiate the capacities of public servants and therefore of the institutions and organizations to which they belong, it is necessary to be able to understand the context under which they operate in their daily work. This article aims to compile the work developed by the laboratory of innovation, creativity, and new technologies LAB101 of the National University of Colombia for the National Department of Planning. A case study was developed in the central region of Colombia made up of five departments, through the construction of instruments based on quantitative techniques in response to the item combined with qualitative analysis through semi-structured interviews to understand the perception of possible barriers to innovation and the obstacles that have prevented the acceleration of transformation within public organizations. From the information collected, different analyzes are carried out that allows a more robust explanation to be given to the results obtained, and a set of categories are established to group different characteristics associated with possible difficulties that officials perceive to innovate and that are later conceived as barriers. Finally, a proposal for an indicator was built to measure the degree of innovation within public entities in order to be able to carry a metric in future opportunities. The main findings of this study show three key components to be strengthened in public entities and organizations: governance, knowledge management, and the promotion of collaborative workspaces.Keywords: barriers, enablers, management, public innovation
Procedia PDF Downloads 114
2425 Associations Between Positive Body Image, Physical Activity and Dietary Habits in Young Adults
Authors: Samrah Saeed
Abstract:
Introduction: This study considers a measure of positive body image and the associations between body appreciation, beauty ideals internalization, dietary habits, and physical activity in young adults. Positive body image is assessed with the Body Appreciation Scale 2, which is used to assess a person's acceptance of the body, the degree of positivity, and respect for the body. Regular physical activity and healthy eating are fundamentally important for the body, and they play an important role in creating a positive body image. Objectives: To identify the associations between body appreciation and beauty ideals internalization. To compare body appreciation and body ideals internalization among students with different levels of physical activity. To explore the associations between dietary habits (unhealthy, healthy), body appreciation and body ideals internalization. Research methods and organization: Study participants were young adult students, aged 18-35, both male and female. The research questionnaire consisted of four areas: body appreciation, beauty ideals internalization, dietary habits, and physical activity. The questionnaire was created on the Google Forms online survey platform and was filled out anonymously. Result and Discussion: Body dissatisfaction, dieting, eating disorders and exercise disorders are found in young adults all over the world. A thoughtful approach to nutrition helps people understand who they are by reassuring them that they are okay, without judgment, and by fostering self-acceptance. Social media can positively influence body image in many ways. A healthy body image is important because it affects self-esteem, self-acceptance, and one's attitude towards food and exercise. Keywords: physical activity, dietary habits, body image, beauty ideals internalization, body appreciation
Procedia PDF Downloads 97
2424 Feasibility Study of MongoDB and Radio Frequency Identification Technology in Asset Tracking System
Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi
Abstract:
Taking into consideration the real time situation specifically the higher academic institutions, small, medium to large companies, public to private sectors and the remaining sectors, do experience the inventory or asset shrinkages due to theft, loss or even inventory tracking errors. This happening is due to a zero or poor security systems and measures being taken and implemented in their organizations. Henceforth, implementing the Radio Frequency Identification (RFID) technology into any manual or existing web-based system or web application can simply deter and will eventually solve certain major issues to serve better data retrieval and data access. Having said, this manual or existing system can be enhanced into a mobile-based system or application. In addition to that, the availability of internet connections can aid better services of the system. Such involvement of various technologies resulting various privileges to individuals or organizations in terms of accessibility, availability, mobility, efficiency, effectiveness, real-time information and also security. This paper will look deeper into the integration of mobile devices with RFID technologies with the purpose of asset tracking and control. Next, it is to be followed by the development and utilization of MongoDB as the main database to store data and its association with RFID technology. Finally, the development of a web based system which can be viewed in a mobile based formation with the aid of Hypertext Preprocessor (PHP), MongoDB, Hyper-Text Markup Language 5 (HTML5), Android, JavaScript and AJAX programming language.Keywords: RFID, asset tracking system, MongoDB, NoSQL
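The MongoDB side of such an asset-tracking system can be sketched as below. The study's web tier is PHP, but for brevity this sketch uses Python with pymongo to show the kind of document an RFID read might produce; the database, collection and field names are assumptions.

```python
# Sketch of storing and querying RFID reads in MongoDB; the database,
# collection and field names are illustrative assumptions.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
reads = client["asset_tracking"]["rfid_reads"]

def record_read(tag_id: str, reader_id: str, location: str) -> None:
    """Store one RFID tag read as a document."""
    reads.insert_one({
        "tag_id": tag_id,
        "reader_id": reader_id,
        "location": location,
        "read_at": datetime.now(timezone.utc),
    })

def last_seen(tag_id: str):
    """Return the most recent read for an asset, i.e. where it was last seen."""
    return reads.find_one({"tag_id": tag_id}, sort=[("read_at", -1)])

record_read("E200-3412-0123", "reader-lab-2", "Inventory Store B")
print(last_seen("E200-3412-0123"))
```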
Procedia PDF Downloads 306
2423 Assessment of Risk Factors in Residential Areas of Bosso in Minna, Nigeria
Authors: Junaid Asimiyu Mohammed, Olakunle Docas Tosin
Abstract:
The housing environment in many developing countries is fraught with risks that have potential negative impacts on the lives of the residents. The study examined the risk factors in residential areas of two neighborhoods in Bosso Local Government Areas of Minna in Nigeria with a view to determining the level of their potential impacts. A sample of 378 households was drawn from the estimated population of 22,751 household heads. The questionnaire and direct observation were used as instruments for data collection. The data collected were analyzed using the Relative Importance Index (RII) rule to determine the level of the potential impact of the risk factors while ArcGIS was used for mapping the spatial distribution of the risks. The study established that the housing environment of Angwan Biri and El-Waziri areas of Bosso is poor and vulnerable as 26% of the houses were not habitable and 57% were only fairly habitable. The risks of epidemics, building collapse and rainstorms were evident in the area as 53% of the houses had poor ventilation; 20% of residents had no access to toilets; 47% practiced open waste dumping; 46% of the houses had cracked walls while 52% of the roofs were weak and sagging. The results of the analysis of the potential impact of the risk factors indicate a RII score of 0.528 for building collapse, 0.758 for rainstorms and 0.830 for epidemics, indicating a moderate to very high level of potential impacts. The mean RII score of 0.639 shows a significant potential impact of the risk factors. The study recommends the implementation of sanitation measures, provision of basic urban facilities and neighborhood revitalization through housing infrastructure retrofitting as measures to mitigate the risks of disasters and improve the living conditions of the residents of the study area.Keywords: assessment, risk, residential, Nigeria
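The abstract applies the Relative Importance Index (RII) rule but does not restate the formula. A common formulation is RII = ΣW / (A × N), where W are the Likert weights given by respondents, A is the highest possible weight and N is the number of respondents; the sketch below uses that assumption, and the example responses are hypothetical.

```python
def relative_importance_index(responses: list[int], highest_weight: int = 5) -> float:
    """RII = sum(W) / (A * N), a common formulation of the Relative
    Importance Index for Likert-scale responses (the paper does not
    restate its exact formula, so this is an assumption).

    responses      -- one Likert weight per respondent (e.g. 1..5)
    highest_weight -- A, the maximum possible weight
    """
    return sum(responses) / (highest_weight * len(responses))

# Example: hypothetical responses for the 'epidemics' risk factor
epidemics = [5, 4, 5, 4, 3, 5, 4, 5]
print(round(relative_importance_index(epidemics), 3))
```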
Procedia PDF Downloads 57
2422 Integrating Explicit Instruction and Problem-Solving Approaches for Efficient Learning
Authors: Slava Kalyuga
Abstract:
There are two opposing major points of view on the optimal degree of initial instructional guidance that is usually discussed in the literature by the advocates of the corresponding learning approaches. Using unguided or minimally guided problem-solving tasks prior to explicit instruction has been suggested by productive failure and several other instructional theories, whereas an alternative approach - using fully guided worked examples followed by problem solving - has been demonstrated as the most effective strategy within the framework of cognitive load theory. An integrated approach discussed in this paper could combine the above frameworks within a broader theoretical perspective which would allow bringing together their best features and advantages in the design of learning tasks for STEM education. This paper represents a systematic review of the available empirical studies comparing the above alternative sequences of instructional methods to explore effects of several possible moderating factors. The paper concludes that different approaches and instructional sequences should coexist within complex learning environments. Selecting optimal sequences depends on such factors as specific goals of learner activities, types of knowledge to learn, levels of element interactivity (task complexity), and levels of learner prior knowledge. This paper offers an outline of a theoretical framework for the design of complex learning tasks in STEM education that would integrate explicit instruction and inquiry (exploratory, discovery) learning approaches in ways that depend on a set of defined specific factors.Keywords: cognitive load, explicit instruction, exploratory learning, worked examples
Procedia PDF Downloads 126
2421 Randomized Controlled Study of the Antipyretic Efficacy of Oral Paracetamol, Intravenous Paracetamol, and Intramuscular Diclofenac
Authors: Firjeeth C. Paramba, Vamanjore A. Naushad, Nishan K. Purayil, Osama H. Mohammed, Prem Chandra
Abstract:
Background: Fever is a common problem in adults visiting the emergency department. Extensive studies have been done in children comparing the efficacy of various antipyretics. However, studies on the efficacy of antipyretic drugs in adults are very scarce. To the best of our knowledge, no controlled trial has been carried out comparing the antipyretic efficacy of paracetamol (oral and intravenous) and intramuscular diclofenac in adults. Methods: In this parallel-group, open-label trial, participants aged 14–75 years presenting with fever who had a temperature of more than 38.5°C were enrolled and treated. Participants were randomly allocated to receive treatment with 1,000 mg oral paracetamol (n=145), 1,000 mg intravenous paracetamol (n=139), or 75 mg intramuscular diclofenac (n=150). The primary outcome was degree of reduction in mean oral temperature at 90 minutes. The efficacy of diclofenac versus oral and intravenous paracetamol was assessed by superiority comparison. Analysis was done using intention to treat principles. Results: After 90 minutes, all three groups showed a significant reduction in mean temperature, with intramuscular diclofenac showing the greatest reduction (−1.44 ± 0.43, 95% confidence interval [CI] −1.4 to −2.5) and oral paracetamol the least (−1.08 ± 0.51, 95% CI −0.99 to −2.2). After 120 minutes, there was a significant difference observed in the mean change from baseline temperature between the three treatment groups (P, 0.0001). Significant changes in temperature were observed in favor of intramuscular diclofenac over oral and intravenous paracetamol at each time point from 60 minutes through 120 minutes inclusive. Conclusion: Both intramuscular diclofenac and intravenous paracetamol showed superior antipyretic activity than oral paracetamol. However, in view of its ease of administration, intramuscular diclofenac can be used as a first-choice antipyretic in febrile adults in the emergency department.Keywords: antipyretic, intramuscular, intravenous, paracetamol, diclofenac, emergency department
Procedia PDF Downloads 372
2420 Modeling Average Paths Traveled by Ferry Vessels Using AIS Data
Authors: Devin Simmons
Abstract:
At the USDOT’s Bureau of Transportation Statistics, a biannual census of ferry operators in the U.S. is conducted, with results such as route mileage used to determine federal funding levels for operators. AIS data allows for the possibility of using GIS software and geographical methods to confirm operator-reported mileage for individual ferry routes. As part of the USDOT’s work on the ferry census, an algorithm was developed that uses AIS data for ferry vessels in conjunction with known ferry terminal locations to model the average route travelled for use as both a cartographic product and confirmation of operator-reported mileage. AIS data from each vessel is first analyzed to determine individual journeys based on the vessel’s velocity, and changes in velocity over time. These trips are then converted to geographic linestring objects. Using the terminal locations, the algorithm then determines whether the trip represented a known ferry route. Given a large enough dataset, routes will be represented by multiple trip linestrings, which are then filtered by DBSCAN spatial clustering to remove outliers. Finally, these remaining trips are ready to be averaged into one route. The algorithm interpolates the point on each trip linestring that represents the start point. From these start points, a centroid is calculated, and the first point of the average route is determined. Each trip is interpolated again to find the point that represents one percent of the journey’s completion, and the centroid of those points is used as the next point in the average route, and so on until 100 points have been calculated. Routes created using this algorithm have shown demonstrable improvement over previous methods, which included the implementation of a LOESS model. Additionally, the algorithm greatly reduces the amount of manual digitizing needed to visualize ferry activity.Keywords: ferry vessels, transportation, modeling, AIS data
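A condensed sketch of the averaging step described above follows: each trip linestring is interpolated at 0%, 1%, ..., 100% of its length, and the centroid of the interpolated points becomes the corresponding vertex of the average route. Trip extraction from AIS and the DBSCAN outlier filter are omitted, and the toy trips are placeholders.

```python
# Sketch of the route-averaging step: interpolate each trip at fixed fractions
# of its length and take the centroid at each fraction. AIS trip extraction
# and DBSCAN outlier filtering are omitted here.
import numpy as np
from shapely.geometry import LineString

def average_route(trips: list[LineString], n_points: int = 101) -> LineString:
    vertices = []
    for frac in np.linspace(0.0, 1.0, n_points):
        pts = [trip.interpolate(frac, normalized=True) for trip in trips]
        vertices.append((np.mean([p.x for p in pts]), np.mean([p.y for p in pts])))
    return LineString(vertices)

# Two toy trips between the same pair of terminals
trip_a = LineString([(0.0, 0.0), (0.5, 0.1), (1.0, 0.0)])
trip_b = LineString([(0.0, 0.0), (0.5, -0.1), (1.0, 0.0)])
print(average_route([trip_a, trip_b], n_points=5))
```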
Procedia PDF Downloads 176
2419 Osteogenesis in Thermo-Sensitive Hydrogel Using Mesenchymal Stem Cell Derived from Human Turbinate
Authors: A. Reum Son, Jin Seon Kwon, Seung Hun Park, Hai Bang Lee, Moon Suk Kim
Abstract:
These days, stem cell therapy is in focus as a promising treatment for human clinical disease. As a support for stem cells, in situ-forming hydrogels with growth factors and cells appear to be a promising approach in tissue engineering. To examine the in vivo osteogenic differentiation of hTMSCs, a type of mesenchymal stem cell, in an injectable hydrogel, we use a methoxy polyethylene glycol-polycaprolactone block copolymer (MPEG-PCL) solution with osteogenic factors. We synthesized the MPEG-PCL hydrogel and measured its viscosity to check the sol-gel transition. In order to demonstrate the osteogenic ability of hTMSCs, we conducted an in vitro osteogenesis experiment. Then, to assess cytotoxicity, we performed a WST-1 assay with hTMSCs and MPEG-PCL. Following the in vitro experiments, we implanted the cell and hydrogel mixture into an animal model and assessed the degree of osteogenesis by histological analysis and gene expression. These experimental data show that the MPEG-PCL hydrogel undergoes a sol-gel transition with temperature change and is biocompatible with stem cells. In the histological and gene expression analyses, hTMSCs proved to be a very good source for osteogenesis within the hydrogel, and this combination will be useful in tissue engineering as an important treatment method. hTMSCs could be a good adult stem cell source owing to their ease of isolation and high proliferation. When hTMSCs are used as a cell therapy with an in situ-formed hydrogel, they may provide various benefits, such as a noninvasive alternative for bone tissue engineering applications. Keywords: injectable hydrogel, stem cell, osteogenic differentiation, tissue engineering
Procedia PDF Downloads 447
2418 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
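A minimal numpy sketch of the Diffusion Map half only (the DPM half is not shown) is given below, following one common construction: Gaussian affinities, row-normalization to a Markov matrix, and an eigendecomposition whose leading non-trivial eigenvectors give the low-dimensional coordinates. The kernel bandwidth, component count and placeholder data are assumptions.

```python
# Sketch of a Diffusion Map embedding: Gaussian kernel affinities, a
# row-stochastic Markov matrix, and its leading non-trivial eigenvectors
# scaled by their eigenvalues as the low-dimensional coordinates.
import numpy as np

def diffusion_map(X: np.ndarray, n_components: int = 2, epsilon: float = 1.0) -> np.ndarray:
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / epsilon)            # Gaussian kernel affinities
    P = K / K.sum(axis=1, keepdims=True)       # row-stochastic Markov matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)          # sort by decreasing eigenvalue
    eigvals, eigvecs = eigvals[order].real, eigvecs[:, order].real
    # Skip the trivial constant eigenvector (eigenvalue 1); scale by eigenvalues.
    return eigvecs[:, 1:n_components + 1] * eigvals[1:n_components + 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # placeholder high-dimensional data
embedding = diffusion_map(X, n_components=2)
print(embedding.shape)                         # (200, 2)
```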
Procedia PDF Downloads 108
2417 Application of Metric Dimension of Graph in Unraveling the Complexity of Hyperacusis
Authors: Hassan Ibrahim
Abstract:
The prevalence of hyperacusis, an auditory condition characterized by heightened sensitivity to sounds, continues to rise, posing challenges for effective diagnosis and intervention. This work deepens the understanding of hyperacusis etiology by employing graph theory as a novel analytical framework. We constructed a comprehensive graph wherein nodes represent various factors associated with hyperacusis, including aging, head or neck trauma, infection/virus, depression, migraines, ear infection, anxiety, and other potential contributors. Relationships between factors are modeled as edges, allowing us to visualize and quantify the interactions within the etiological landscape of hyperacusis. We employ the concept of the metric dimension of a connected graph to identify key nodes (landmarks) that serve as critical influencers in the interconnected web of hyperacusis causes. This approach offers a unique perspective on the relative importance and centrality of different factors, shedding light on the complex interplay between physiological, psychological, and environmental determinants. Visualization techniques were also employed to enhance the interpretation and facilitate the identification of the central nodes. This research contributes to the growing body of knowledge surrounding hyperacusis by offering a network-centric perspective on its multifaceted causes. The outcomes hold the potential to inform clinical practices, guiding healthcare professionals in prioritizing interventions and personalized treatment plans based on the identified landmarks within the etiological network. Through the integration of graph theory into hyperacusis research, the complexity of this auditory condition is unraveled, paving the way for more effective approaches to its management. Keywords: auditory condition, connected graph, hyperacusis, metric dimension
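For a factor graph of this size, the metric dimension can be found by brute force: a set of landmarks resolves the graph if every node has a distinct vector of distances to the landmarks. The sketch below uses BFS distances and a search over candidate landmark sets; the toy graph of hypothetical factor links is illustrative only and is not the study's constructed graph.

```python
# Brute-force metric dimension of a small connected graph; feasible only for
# small factor graphs like the one described above.
from itertools import combinations
from collections import deque

def bfs_distances(graph: dict, source) -> dict:
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def metric_dimension(graph: dict) -> set:
    nodes = list(graph)
    dist = {u: bfs_distances(graph, u) for u in nodes}
    for k in range(1, len(nodes)):
        for landmarks in combinations(nodes, k):
            signatures = {tuple(dist[l][v] for l in landmarks) for v in nodes}
            if len(signatures) == len(nodes):      # every node is resolved
                return set(landmarks)
    return set(nodes)

# Toy graph with hypothetical factor links (not the study's graph)
graph = {
    "aging": {"migraines", "ear infection"},
    "migraines": {"aging", "anxiety"},
    "anxiety": {"migraines", "depression"},
    "depression": {"anxiety"},
    "ear infection": {"aging"},
}
print(metric_dimension(graph))
```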
Procedia PDF Downloads 38
2416 The Cloud Systems Used in Education: Properties and Overview
Authors: Agah Tuğrul Korucu, Handan Atun
Abstract:
The diversity and usefulness of the information used in education have increased due to the development of technology. Web technologies, in particular, have made enormous contributions to distance learning. Mobile systems, one of the most widely used technologies in distance education, have made it much easier to access web technologies. No longer bound by space and time, individuals have had the opportunity to access information on the web. In addition, the storage of educational information and resources and access to these information and resources are crucial for both students and teachers. Because of this importance, web technologies that provide ease of access to information and resources have been developed and disseminated. Dynamic web technologies, introduced as new technologies that enable the sharing and reuse of information, resources or applications via the Internet and turn websites into expandable platforms, are commonly known as Web 2.0 technologies. Cloud systems are one of the dynamic web technologies, defined by NIST as a model that provides access to the demanded information independently of time and space under appropriate circumstances. One of the most important advantages of cloud systems is that they meet the requirements of users directly on the web, regardless of hardware and software and without the need for installation. Hence, this study aims at examining the use of cloud services in education and investigating the services provided by cloud computing. The survey method has been used as the research method. The findings of this research reveal that cloud systems are used for activities such as resource sharing, collaborative work, assignment submission and feedback, and project development in the field of education, and that cloud systems offer significant advantages in terms of facilitating teaching activities and the interaction between teacher, student and environment. Keywords: cloud systems, cloud systems in education, online learning environment, integration of information technologies, e-learning, distance learning
Procedia PDF Downloads 349
2415 The Precarious Chinese Ecology of Financial Expertise: Discontent in the Mix
Authors: Giulia Dal Maso
Abstract:
Within the contemporary financial capitalist configuration, the interplay of Chinese statecraft and financialization has shaped a new ‘ecology of financial expertise.’ This indicates the emergence of a new financial technocratic governance; that is increasingly changing the Chinese economy, reducing the state’s administrative and fiscal functions and increasing state assets in accordance with a new shareholder logic. In this shift, the creation of the stock market by the state was conceived not only as a new redistributor of wealth but as a ‘clearing house’ for social discontent resulting from work casualization, wage repression and a lack of social welfare. Since its inception in the wake of Deng Xiaoping’s reforms, the Chinese state has used the stock market as a means of securing social legitimation by providing a prearranged space where the disaggregated and vulnerable subjects left behind by the dismantlement of the collective work units of the Maoist period (danwei) can congregate. However, fieldwork which included both participant observation as well as interviews with investors in brokerage rooms in Shanghai (where one of only two mainland Chinese stock exchanges is situated) reveals that both new formal and informal financial experts—namely the haigui (Chinese returnees with a financial degree abroad) and sanhu (individual Chinese scattered players), are equally dissatisfied with their investing activities. They express discontent with the state, which they hold responsible for the summer 2015 financial crisis and for the financial turmoil that jeopardizes China’s financial and political project. What the investors want is a state that will guarantee the continuation of the current gupiaore ‘stock fever’. This paper holds that, by embracing financialization, the state is undermining the contract at the base of its legitimacy.Keywords: Chinese state, Deng Xiaoping, financial capitalism, individual investors
Procedia PDF Downloads 456