Search results for: and teachers' interaction approaches
199 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for detecting facial expressions and emotions by extracting features automatically. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on extracting a different type of coarse feature with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We extend this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model toward over-fitting, because it is not able to determine adequately discriminative feature vectors for some class labels. We reduce the risk of over-fitting by using a dynamic, rather than static, input tensor shape in the SoftMax layer with a specified soft margin. In effect, the margin acts as a controller for how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain. We penalize predictions with high divergence from the ground-truth labels: we shorten correct feature vectors and enlarge falsely predicted tensors, assigning more weight to classes that lie close to each other (namely, "hard labels to learn"). By doing so, we constrain the model to generate more discriminative feature vectors for the different class labels. Finally, for the proposed optimizer, our focus is on resolving the weak convergence of the Adam optimizer on non-convex problems. Our optimizer works by an alternative gradient-update procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight-decay method that drastically reduces the learning rate near optima to reach the dominant local minimum. We demonstrate the superiority of the proposed work on three widely used facial expression recognition datasets: 93.30% on FER-2013, surpassing the previous first rank; 90.73% on RAF-DB, a 16% improvement over the first rank of the past ten years; and 100% k-fold average accuracy on CK+. The network matches the top performance of other networks that require much larger training datasets.
Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
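The abstract does not give the loss in closed form; the sketch below follows the common additive-margin softmax pattern, with a hypothetical margin schedule standing in for the "dynamic" behaviour described above. The function name, schedule, and hyperparameter values are assumptions for illustration, not the authors' formulation.

```python
# Minimal sketch of a soft-margin softmax loss with a dynamic (epoch-dependent)
# margin, in the spirit of the Dynamic Soft-Margin SoftMax described above.
import torch
import torch.nn.functional as F

def dynamic_soft_margin_softmax_loss(logits, targets, epoch, max_epochs,
                                     base_margin=0.35, scale=30.0):
    """logits: (N, C) cosine similarities; targets: (N,) class indices."""
    # Hypothetical schedule: the margin grows as training progresses,
    # pushing dissimilar embeddings apart harder in later epochs.
    margin = base_margin * (epoch / max_epochs)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    # Subtract the margin only from the target-class logit, so the model
    # must separate the correct class by at least that margin.
    adjusted = logits - margin * one_hot
    return F.cross_entropy(scale * adjusted, targets)

# Usage: loss = dynamic_soft_margin_softmax_loss(cos_logits, y, epoch=5, max_epochs=50)
```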
Procedia PDF Downloads 74
198 Wear Resistance in Dry and Lubricated Conditions of Hard-anodized EN AW-4006 Aluminum Alloy
Authors: C. Soffritti, A. Fortini, E. Baroni, M. Merlin, G. L. Garagnani
Abstract:
Aluminum alloys are widely used in many engineering applications due to advantages such as high electrical and thermal conductivity, low density, high strength-to-weight ratio, and good corrosion resistance. However, their low hardness and poor tribological properties still limit their use in industrial fields requiring sliding contacts. Hard anodizing is one of the most common solutions for overcoming the insufficient friction resistance of aluminum alloys. In this work, the tribological behavior of hard-anodized AW-4006 aluminum alloys in dry and lubricated conditions was evaluated. Three hard-anodizing treatments were selected: a conventional one (HA) and two innovative golden hard-anodizing treatments (named G and GP, respectively), which involve sealing the porosity of the anodic aluminum oxide (AAO) with silver ions at different temperatures. Before the wear tests, all AAO layers were characterized by scanning electron microscopy (VPSEM/EDS), X-ray diffractometry, roughness (Ra and Rz), microhardness (HV0.01), nanoindentation, and scratch tests. Wear tests were carried out according to the ASTM G99-17 standard using a ball-on-disc tribometer. The tests were performed in triplicate under a 2 Hz constant-frequency oscillatory motion, a maximum linear speed of 0.1 m/s, normal loads of 5, 10, and 15 N, and a sliding distance of 200 m. A 100Cr6 steel ball 10 mm in diameter was used as the counterpart material. All tests were conducted at room temperature, in dry and lubricated conditions. In view of recent regulations on environmental hazards, four bio-lubricants were considered after assessing their chemical composition (in terms of Unsaturation Number, UN) and viscosity: olive, peanut, sunflower, and soybean oils. The friction coefficient was provided by the equipment. The wear rate of the anodized surfaces was evaluated by measuring the cross-section area of the wear track with a non-contact 3D profilometer. Each area value, obtained as an average of four measurements of cross-section areas along the track, was used to determine the wear volume. The worn surfaces were analyzed by VPSEM/EDS. Finally, in agreement with DoE methodology, a statistical analysis was carried out to identify the factors most influencing the friction coefficients and wear rates. In all conditions, results show that the friction coefficient increased with the normal load. In the dry sliding wear tests, irrespective of the type of anodizing treatment, metal transfer between the mating materials was observed over the anodic aluminum oxides. During sliding at higher loads, the detachment of the metallic film also caused delamination of some regions of the wear track. In the lubricated wear tests, the natural oils with high percentages of oleic acid (i.e., olive and peanut oils) maintained high friction coefficients and low wear rates. Irrespective of the type of oil, small microcracks were visible over the AAO layers. Based on the statistical analysis, the type of anodizing treatment and the magnitude of the applied load were the main factors influencing the friction coefficient and wear rate values. Nevertheless, an interaction between bio-lubricants and load magnitude could occur during the tests.
Keywords: hard anodizing treatment, silver ions, bio-lubricants, sliding wear, statistical analysis
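A worked sketch of the wear-rate computation described above, assuming the wear volume is the mean track cross-section area times the track length and is normalized by load and sliding distance (the usual Archard-style specific wear rate). The numbers are illustrative, not the paper's data, and the circular-track length is an assumption.

```python
# Sketch: specific wear rate from profilometer cross-sections (illustrative values).
# Assumes V = A_mean * L_track and k = V / (F * s); the abstract does not state
# its exact formula, so treat this as the standard normalization, not the paper's.

areas_mm2 = [0.0121, 0.0118, 0.0125, 0.0119]   # four cross-sections along the track
track_length_mm = 31.4                          # wear track length (assumed 5 mm radius circle)
load_N = 10.0                                   # one of the applied normal loads
sliding_distance_m = 200.0                      # from the test protocol above

mean_area = sum(areas_mm2) / len(areas_mm2)
wear_volume_mm3 = mean_area * track_length_mm
specific_wear_rate = wear_volume_mm3 / (load_N * sliding_distance_m)  # mm^3/(N*m)

print(f"Wear volume: {wear_volume_mm3:.4f} mm^3")
print(f"Specific wear rate: {specific_wear_rate:.3e} mm^3/(N*m)")
```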
Procedia PDF Downloads 150
197 Comparison of On-Site Stormwater Detention Policies in Australian and Brazilian Cities
Authors: Pedro P. Drumond, James E. Ball, Priscilla M. Moura, Márcia M. L. P. Coelho
Abstract:
In recent decades, On-site Stormwater Detention (OSD) systems have been implemented in many cities around the world. In Brazil, urban drainage source control policies were created in the 1990s and were mainly based on OSD. The concept of this technique is to detain the additional stormwater runoff caused by impervious areas in order to maintain pre-urbanization peak flow levels. In Australia, OSD was first adopted in the early 1980s by the Ku-ring-gai Council in Sydney's northern suburbs and by Wollongong City Council, and many papers on the topic were published at that time. Since then, however, source control techniques related to stormwater quality have come to the forefront, and OSD has been relegated to the background. In order to evaluate the effectiveness of current OSD regulations, existing policies were compared between cities in Australia, a country considered experienced in the use of this technique, and cities in Brazil, where OSD adoption has been increasing. The cities selected for analysis were Wollongong and Belo Horizonte, the first municipalities to adopt OSD in their respective countries, and Sydney and Porto Alegre, cities where these policies are local references. The Australian and Brazilian cities are all located in the Southern Hemisphere, and similar rainfall intensities can be observed, especially in storm bursts longer than 15 minutes. Regarding technical criteria, the Brazilian cities have a site-based approach, analyzing only on-site system drainage. This approach is criticized for not evaluating impacts on urban drainage systems and, in rare cases, may increase peak flows downstream. The city of Wollongong and most of the Sydney councils adopted a catchment-based approach, requiring the use of Permissible Site Discharge (PSD) and Site Storage Requirement (SSR) values based on analysis of entire catchments via hydrograph-producing computer models. Based on the premise that OSD should be designed to dampen a storm of 100-year Average Recurrence Interval (ARI), the values of PSD and SSR in these four municipalities were compared. In general, the Brazilian cities presented low values of PSD and high values of SSR. This can be explained by the site-based approach and the low runoff coefficient adopted for pre-development conditions. The results clearly show the differences between the approaches and methodologies adopted in OSD designs among Brazilian and Australian municipalities, especially with regard to PSD values, which sit at opposite ends of the scale. However, the lack of research on the real performance of constructed OSD does not allow determining which is best. It is necessary to investigate OSD performance in real situations, assessing the damping provided throughout its useful life, maintenance issues, debris blockage problems, and the parameters related to rainfall-runoff methods. Acknowledgments: The authors wish to thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (Chamada Universal – MCTI/CNPq Nº 14/2014), FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais, and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior for their financial support.
Keywords: on-site stormwater detention, source control, stormwater, urban drainage
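A minimal sketch of how an SSR can be estimated for a given PSD: the required storage is the maximum cumulative difference between the post-development inflow hydrograph and the permitted outflow. The hydrograph ordinates and PSD value here are illustrative, not drawn from either country's policy, and the constant-outflow assumption is a simplification of the hydrograph-routing models the abstract mentions.

```python
# Sketch of a Site Storage Requirement (SSR) estimate for a fixed Permissible
# Site Discharge (PSD), by routing an inflow hydrograph through a detention
# store with constant allowed outflow. All numbers are illustrative.

def site_storage_requirement(inflow_m3s, psd_m3s, dt_s=60.0):
    """inflow_m3s: post-development hydrograph ordinates at dt_s spacing."""
    storage, max_storage = 0.0, 0.0
    for q_in in inflow_m3s:
        # Store the excess of inflow over the permitted discharge; never negative.
        storage = max(0.0, storage + (q_in - psd_m3s) * dt_s)
        max_storage = max(max_storage, storage)
    return max_storage  # m^3

# Illustrative triangular burst standing in for a 100-year ARI design event.
inflow = [0.00, 0.02, 0.05, 0.09, 0.12, 0.09, 0.06, 0.03, 0.01, 0.00]
print(f"SSR = {site_storage_requirement(inflow, psd_m3s=0.03):.1f} m^3")
```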
Procedia PDF Downloads 180
196 Improved Anatomy Teaching by the 3D Slicer Platform
Authors: Ahmedou Moulaye Idriss, Yahya Tfeil
Abstract:
Medical imaging technology has become an indispensable tool in many branches of biomedicine, healthcare, and research, and is vitally important for training professionals in these fields. This project is not only about the tools, technologies, and knowledge provided but also about the community it proposes to build. In order to raise the level of anatomy teaching in the medical school of Nouakchott, Mauritania, it is necessary, and even urgent, to facilitate access to modern technology for African countries. The role of technology as a key driver of sustainable development has long been recognized. Anatomy is an essential discipline for the training of medical students and a key element in the training of medical specialists; the quality and results of a young surgeon's work depend on sound knowledge of anatomical structures. Teaching anatomy is difficult, as the discipline is neglected by medical students in many academic institutions; yet anatomy remains a vital part of any medical education program. When anatomy is presented in various planes, medical students report difficulties in understanding: they do not develop the ability to visualize and mentally manipulate 3D structures, and they are sometimes unable to correctly identify neighbouring or associated structures. This is the case, for example, when they have to identify structures related to the caudate lobe when the liver is moved to different positions. In recent decades, modern educational tools using digital sources have tended to replace old methods. One of the main reasons for this change is the lack of cadavers in laboratories with poorly qualified staff. The emergence of increasingly sophisticated mathematical models, image processing, and visualization tools in biomedical imaging research has enabled sophisticated three-dimensional (3D) representations of anatomical structures. In this paper, we report our current experience at the Faculty of Medicine in Nouakchott, Mauritania. One of our main aims is to create a local learning community in the field of anatomy. The main technological platform used in this project is called 3D Slicer, an open-source application, available for free, for viewing, analyzing, and interacting with biomedical imaging data. Using the 3D Slicer platform, we created anatomical atlases of parts of the human body from real medical images, including the head, thorax, abdomen, liver, pelvis, and upper and lower limbs. Data were collected from several local hospitals and from online sources; we used MRI and CT-scan imaging data from children and adults. Many different anatomy atlases exist, both in print and in digital form. Our anatomy atlas displays three-dimensional anatomical models, image cross-sections of labelled structures and source radiological imaging, and a text-based hierarchy of structures. The open and free online anatomical atlases developed by our anatomy laboratory team will be available to our students. This will allow pedagogical autonomy and remedy existing shortcomings, responding more fully to the objectives of sustainable local development of quality education and good health at the national level. To make this work a reality, our team produced several atlases, available in our faculty in the form of research projects.
Keywords: anatomy, education, medical imaging, three dimensional
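A minimal sketch of the kind of scripting the 3D Slicer platform supports for building an atlas: loading a volume and a labelled segmentation, then generating a 3D surface students can rotate. The file paths are placeholders; `slicer.util.loadVolume` and `loadSegmentation` belong to Slicer's scripted Python API, and the snippet is meant to run inside Slicer's Python console, not as a standalone script.

```python
# Sketch: load a CT volume plus its labelled segmentation into 3D Slicer
# and show the segments as 3D surfaces for interactive anatomy teaching.
import slicer

volume = slicer.util.loadVolume("/data/atlas/abdomen_ct.nrrd")          # placeholder path
segmentation = slicer.util.loadSegmentation("/data/atlas/abdomen_labels.seg.nrrd")

# Build closed-surface representations so structures appear in the 3D view,
# letting students isolate, rotate, and mentally manipulate each organ.
segmentation.CreateClosedSurfaceRepresentation()
display = segmentation.GetDisplayNode()
display.SetOpacity3D(0.8)  # slight transparency to reveal neighbouring structures
```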
Procedia PDF Downloads 241
195 Photobleaching Kinetics and Epithelial Distribution of Hexylaminolevulinate-Induced PpIX in Rat Bladder Cancer
Authors: Sami El Khatib, Agnès Leroux, Jean-Louis Merlin, François Guillemin, Marie-Ange D’Hallewin
Abstract:
Photodynamic therapy (PDT) is a treatment modality based on the cytotoxic effect occurring in target tissues through the interaction of a photosensitizer with light in the presence of oxygen. One of the major advances in PDT can be attributed to the use of topical aminolevulinic acid (ALA) to induce Protoporphyrin IX (PpIX) for the treatment of early-stage cancers as well as for diagnosis. ALA is a precursor of the heme synthesis pathway. Locally delivered to the target tissue, ALA overcomes the negative feedback exerted by heme and promotes the transient formation of PpIX in situ, reaching critical effective levels in cells and tissue. Whereas the early steps of the heme pathway occur in the cytosol, PpIX synthesis takes place in the mitochondrial membranes, so PpIX fluorescence is expected to accumulate in close vicinity to its initial building site and to progressively diffuse to the neighboring cytoplasmic compartment or other lipophilic organelles. PpIX is highly reactive and is degraded when irradiated with light. PpIX photobleaching is believed to be governed by a singlet-oxygen-mediated mechanism in the presence of oxidized amino acids and proteins. PpIX photobleaching and the subsequent spectral phototransformation have been widely described in tumor cells incubated in vitro with ALA solution, and ex vivo in human and porcine mucosa superfused with hexylaminolevulinate (hALA). PpIX photobleaching has also been studied in vivo, using animal models such as normal or tumor-bearing mouse skin and the orthotopic rat bladder model. hALA, a more potent lipophilic derivative of ALA, has been proposed as an adjunct to standard cystoscopy in the fluorescence diagnosis of bladder cancer and other malignancies. We have previously reported the effectiveness of hALA-mediated PDT of rat bladder cancer. Although normal and tumor bladder epithelium exhibit similar fluorescence intensities after intravesical instillation at two hALA concentrations (8 and 16 mM), the therapeutic response at 8 mM and 20 J/cm² was completely different from that observed at 16 mM irradiated with the same light dose. Whereas the tumor is destroyed, leaving the underlying submucosa and muscle intact, after an 8 mM instillation, 16 mM sensitization and subsequent illumination result in the complete destruction of the underlying bladder wall but leave the tumor undamaged. The object of the current study is to unravel the mechanism underlying this apparent contradiction. PpIX extraction showed identical amounts of photosensitizer in tumor-bearing bladders at both concentrations. Photobleaching experiments revealed mono-exponential decay curves in both situations, but with a decay constant twice as fast for the 16 mM bladders. Fluorescence microscopy shows an identical fluorescence pattern, with bright spots, for normal bladders at both concentrations and for tumor bladders at 8 mM. Tumor bladders at 16 mM exhibit a more diffuse cytoplasmic fluorescence distribution. The different response to PDT with regard to the initial pro-drug concentration can thus be attributed to the different cellular localization.
Keywords: bladder cancer, hexyl-aminolevulinate, photobleaching, confocal fluorescence microscopy
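A sketch of the mono-exponential photobleaching fit described above, F(t) = F0·exp(-k·t). The fluorescence values are illustrative, not the study's measurements; the fit itself uses the standard `scipy.optimize.curve_fit` routine.

```python
# Sketch: fit a mono-exponential decay to normalized PpIX fluorescence during
# irradiation and recover the photobleaching rate constant k. A doubled k
# (as reported for 16 mM vs 8 mM) means twice-as-fast bleaching.
import numpy as np
from scipy.optimize import curve_fit

def monoexp(t, f0, k):
    return f0 * np.exp(-k * t)

t = np.array([0, 30, 60, 90, 120, 180, 240])                 # irradiation time, s
f = np.array([1.00, 0.72, 0.52, 0.38, 0.27, 0.15, 0.08])     # normalized fluorescence

(f0, k), _ = curve_fit(monoexp, t, f, p0=(1.0, 0.01))
print(f"F0 = {f0:.2f}, photobleaching rate k = {k:.4f} 1/s")
```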
Procedia PDF Downloads 407
194 Mapping Context, Roles, and Relations for Adjudicating Robot Ethics
Authors: Adam J. Bowen
Abstract:
Should robots have rights or legal protections? Debates concerning whether robots and AI should be afforded rights often focus on conditions of personhood and the possibility that future advanced forms of AI may satisfy particular intrinsic cognitive and moral attributes of rights-holding persons. Such discussions raise compelling questions about machine consciousness, autonomy, and value alignment with human interests. Although these are important theoretical concerns, especially from a future design perspective, they provide limited guidance for addressing the moral and legal standing of current and near-term AI that operates well below the cognitive and moral agency of human persons. Robots and AI are already being pressed into service in a wide range of roles, especially in healthcare and biomedical contexts. The design and large-scale implementation of robots in core societal institutions like healthcare systems continues to develop rapidly. For example, we bring them into our homes, hospitals, and other care facilities to assist in care for the sick, disabled, elderly, children, or otherwise vulnerable persons. We enlist surgical robotic systems in precision tasks, albeit still as human-in-the-loop technology controlled by surgeons. We also entrust them with social roles involving companionship and even assistance in intimate caregiving tasks (e.g., bathing, feeding, turning, medicine administration, monitoring, transporting). There have been advances enabling severely disabled persons to use robots to feed themselves or to pilot robot avatars to work in service industries. As the applications for near-term AI increase and the roles of robots in restructuring our biomedical practices expand, we face pressing questions about the normative implications of human-robot interactions and collaborations in our collective worldmaking, as well as about the moral and legal status of robots. This paper argues that robots operating in public and private spaces should be afforded some protections, as either moral patients or legal agents, to establish prohibitions on robot abuse, misuse, and mistreatment. We already implement robots and embed them in our practices and institutions, which generates a host of human-to-machine and machine-to-machine relationships. As we interact with machines, whether in service contexts, medical assistance, or home health companionship, these robots are first encountered in relationship to us and our respective roles in the encounter (e.g., surgeon, physical or occupational therapist, recipient of care, patient's family, healthcare professional, stakeholder). This proposal outlines a framework for establishing limiting factors and determining the extent of moral or legal protections for robots. In doing so, it advocates a relational approach that emphasizes the priority of mapping the complex, contextually sensitive roles played, and the relations in which humans and robots stand, to guide policy determinations by relevant institutions and authorities. The relational approach must also be technically informed by the intended uses of the biomedical technologies in question, Design History Files, extensive risk assessments and hazard analyses, and use-case social impact assessments.
Keywords: biomedical robots, robot ethics, robot laws, human-robot interaction
Procedia PDF Downloads 120
193 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management
Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, adding to a perennial global concern. Forest fires often lead to devastating consequences, ranging from the loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and the ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and for mitigating the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption for decision-making in diverse applications, including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire management. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across the more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.
Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities
Procedia PDF Downloads 72
192 Thai Cane Farmers' Responses to Sugar Policy Reforms: An Intentions Survey
Authors: Savita Tangwongkit, Chittur S Srinivasan, Philip J. Jones
Abstract:
Thailand has become the world's fourth largest sugarcane producer and second largest sugar exporter. While there have been a number of drivers of this growth, the primary driver has been wide-ranging government support measures. Recently, the Thai government has emphasized the need for policy reform as part of a broader industry restructuring to bring the sector up to date with current and future developments in the international sugar market. Because of the sector's historical dependence on government support, any such reform is likely to have a very significant impact on the fortunes of Thai cane farmers. This study explores the impact on Thai cane producers of three policy scenarios representing a spectrum of policy approaches. These reform scenarios were designed in consultation with policy makers and academics working in the cane sector. Scenario 1 captures the current 'government proposal' for policy reform: it removes certain domestic production subsidies but seeks to maintain as much support as is permissible under current WTO rules. The second scenario, 'protectionism', maintains the current internal market producer supports but otherwise complies with international (WTO) commitments. The third, 'libertarian', scenario removes all production support and market interventions, and all trade and domestic consumption distortions. The most important driver of producer behaviour in all of the scenarios is the producer price of cane, which is highest under the protectionism scenario, followed by the government proposal and libertarian scenarios, respectively. Likely producer responses to these three policy scenarios were determined by means of a large-scale survey of cane farmers. The sample was stratified by size group, and quotas were filled by size group and region. One scenario was presented to each of three sub-samples, each consisting of approximately 150 farmers; the total sample size was 462 farms. Data were collected by face-to-face interviews between June and August 2019. There was a marked difference in farmer response to the three scenarios. Farmers in the 'protectionism' scenario, which maintains the highest cane price, and those who farm larger cane areas, are more likely to continue cane farming. The libertarian scenario is likely to result in the greatest losses in cane production volume, roughly double those of the 'protectionism' scenario, primarily due to farmers quitting cane production altogether. Over half of the lost cane production volume comes from medium-sized farms, i.e., the largest and smallest producers are the most resilient. This result is likely due to the fact that the medium-size group is large enough to require hired labour but lacks the economies of scale of the largest farms. Across all size groups, the farms most heavily specialized in cane production, i.e., those devoting 26-50% of arable land to cane, are also the most vulnerable, accounting for 70% of all farmers quitting cane production. This investigation suggests that cane price is the most significant determinant of farmer behaviour, and that where scenarios drive significantly lower cane prices, policy makers should target support towards mid-sized producers, with policies that encourage efficiency gains and diversification into alternative agricultural crops.
Keywords: farmer intentions, farm survey, policy reform, Thai cane production
Procedia PDF Downloads 110
191 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture
Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán
Abstract:
Time-sensitive services are the backbone of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing theory parameters to relate these transitions. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests an instance can handle per unit of time, the number of incoming requests at a given instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is then used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of growth in the number of incoming requests to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request-relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot be finished in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption, and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the burst is rapid enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load at lower business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing
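A minimal sketch of the reactive, MAPE-K-style scaling decision the model formalizes: from the sampled arrival rate and the per-instance service capacity, plan the instance count needed to keep saturation below a target, subject to a cooldown. The class name, target-utilization headroom, and parameter values are illustrative, not the paper's exact formulation; the 2.3 s service time from the simulation is used only to set a plausible per-instance capacity.

```python
# Sketch: reactive horizontal auto-scaler following the MAPE-K loop
# (Monitor -> Analyze -> Plan -> Execute over shared Knowledge).
import math
import time

class ReactiveAutoscaler:
    def __init__(self, capacity_rps, target_utilization=0.7, cooldown_s=60):
        self.capacity_rps = capacity_rps            # requests/s one instance handles
        self.target_utilization = target_utilization  # saturation headroom
        self.cooldown_s = cooldown_s                # avoid oscillating decisions
        self.last_action = 0.0

    def plan(self, arrival_rate_rps, current_instances):
        # Analyze: instances needed so utilization stays on target.
        desired = math.ceil(arrival_rate_rps /
                            (self.capacity_rps * self.target_utilization))
        # Plan/Execute: respect the cooldown period between scaling actions.
        if time.monotonic() - self.last_action < self.cooldown_s:
            return current_instances
        self.last_action = time.monotonic()
        return max(1, desired)

scaler = ReactiveAutoscaler(capacity_rps=1 / 2.3)   # ~0.43 req/s per instance
print(scaler.plan(arrival_rate_rps=10.0, current_instances=5))
```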
Procedia PDF Downloads 93
190 The Governance of Net-Zero Emission Urban Bus Transitions in the United Kingdom: Insight from a Transition Visioning Stakeholder Workshop
Authors: Iraklis Argyriou
Abstract:
The transition to net-zero emission urban bus (ZEB) systems is receiving increased attention in research and policymaking throughout the globe. Most studies in this area address techno-economic aspects and the perspectives of a narrow group of stakeholders, while largely overlooking analysis of current bus system dynamics. This offers limited insight into the types of ZEB governance challenges and opportunities encountered in real-world contexts, and into some of the immediate actions that need to be taken to set off the transition over the longer term. This research offers a multi-stakeholder perspective on both the technical and non-technical factors that influence ZEB transitions within a particular context, the UK. It does so by drawing from a recent transition visioning stakeholder workshop (June 2023) with key public, private, and civic actors of the urban bus transportation system. Using NVivo software to qualitatively analyze the workshop discussions, the research examines the key technological and funding aspects, as well as the short-term actions (over the next five years), that need to be addressed to support the ZEB transition in UK cities. It finds that ZEB technology has reached a mature stage (i.e., high efficiency of batteries, motors, and inverters), but important improvements can be pursued through greater control and integration of ZEB technological components and systems. In this regard, telemetry, predictive maintenance, and adaptive control strategies pertinent to the performance and operation of ZEB vehicles have a key role to play in the techno-economic advancement of the transition. More pressing gaps were identified in the current ZEB funding regime. Whereas the UK central government supports greater ZEB adoption through a series of grants and subsidies, the scale of the funding and its fragmented nature do not match the needs of a UK-wide transition. Funding devolution arrangements (i.e., stable funding settlement deals between the central government and the devolved administrations/local authorities), as well as locally driven schemes (i.e., congestion charging/workplace parking levies), could then enhance the financial prospects of the transition. As for short-term action, three areas were identified as critical: (1) the creation of whole value chains around the supply, use, and recycling of ZEB components; (2) the ZEB retrofitting of existing fleets; and (3) integrated transportation that prioritizes buses as a first-choice, convenient, and reliable mode while simultaneously reducing car dependency in urban areas. Taken together, the findings point to the need for place-based transition approaches that create a viable techno-economic ecosystem for ZEB development while adopting a broader governance perspective beyond a 'net-zero' and 'bus sectoral' focus. As such, multi-actor collaborations and the coordination of wider resources and agency, both vertically across institutional scales and horizontally across transport, energy, and urban planning, become fundamental features of comprehensive ZEB responses. The lessons from the UK case can inform a broader body of empirical, contextual knowledge of ZEB transition governance within domestic political economies of public transportation.
Keywords: net-zero emission transition, stakeholders, transition governance, UK, urban bus transportation
Procedia PDF Downloads 75
189 Runoff Estimates of Rapidly Urbanizing Indian Cities: An Integrated Modeling Approach
Authors: Rupesh S. Gundewar, Kanchan C. Khare
Abstract:
Runoff contribution from urban areas comes mainly from manmade structures and a few natural contributors. The manmade structures are buildings, roads, and other paved areas, whereas the natural contributors are groundwater, overland flows, etc. Runoff alleviation is achieved by manmade as well as natural storages. Manmade storages are storage tanks or other storage structures such as soakaways or soak pits, which are more common in Western and European countries. Natural storages are catchment slope, infiltration, catchment length, channel rerouting, drainage density, depression storage, etc. A literature survey on the manmade and natural storages/inflows has produced a percentage contribution for each. Sanders et al. report that a vegetation canopy reduces runoff by 7% to 12%. Nassif et al. report that catchment slope affects rainfall runoff by 16% on bare standard soil and 24% on grassed soil. Infiltration, being dependent on the pervious/impervious ratio, is catchment-specific, but the literature reports a range of 15% to 30% loss of rainfall runoff across various catchment study areas. Catchment length and channel rerouting also play a considerable role in the reduction of rainfall runoff. Ground infiltration inflow adds to the runoff where the groundwater table is very shallow and the soil saturates even in a lower-intensity storm; this inflow, together with surface inflow, contributes about 2% of the total runoff volume. Considering the various contributing factors in runoff, the literature survey indicates that an integrated modelling approach needs to be considered. The traditional stormwater network models are able to predict with a fair/acceptable degree of accuracy provided no interaction with receiving water (river, sea, canal, etc.), ground infiltration, treatment works, etc. is assumed. When such interactions are significant, it becomes difficult to reproduce the actual flood extent using the traditional discrete modelling approach, and as a result the correct flooding situation is very rarely addressed accurately. Since the development of spatially distributed hydrologic models, predictions have become more accurate, at the cost of requiring more accurate spatial information. The integrated approach provides a greater understanding of the performance of the entire catchment. It enables identifying the source of flow in the system, understanding how it is conveyed, and assessing its impact on the receiving body. It also confirms important pain points, hydraulic controls, and the sources of flooding that could not easily be understood with a discrete modelling approach. This also enables decision makers to identify solutions that can be spread throughout the catchment rather than being concentrated at the single point where the problem exists. It can thus be concluded from the literature survey that the representation of urban details can be a key differentiator in successfully understanding flooding issues. The intent of this study is to accurately predict the runoff from impermeable areas in an urban area in India. A representative area for which data were available has been selected, and predictions have been made and corroborated with the actual measured data.
Keywords: runoff, urbanization, impermeable response, flooding
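A sketch of the simplest urban runoff estimate underlying approaches like the one above: the rational method, Q = C·i·A, with a composite runoff coefficient for mixed surfaces. The surface shares and intensity are illustrative; the paper's integrated model is far richer than this single-formula estimate.

```python
# Sketch: rational-method peak flow for a mixed urban catchment.
# Q (m^3/s) = C * i (mm/hr) * A (ha) / 360, the standard unit conversion.

def rational_method_peak_flow(c, intensity_mm_hr, area_ha):
    return c * intensity_mm_hr * area_ha / 360.0

# Composite runoff coefficient from area-weighted surface types (assumed shares).
surfaces = [(0.90, 12.0), (0.85, 8.0), (0.30, 20.0)]  # (C, area ha): roofs, roads, green
total_area = sum(a for _, a in surfaces)
c_composite = sum(c * a for c, a in surfaces) / total_area

q_peak = rational_method_peak_flow(c_composite, intensity_mm_hr=60.0, area_ha=total_area)
print(f"Composite C = {c_composite:.2f}, peak flow = {q_peak:.2f} m^3/s")
```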
Procedia PDF Downloads 250
188 Sorbitol Galactoside Synthesis Using β-Galactosidase Immobilized on Functionalized Silica Nanoparticles
Authors: Milica Carević, Katarina Banjanac, Marija Ćorović, Ana Milivojević, Nevena Prlainović, Aleksandar Marinković, Dejan Bezbradica
Abstract:
Nowadays, considering the growing awareness of the beneficial effects of functional food on human health, due attention is dedicated to research aimed at obtaining new products with improved physiological and physicochemical characteristics. Different approaches to the synthesis of valuable bioactive compounds have therefore been proposed. β-Galactosidase, for example, although mainly utilized as a hydrolytic enzyme, has proved to be a promising tool for these purposes. Namely, under particular conditions, such as high lactose concentration, elevated temperature, and low water activity, the transfer of a galactose moiety to the free hydroxyl group of an alternative acceptor (e.g., different sugars, alcohols, or aromatic compounds) can generate a wide range of potentially interesting products. Up to now, galacto-oligosaccharides and lactulose have attracted the most attention due to their inherent prebiotic properties. The goal of this study was to obtain a novel product, sorbitol galactoside, using a similar reaction mechanism, namely the transgalactosylation reaction catalyzed by β-galactosidase from Aspergillus oryzae. By using a sugar alcohol (sorbitol) as the alternative acceptor, a diverse mixture of potential prebiotics is produced, giving it more favorable functional features. Nevertheless, the introduction of an alternative acceptor into the reaction mixture adds to the complexity of the reaction scheme, since several potential reaction pathways are introduced. Therefore, thorough optimization using the response surface method (RSM) was performed in order to gain insight into the influence of the individual parameters (lactose concentration, sorbitol-to-lactose molar ratio, enzyme concentration, NaCl concentration, and reaction time), as well as their mutual interactions, on product yield and productivity. To maximize product yield, the obtained model predicted an optimal lactose concentration of 500 mM, a sorbitol-to-lactose molar ratio of 9, an enzyme concentration of 0.76 mg/ml, a NaCl concentration of 0.8 M, and a reaction time of 7 h. From the aspect of productivity, the optimum substrate molar ratio was found to be 1, while the values of the other factors coincide. In order to further improve enzyme efficiency and enable its reuse and potential continuous application, β-galactosidase was immobilized onto tailored silica nanoparticles. These non-porous fumed silica nanoparticles (FNS) were chosen on the basis of their biocompatibility and non-toxicity, as well as their advantageous mechanical and hydrodynamic properties. However, in order to achieve better compatibility between the enzyme and the carrier, the silica surface was modified using an amino-functional organosilane (3-aminopropyltrimethoxysilane, APTMS). The obtained support with amino functional groups (AFNS) enabled high enzyme loadings and, more importantly, extremely high expressed activities, approximately 230 mg protein/g and 2100 IU/g, respectively. Moreover, this immobilized preparation showed high affinity towards sorbitol galactoside synthesis. Therefore, the findings of this study could provide a valuable contribution to the efficient production of physiologically active galactosides in immobilized enzyme reactors.
Keywords: β-galactosidase, immobilization, silica nanoparticles, transgalactosylation
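A sketch of the response-surface step described above: fit a quadratic model y = b0 + Σbi·xi + Σbij·xi·xj to designed experiments and locate the factor settings that maximize yield. The data are synthetic, and the study's five factors are reduced to two (lactose concentration and molar ratio) for brevity; this illustrates the RSM mechanics, not the paper's fitted model.

```python
# Sketch: quadratic response-surface fit and grid search for the optimum.
import numpy as np

def quad_features(x1, x2):
    # Intercept, linear, interaction, and pure quadratic terms.
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Synthetic design points: (lactose mM, sorbitol:lactose ratio) -> yield %
x1 = np.array([300., 300., 500., 500., 400., 400., 400.])
x2 = np.array([1., 9., 1., 9., 5., 5., 5.])
y = np.array([22., 31., 28., 41., 35., 34., 36.])

beta, *_ = np.linalg.lstsq(quad_features(x1, x2), y, rcond=None)

# Evaluate the fitted surface on a grid and report the predicted optimum.
g1, g2 = np.meshgrid(np.linspace(300, 500, 41), np.linspace(1, 9, 41))
pred = quad_features(g1.ravel(), g2.ravel()) @ beta
i = np.argmax(pred)
print(f"Predicted optimum: lactose={g1.ravel()[i]:.0f} mM, "
      f"ratio={g2.ravel()[i]:.1f}, yield={pred[i]:.1f}%")
```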
Procedia PDF Downloads 301
187 Academia as Creator of Emerging, Innovative Communities of Practice and Learning
Authors: Francisco Julio Batle Lorente
Abstract:
The present paper aims at presenting a new category of role for academia: proactive creator/promoter of communities of practice (CoPs) in emerging areas of innovation. It is based on research among practitioners in three different areas: social entrepreneurship, alumni engaged in entrepreneurship and innovation, and digital nomads. The concept of a CoP refers to an intentionally created space to share experiences and collectively reflect on the cases arising from practice. Such an endeavour is not contemplated explicitly in the literature on academic roles. The goal of the paper is to provide a framework for this function and to throw some light on the perceptions and priorities of members of emerging communities (78 alumni, 154 social entrepreneurs, and 231 digital nomads) regarding community, learning, engagement, and networking: areas in which the university can help and, by doing so, contribute to signalling the emerging area and creating new opportunities for academia. The research methodology was based on survey research, a specific type of field study that involves the collection of data from a sample of elements drawn from a well-defined population through the use of a questionnaire. Survey research was considered valuable to the present project, helping to outline the utility of various study designs and future projects with the emerging communities under investigation. Open questions were used for the different topics, as well as the critical incident technique. A standard technique for survey sampling and questionnaire design was used, and a procedure for pretesting questionnaires and for data collection was defined. The questionnaire was distributed by means of Google Forms. The results indicate that the members of emerging, innovative CoPs and communities of learning, such as the ones selected for this investigation, lack cohesion, inspiration, networking, opportunities for the creation of social capital, and opportunities for collaboration beyond their existing close networks. The opportunities that arise for academia from proactively helping to articulate CoPs (and communities of learning) relate to the key elements of any CoP/CoL: community construction approaches, technological infrastructure, benefits, participation issues and urgent challenges, trust, networking, technical ability/training/development, and collaboration. Beyond training, three other areas (networking, collaboration, and urgent challenges) were the ones in which the contribution of universities to the communities was considered most interesting and workable by practitioners. The analysis of the responses to the open questions related to the perception of universities reveals terra incognita for universities to explore (signalling new areas, establishing broader collaborations with research, government, media, and corporations, and attracting investment). Based on the findings from this research, there is some evidence that CoPs can offer a formal and informal method of professional and interprofessional development for members of any emerging and innovative community and can decrease social and professional isolation. The opportunity they offer academia can strengthen its entrepreneurial and engaged university identity. It also moves academia into a realm of more proactive confrontation with present and future challenges.
Keywords: social innovation, new roles of academia, community of learning, community of practice
Procedia PDF Downloads 83
186 Degradation of Diclofenac in Water Using FeO-Based Catalytic Ozonation in a Modified Flotation Cell
Authors: Miguel A. Figueroa, José A. Lara-Ramos, Miguel A. Mueses
Abstract:
Pharmaceutical residues are a class of emerging contaminants of anthropogenic origin that are present in a myriad of waters with which human beings interact daily, and they are starting to affect the ecosystem directly. Conventional wastewater treatment systems are not capable of degrading these pharmaceutical effluents because their designs cannot handle the intermediate products and biological effects occurring during treatment. That is why it is necessary to hybridize conventional wastewater systems with non-conventional processes. In the specific case of an ozonation process, its efficiency depends strongly on a perfect dispersion of ozone, long interaction times between the gas and liquid phases, and the size of the ozone bubbles formed throughout the reaction system. In order to improve these parameters, the use of a modified flotation cell has recently been proposed as a reactive system; flotation cells are used at an industrial level to facilitate the suspension of particles and spread gas bubbles through the reactor volume at a high rate. The objective of the present work is to develop a mathematical model that can closely predict the kinetic rates of the reactions taking place in the flotation cell at experimental scale, by identifying proper reaction mechanisms that take into account the modified chemical and hydrodynamic factors in the FeO-catalyzed ozonation of Diclofenac aqueous solutions in a flotation cell. The methodology comprises three steps. First, an experimental phase, in which a modified flotation cell reactor is used to analyze the effects of ozone concentration and catalyst loading on the degradation of Diclofenac aqueous solutions; performance is evaluated through an index of utilized ozone, which relates the amount of ozone supplied to the system per milligram of degraded pollutant. Second, a theoretical phase, in which the reaction mechanisms taking place during the experiments are identified and proposed, detailing the multiple direct and indirect reactions the system goes through. Third, the derivation of a kinetic model that mathematically represents the reaction mechanisms, with adjustable parameters that can be fitted to the experimental results and give the model proper physical meaning. The expected result is a robust reaction rate law that can simulate the improved Diclofenac mineralization in water obtained with the modified flotation cell reactor. By means of this methodology, the following results were obtained: a robust reaction pathway mechanism showcasing the intermediates, free radicals, and products of the reaction; optimal values of the reaction rate constants, yielding simulated Hatta numbers lower than 3 for the modeled system; degradation percentages of 100%; and a TOC (total organic carbon) removal of 69.9%, requiring an optimal FeO catalyst loading of only 0.3 g/L. These results showed that a flotation cell can be used as a reactor in ozonation, catalytic ozonation, and photocatalytic ozonation processes, since it produces high reaction rate constants and avoids the mass-transfer-limited regime (Ha > 3) by producing microbubbles and maintaining a good catalyst distribution.
Keywords: advanced oxidation technologies, iron oxide, emergent contaminants, AOTs intensification
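A sketch of the Hatta-number check invoked above for gas-liquid ozonation in a (pseudo-)first-order regime, Ha = √(k·D)/kL, where Ha < 3 indicates the reaction is not severely mass-transfer limited. All parameter values are illustrative, not the fitted constants from this work.

```python
# Sketch: Hatta number for pseudo-first-order ozone consumption in the liquid film.
import math

def hatta_number(k_app, diff_o3, k_l):
    """k_app: apparent first-order rate constant (1/s);
    diff_o3: O3 diffusivity in water (m^2/s);
    k_l: liquid-side mass transfer coefficient (m/s)."""
    return math.sqrt(k_app * diff_o3) / k_l

# Illustrative values: typical O3 diffusivity and a microbubble-enhanced kL.
ha = hatta_number(k_app=5.0, diff_o3=1.76e-9, k_l=1.0e-4)
regime = "fast/film regime (mass-transfer limited)" if ha > 3 else "kinetically controlled"
print(f"Ha = {ha:.2f} -> {regime}")
```

Smaller bubbles raise the interfacial area and kL, which lowers Ha toward the kinetically controlled regime the abstract reports.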
Procedia PDF Downloads 112
185 Selected Macrophyte Populations Promote Coupled Nitrification and Denitrification Function in Eutrophic Urban Wetland Ecosystems
Authors: Rupak Kumar Sarma, Ratul Saikia
Abstract:
Macrophytes encompass a major functional group in eutrophic wetland ecosystems. As a key functional element of freshwater lakes, they play a crucial role in regulating various wetland biogeochemical cycles and in maintaining biodiversity at the ecosystem level. The carbon-rich underground biomass of macrophyte populations may harbour diverse microbial communities with significant potential for maintaining different biogeochemical cycles. The present investigation was designed to study the macrophyte-microbe interaction in coupled nitrification and denitrification, considering Deepor Beel Lake (a Ramsar conservation site) in North East India as a model eutrophic system. Highly eutrophic sites of Deepor Beel were selected based on sediment oxygen demand and inorganic phosphorus and nitrogen (P&N) concentrations. Sediment redox potential and lake depth were chosen as the benchmarks for collecting the plant and sediment samples. The average highest depths in winter (January 2016) and summer (July 2016) were recorded as 20 ft (6.096 m) and 35 ft (10.668 m), respectively. Both sampling depth and sampling season had distinct effects on macrophyte community composition. Overall, the dominant macrophyte populations in the lake were Nymphaea alba, Hydrilla verticillata, Utricularia flexuosa, Vallisneria spiralis, Najas indica, Monochoria hastaefolia, Trapa bispinosa, Ipomea fistulosa, Hygrorhiza aristata, Polygonum hydropiper, Eichhornia crassipes, and Euryale ferox. There was a distinct correlation between the variation of major sediment physicochemical parameters and changes in macrophyte community composition. Quantitative estimation revealed an almost even accumulation of nitrate and nitrite in the sediment samples dominated by Eichhornia crassipes, Nymphaea alba, Hydrilla verticillata, Vallisneria spiralis, Euryale ferox, and Monochoria hastaefolia, which may signify a stable nitrification and denitrification process at the sites dominated by these six aquatic plants. This was further examined by a systematic analysis of microbial populations through culture-dependent and culture-independent approaches. The culture-dependent study of the bacterial community revealed larger populations of nitrifiers and denitrifiers in the sediment samples dominated by the six macrophyte species. However, the culture-independent study, using bacterial 16S rDNA V3-V4 metagenome sequencing, revealed overall similar bacterial phyla in all the sediment samples collected during the study. Thus, there may be an uneven distribution of nitrifying and denitrifying molecular markers among the sediment samples; their diversity and abundance are under investigation. The role of different aquatic plant functional types in microorganism-mediated nitrogen cycle coupling could thus be screened from this initial investigation.
Keywords: denitrification, macrophyte, metagenome, microorganism, nitrification
Procedia PDF Downloads 173
184 Lessons Learnt from Industry: Achieving Net Gain Outcomes for Biodiversity
Authors: Julia Baker
Abstract:
Development plays a major role in stopping biodiversity loss. But the 'silo species' protection of legislation (where certain species are protected while many are not) means that development can be 'legally compliant' and still result in biodiversity loss. 'Net Gain' (NG) policies can help overcome this by making it an absolute requirement that development causes no overall loss of biodiversity and brings a benefit. However, offsetting biodiversity losses in one location with gains elsewhere is controversial, because people suspect 'offsetting' to be an easy way for developers to buy their way out of conservation requirements. Yet the good practice principles (GPP) of offsetting provide several advantages over existing legislation for protecting biodiversity from development. This presentation describes the learning from implementing NG approaches based on GPP. It concerns major upgrades of the UK's transport networks, which involved removing vegetation in order to construct and safely operate new infrastructure. While low-lying habitats were retained, trees and other habitats disrupting the running or safety of transport networks could not be. Consequently, achieving NG within the transport corridor was not possible, and offsetting was required. The first lessons learnt were on obtaining a commitment from business leaders to go beyond legislative requirements and deliver NG, and on the institutional change necessary to embed GPP within daily operations. These issues can only be addressed when the challenges that biodiversity poses for business are overcome. These challenges included: biodiversity cannot be measured easily, unlike other sustainability factors like carbon and water that have metrics for target-setting and measuring progress; and the mindset that biodiversity costs money and does not generate cash in return, which is the opposite of carbon or waste, for example, where people can see how 'sustainability' actions save money. The challenges were overcome by presenting the GPP of NG as a cost-efficient solution to specific, critical risks facing the business that also boosts industry recognition, and by using government-issued NG metrics to develop business-specific toolkits charting NG progress while ensuring that NG decision-making was based on rich ecological data. Institutional change was best achieved by supporting, mentoring, and training sustainability/environmental managers, so that these 'frontline' staff could embed GPP within the business. The second learning was from implementing the GPP where business partnered with local governments, wildlife groups, and landowners to support their priorities for nature conservation, and where these partners had a say in decisions about where and how best to achieve NG. From this inclusive approach, offsetting contributed towards conservation priorities when all collaborated to manage trade-offs between:
- delivering ecologically equivalent offsets, or compensating for losses of one type of biodiversity by providing another;
- achieving NG locally to the development, while contributing towards national conservation priorities through landscape-level planning;
- not just protecting the extent and condition of existing biodiversity but 'doing more'.
The multi-sector collaborations identified practical, workable solutions to 'in perpetuity'. But key was strengthening linkages between biodiversity measures implemented for development and conservation work undertaken by local organizations, so that developers support NG initiatives that really count.
Keywords: biodiversity offsetting, development, nature conservation planning, net gain
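A sketch of the kind of habitat-unit arithmetic behind government-issued NG metrics (in the UK, the Defra biodiversity metric follows this general shape): units are scored as area multiplied by habitat distinctiveness and condition, and net gain compares pre- and post-development totals. The habitat entries and scores below are illustrative, not figures from the presentation.

```python
# Sketch: biodiversity-unit accounting for a net gain (NG) assessment.
# units = area (ha) x distinctiveness score x condition score (simplified).

def habitat_units(area_ha, distinctiveness, condition):
    return area_ha * distinctiveness * condition

baseline = [                 # (area ha, distinctiveness, condition)
    (2.0, 4, 2),             # woodland lost to the scheme
    (5.0, 2, 2),             # grassland within the corridor
]
post_development = [
    (4.5, 4, 2),             # offset woodland planted off-site
    (5.0, 2, 3),             # grassland retained and enhanced
]

before = sum(habitat_units(*h) for h in baseline)
after = sum(habitat_units(*h) for h in post_development)
print(f"Baseline {before:.0f} units, post-scheme {after:.0f} units, "
      f"net change {100 * (after - before) / before:+.0f}%")
```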
Procedia PDF Downloads 195
183 Benefits of Environmental Aids to Chronobiology Management and Its Impact on Depressive Mood in an Operational Setting
Authors: M. Trousselard, D. Steiler, C. Drogou, P. van-Beers, G. Lamour, S. N. Crosnier, O. Bouilland, P. Dubost, M. Chennaoui, D. Léger
Abstract:
According to published data, undersea navigation for long periods on a nuclear-powered ballistic missile submarine (SSBN) constitutes an extreme environment in which crews are subjected to multiple stresses for 60-80 days, including the absence of natural light, illuminance below 1,000 lux, and watch schedules that do not respect natural chronobiological rhythms. These stresses are clearly detrimental to the submariners' sleep, with consequences for their affective (seasonal affective disorder-like) and cognitive functioning. In the long term, publications abound regarding the consequences of sleep disruption for the occurrence of organic cardiovascular, metabolic, immunological, or malignant diseases. It therefore seems essential to propose countermeasures for the duration of the patrol in order to reduce the negative physiological effects on the sleep and mood of submariners. Light therapy, the preferred treatment for dysfunctions of the internal biological clock and the resulting seasonal depression, cannot be used without data on the submariners' chronobiology (melatonin secretion curve) during patrols, given the unusual characteristics of their working environment; such data are not available in the literature. The aim of this project was to assess, in the course of two studies, the benefits of two environmental techniques for managing chronobiological stress: techniques for optimizing potential (TOP; study 1), an existing programme to assist the psychophysiological regulation of stress and sleep in the armed forces, and dawn and dusk simulators (DDS; study 2). For each experiment, psychological, physiological (sleep), or biological (melatonin secretion) data were collected on D20 and D50 of the patrol. In the first experiment, we studied sleep and depressive distress in 19 submariners in an operational setting on board an SSBN during a first patrol, and assessed the impact of TOP on the quality of sleep and depressive distress in these same submariners over the course of a second patrol. The submariners were trained in TOP between the two patrols for a 2-month period, at a rate of 1 h of training per week, and were assigned daily informal exercises. Results show moderate disruptions in sleep pattern and duration, associated with the intensity of depressive distress. The use of TOP during the following patrol improved sleep and depressive mood only in submariners who practiced the techniques regularly. In light of these limited benefits, we assessed, in a second experiment, the benefits of DDS on chronobiology (daily secretion of melatonin) and depressive distress. Ninety submariners were randomly allocated to two groups, group 1 using DDS daily and group 2 constituting the control group. Although the placebo effect was not controlled, results showed a beneficial effect on chronobiology and depressive mood for submariners with a morning chronotype. Conclusions: These findings demonstrate the difficulty of practicing psychophysiological management tools in real life. They raise the question of subjects' autonomy with respect to using aids that involve regular practice. It seems important to study autonomy in future studies, as a cognitive resource resulting from the interaction between internal positive resources and 'coping' resources, to gain a better understanding of compliance problems.
Keywords: chronobiology, light therapy, seasonal affective disorder, sleep, stress, stress management, submarine
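A sketch of a cosinor fit, a standard way to characterize the melatonin secretion curve mentioned above: y(t) = MESOR + A·cos(2π(t − φ)/24), where the acrophase φ marks the timing of the rhythm's peak. The sample values are illustrative, not the submariners' data, and the abstract does not state which rhythm-analysis method the studies used.

```python
# Sketch: cosinor analysis of a 24 h melatonin rhythm with scipy.
import numpy as np
from scipy.optimize import curve_fit

def cosinor(t, mesor, amplitude, acrophase_h):
    return mesor + amplitude * np.cos(2 * np.pi * (t - acrophase_h) / 24.0)

t_h = np.array([0, 3, 6, 9, 12, 15, 18, 21])                     # sampling clock times
melatonin = np.array([42., 55., 30., 12., 6., 5., 10., 28.])     # pg/mL, illustrative

(mesor, amp, acro), _ = curve_fit(cosinor, t_h, melatonin, p0=(20., 20., 2.))
print(f"MESOR {mesor:.1f} pg/mL, amplitude {amp:.1f}, acrophase ~{acro % 24:.1f} h")
```

Comparing acrophase and amplitude at D20 versus D50, or between DDS and control groups, is how a shift or flattening of the rhythm would show up.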
Procedia PDF Downloads 456
182 Managing Inter-Organizational Innovation Project: Systematic Review of Literature
Authors: Lamin B Ceesay, Cecilia Rossignoli
Abstract:
Inter-organizational collaboration is a growing phenomenon in both research and practice. Partnership between organizations enables firms to leverage external resources, experiences, and technology that lie with other firms. This collaborative practice is a source of improved business model performance, technological advancement, and increased competitive advantage for firms. However, the competitive intents, and even diverse institutional logics, of firms make inter-firm innovation-based partnerships even more complex, and their governance more challenging. The purpose of this paper is to present a systematic review of research linking the inter-organizational relationships of firms with their innovation practice, and to specify the different project management issues and gaps addressed in previous research. To do this, we employed a systematic review of the literature on inter-organizational innovation using two complementary scholarly databases - ScienceDirect and Web of Science (WoS). Article scoping relied on a combination of keywords based on similar terms used in the literature: (1) inter-organizational relationship, (2) business network, (3) inter-firm project, and (4) innovation network. These searches were conducted in the title, abstract, and keywords of conceptual and empirical research papers written in English. Our search covers the period from 2010 to 2019. We applied several exclusion criteria: papers published outside the years under review, papers in a language other than English, papers listed in neither WoS nor ScienceDirect, and papers not sharply related to inter-organizational innovation-based partnership were removed. After all relevant search criteria were applied, a final list of 84 papers constitutes the data for this review. Our review revealed an increasing evolution of inter-organizational relationship research during the period under review. The descriptive analysis of papers by journal outlet finds that the International Journal of Project Management (IJPM), the Journal of Industrial Marketing, the Journal of Business Research (JBR), etc. are the leading journal outlets for research on inter-organizational innovation projects. The review also finds that qualitative and quantitative approaches are, respectively, the leading research methods adopted by scholars in the field, whereas literature reviews and conceptual papers constitute the least common. During the content analysis of the selected papers, we read the content of each paper and found that the selected papers address one of three phenomena in inter-organizational innovation research: (1) project antecedents, (2) project management, and (3) project performance outcomes. We found that these categories are not mutually exclusive, but rather interdependent. This categorization also helped us to organize the fragmented literature in the field. While a significant percentage of the literature discussed project management issues, we found less extant literature on project antecedents and performance. As a result, we organized the future research agendas addressed in several papers by linking them with the under-researched themes in the field, thus providing great potential to advance future research, especially on those under-researched themes. Finally, our paper reveals that research on inter-organizational innovation projects is generally fragmented, which hinders a better understanding of the field.
Thus, this paper contributes to the understanding of the field by organizing and discussing the extant literature to advance the theory and application of inter-organizational relationships.
Keywords: inter-organizational relationship, inter-firm collaboration, innovation projects, project management, systematic review
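As a concrete illustration of the screening step described above, the following minimal Python sketch encodes the review's inclusion window (2010-2019), language, database, and relevance filters. The record fields and example entries are illustrative assumptions, not data from the review.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    year: int
    language: str
    sources: set   # databases the paper is indexed in, e.g. {"WoS", "ScienceDirect"}
    relevant: bool # judged topically related to inter-organizational innovation

def passes_screening(p: Paper) -> bool:
    """Apply the review's exclusion criteria; True means the paper is retained."""
    return (
        2010 <= p.year <= 2019
        and p.language == "English"
        and bool(p.sources & {"WoS", "ScienceDirect"})
        and p.relevant
    )

# Hypothetical records: the second fails the language criterion
papers = [
    Paper("Governing innovation networks", 2015, "English", {"WoS"}, True),
    Paper("Ein Beitrag zur Netzwerkforschung", 2014, "German", {"WoS"}, True),
]
retained = [p for p in papers if passes_screening(p)]
print(len(retained))  # -> 1
```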
Procedia PDF Downloads 113
181 Modeling Thermal Changes of Urban Blocks in Relation to the Landscape Structure and Configuration in Guilan Province
Authors: Roshanak Afrakhteh, Abdolrasoul Salman Mahini, Mahdi Motagh, Hamidreza Kamyab
Abstract:
Urban Heat Islands (UHIs) are distinctive urban areas characterized by densely populated central cores surrounded by less densely populated peripheral lands. These areas experience elevated temperatures, primarily due to impermeable surfaces and specific land use patterns. The consequences of these temperature variations are far-reaching, impacting the environment and society negatively and leading to increased energy consumption, air pollution, and public health concerns. This paper emphasizes the need for simplified approaches to comprehend UHI temperature dynamics and explains how urban development patterns contribute to land surface temperature variation. To illustrate this relationship, the study focuses on the Guilan Plain, utilizing techniques like principal component analysis and generalized additive models. The research centered on mapping land use and land surface temperature in the low-lying area of Guilan province. Satellite data from Landsat sensors for three different time periods (2002, 2012, and 2021) were employed. Using eCognition software, a spatial unit known as a "city block" was delineated through object-based analysis. The study also applied the normalized difference vegetation index (NDVI) method in estimating land surface radiance. Predictive variables for urban land surface temperature within residential city blocks were identified and categorized as intrinsic (related to the block's structure) and neighboring (related to adjacent blocks) variables. Principal Component Analysis (PCA) was used to select significant variables, and a Generalized Additive Model (GAM) approach, implemented using R's mgcv package, modeled the relationship between urban land surface temperature and the predictor variables. Notable findings included variations in urban temperature across different years, attributed to environmental and climatic factors. Block size, shared boundary, mother polygon area, and perimeter-to-area ratio were identified as the main variables for the generalized additive regression model. This model showed non-linear relationships, with block size, shared boundary, and mother polygon area positively correlated with temperature, while the perimeter-to-area ratio displayed a negative trend. The discussion highlights the challenges of predicting urban surface temperature and the significance of block size in determining urban temperature patterns. It also underscores the importance of spatial configuration and unit structure in shaping urban temperature patterns. In conclusion, this study contributes to the growing body of research on the connection between land use patterns and urban surface temperature. Block size, along with block dispersion and aggregation, emerged as key factors influencing urban surface temperature in residential areas. The proposed methodology enhances our understanding of parameter significance in shaping urban temperature patterns across various regions, particularly in Iran.
Keywords: urban heat island, land surface temperature, LST modeling, GAM, Guilan province
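To make the modeling pipeline concrete, the sketch below reproduces its three computational steps on synthetic data: an NDVI raster from red and near-infrared bands, a PCA screen of block-level predictors, and a GAM with one smooth term per predictor. The study used R's mgcv; pyGAM is assumed here only as a Python stand-in, and all values are simulated rather than drawn from the Guilan data.

```python
import numpy as np
from sklearn.decomposition import PCA
from pygam import LinearGAM, s  # pyGAM assumed as a stand-in for R's mgcv

rng = np.random.default_rng(0)

# NDVI from Landsat red and near-infrared reflectance bands (simulated rasters)
red = rng.random((100, 100))
nir = rng.random((100, 100))
ndvi = (nir - red) / (nir + red + 1e-9)

# Block-level predictors: block size, shared boundary, mother polygon area,
# perimeter-to-area ratio (simulated), plus a synthetic land surface temperature
X = rng.random((500, 4))
lst = 30 + 2 * X[:, 0] + X[:, 1] + X[:, 2] - 3 * X[:, 3] + rng.normal(0, 0.5, 500)

# PCA screen of predictors, as used in the study to select significant variables
print(PCA().fit(X).explained_variance_ratio_)

# GAM with one smooth term per predictor, mirroring mgcv-style s() terms
gam = LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(X, lst)
gam.summary()
```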
Procedia PDF Downloads 73
180 Achieving the Status of Total Sanitation in the Rural Nepalese Context: A Case Study from Amarapuri, Nepal
Authors: Ram Chandra Sah
Abstract:
A few years back, Nepal, naturally a very beautiful country, was facing many problems related to the practice of open defecation (having no toilet) by almost 98% of the people of the country. Now, the scenario has changed. The Government of Nepal set the target of achieving basic level sanitation (toilet) facilities by 2017 AD, for which the Sanitation and Hygiene Master Plan (SHMP) was introduced in 2011 AD. Its major strengths are institutional set-up formation; local formal authority leadership; locally formulated strategic plans; a partnership-based, harmonized and coordinated approach to working; no blanket subsidies or support; and community and local institution/organization mobilization approaches. Now, the Open Defecation Free (ODF) movement in the country is in full swing. The SHMP has clearly defined Total Sanitation, which is accepted as achieved if all the households of the related boundary have achieved six indicators: access to and regular use of toilet(s); regular use of soap and water at the critical moments; regular practice of food hygiene behavior; regular practice of water hygiene behavior, including household-level purification of locally available drinking water; maintenance of regular personal hygiene with household-level waste management; and the availability of an overall clean environment at the concerned level of boundary. Nepal has 3158 Village Development Committees (VDCs) in the rural areas. Amarapuri VDC was selected for the purpose of achieving Total Sanitation. Based on the SHMP, different methodologies were adopted for achieving 100% coverage of each indicator, such as updating of the Village Water Sanitation and Hygiene Coordination Committee (V-WASH-CC); Total Sanitation team formation, including one volunteer for each indicator; campaigning through settlement meetings; a midterm evaluation, which revealed the need for 45 additional ward-level volunteers (5 for each of the 9 wards); ward-wise awareness creation with the help of the volunteers; informative notice boards and hoarding boards with related messages at important locations; management of separate waste disposal rings for decomposable and non-decomposable wastes; dissemination of related messages through different types of local cultural programs; public toilet construction and management at the community level; mobilization of local schools, offices and health posts; and reward and recognition for contributors. The VDC was in a very poor situation in 2010, with just 50, 30, 60, 60, 40, and 30 percent coverage of the respective indicators, yet it became the first VDC of the country to be declared as having achieved Total Sanitation. The expected result of 100 percent coverage of all the indicators was achieved in 2 years, 10 months and 19 days. The experiences of Amarapuri were replicated successfully in different parts of the country, and many VDCs have been declared as having achieved Total Sanitation. Thus, the community-mobilized Total Sanitation movement in Nepal has greatly supported the achievement of a Total Sanitation situation in the country at minimal cost, and it is believed that the approach can be very useful for other developing or underdeveloped countries of the world.
Keywords: community mobilized, open defecation free, sanitation and hygiene master plan, total sanitation
Procedia PDF Downloads 199
179 A Framework for Automated Nuclear Waste Classification
Authors: Seonaid Hume, Gordon Dobie, Graeme West
Abstract:
Detecting and localizing radioactive sources is a necessity for safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. Also, in-cell decommissioning is still in its relative infancy, and few techniques are well-developed. As with any repetitive and routine task, there is the opportunity to improve the task of classifying nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages: 3D spatial mapping and object detection; object classification; radiological mapping; source localisation based on gathered evidence; and, finally, waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches to waste classification are made. Object detection focuses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed any industrial site. The approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the objects detected in order to feature-match them to an inventory of possible items found in that nuclear cell. Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely. Hence, this may require expert input to feature-match objects. The third stage, radiological mapping, proceeds similarly in order to facilitate the characterization of the nuclear cell in terms of radiation fields, including the type of radiation, activity, and location within the nuclear cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage involves combining the evidence from the fused data sets to reveal the classification of the waste in Bq/kg, thus enabling better decision making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advancements in the detection and mapping capabilities of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, cost-effective and safer.
Keywords: nuclear decommissioning, radiation detection, object detection, waste classification
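A minimal sketch of the data flow through the five stages, with the final Bq/kg computation made explicit. The object fields, reading structure, and numbers are hypothetical placeholders rather than the framework's actual data model.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:          # stages 1-2 output: a fitted primitive and its properties
    kind: str                  # e.g. "cylinder" for pipework
    dimensions: tuple          # fitted parameters, e.g. (radius_m, length_m)
    mass_kg: float             # estimated from dimensions and assumed material

@dataclass
class RadiationReading:        # stage 3 output: a point in the radiological map
    position: tuple            # (x, y, z) in the cell frame
    activity_bq: float         # activity attributed to this location

def classify_waste(obj: DetectedObject, readings: list) -> float:
    """Stages 4-5: fuse the activity evidence attributed to an object and
    express it as specific activity in Bq/kg for disposal classification."""
    total_activity = sum(r.activity_bq for r in readings)
    return total_activity / obj.mass_kg

# One hypothetical drum with two nearby localized readings
drum = DetectedObject("cylinder", (0.3, 0.9), 120.0)
nearby = [RadiationReading((1.0, 0.2, 0.5), 1.5e6),
          RadiationReading((1.1, 0.2, 0.5), 0.9e6)]
print(f"{classify_waste(drum, nearby):.0f} Bq/kg")  # -> 20000 Bq/kg
```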
Procedia PDF Downloads 200
178 A Digital Environment for Developing Mathematical Abilities in Children with Autism Spectrum Disorder
Authors: M. Isabel Santos, Ana Breda, Ana Margarida Almeida
Abstract:
Research on the academic abilities of individuals with autism spectrum disorder (ASD) underlines the importance of mathematics interventions. Yet the proposal of digital applications for children and youth with ASD continues to attract little attention, namely regarding the development of mathematical reasoning, even though digital technologies are an area of great interest for individuals with this disorder and their use is certainly a facilitative strategy in the development of mathematical abilities. The use of digital technologies can be an effective way to create innovative learning opportunities for these students and to develop creative, personalized and constructive environments where they can develop differentiated abilities. Children with ASD often respond well to learning activities involving information presented visually. In this context, we present the digital Learning Environment on Mathematics for Autistic children (LEMA), a research project developed within a PhD in Multimedia in Education by the Thematic Line Geometrix, located in the Department of Mathematics, in a collaborative effort with the DigiMedia Research Center of the Department of Communication and Art (University of Aveiro, Portugal). LEMA is a digital mathematical learning environment whose activities are dynamically adapted to the user’s profile, towards the development of the mathematical abilities of children aged 6–12 years diagnosed with ASD. LEMA has already been evaluated with end-users (both students and expert teachers), and based on the analysis of the collected data, readjustments were made, enabling the continuous improvement of the prototype, namely considering the integration of universal design for learning (UDL) approaches, which are of the utmost importance in ASD due to its heterogeneity. The learning strategies incorporated in LEMA are: (i) provides options for custom choice of math activities, according to the user’s profile; (ii) integrates simple interfaces with few elements, presenting only the features and content needed for the ongoing task; (iii) uses simple visual and textual language; (iv) uses different types of feedback (auditory, visual, positive/negative reinforcement, hints with helpful instructions including math concept definitions, solved math activities using split and easier tasks and, finally, videos/animations that show a solution to the proposed activity); (v) provides information in multiple representations, such as text, video, audio and image, for better content and vocabulary understanding, in order to stimulate, motivate and engage users in mathematical learning, also helping users to focus on content; (vi) avoids using elements that distract or interfere with focus and attention; (vii) provides clear instructions and orientation about tasks to ease the user’s understanding of the content and its language; and (viii) uses buttons, familiar icons and contrast between font and background. Since these children may have little sensory tolerance and impaired motor skills, besides the possibility of interacting with LEMA through the mouse (point and click with a single button), the user can interact with LEMA through a Kinect device (using simple gesture moves).
Keywords: autism spectrum disorder, digital technologies, inclusion, mathematical abilities, mathematical learning activities
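The profile-driven adaptation described above can be illustrated with a small selection routine; the profile fields, activity attributes, and ranking rule are assumptions for illustration, not LEMA's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    mastered_topics: set
    prefers_video_feedback: bool

@dataclass
class Activity:
    topic: str
    min_age: int
    feedback: str  # "video", "audio", "text", ...

def next_activities(profile: Profile, catalog: list) -> list:
    """Keep age-appropriate, not-yet-mastered activities, ranking the
    child's preferred feedback modality first."""
    suitable = [a for a in catalog
                if a.min_age <= profile.age and a.topic not in profile.mastered_topics]
    preferred = "video" if profile.prefers_video_feedback else "text"
    return sorted(suitable, key=lambda a: a.feedback != preferred)

catalog = [Activity("counting", 6, "video"), Activity("fractions", 7, "text")]
child = Profile(age=7, mastered_topics={"counting"}, prefers_video_feedback=True)
print([a.topic for a in next_activities(child, catalog)])  # -> ['fractions']
```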
Procedia PDF Downloads 116
177 The Socio-Economic Impact of the English Leather Glove Industry from the 17th Century to Its Recent Decline
Authors: Frances Turner
Abstract:
Gloves are significant physical objects, being one of the oldest forms of dress. Glove culture is part of every facet of life; its extraordinary history encompasses practicality and symbolism, reflecting a wide range of social practices. The survival of not only the gloves but also associated articles makes it possible to analyse real lives; however, so far this area has been largely neglected. Limited information is available to students, researchers, or those involved with the design and making of gloves. There are several museums and independent collectors in England that hold collections of gloves (some from as early as the 16th century), machinery, tools, designs and patterns, marketing materials and significant archives which demonstrate the rich heritage of English glove design and manufacturing, being of national significance and worthy of international interest. Through a glove research network, which now exists thanks to research grant funding, there is potential for the holders of glove collections to make connections and explore links between these resources to promote a stronger understanding of the significance, breadth and heritage of the English glove industry. The network takes an interdisciplinary approach to bring together interested parties from academia, museums and manufacturing, with expert knowledge of the production, collections, conservation and display of English leather gloves. Academics from diverse arts and humanities disciplines benefit from the opportunities to share research and discuss ideas with network members from non-academic contexts, including museums and heritage organisations, industry, and contemporary designers. The fragmented collections, when considered in their entirety, provide an overview of English glove making since earliest times and of those who wore gloves. This paper makes connections and explores links between these resources to promote a stronger understanding of the significance, breadth and heritage of the English glove industry. The following areas are explored: the current content and status of the individual museum collections; potential links and the sharing of information; social and cultural histories and their relationship to the history of fashion design, manufacturing and materials; approaches to maintenance and conservation; access to the collections; and strategies for future understanding of their national significance. The facilitation of knowledge exchange and exploration of the collections through the network informs organisations’ future strategies for the maintenance, access and conservation of their collections. By involving industry in the network, it is possible to ensure a contemporary perspective on glove-making in addition to the input from heritage partners. The slow fashion movement, awareness of artisan craft, and how these can be preserved and adopted for glove and accessory design are also addressed. Artisan leather glove making was a skilled and significant industry in England that has now declined to the point where little production remains utilising the specialist skills that have hardly changed since earliest times. This heritage must be identified and preserved for future generations, or the rich cultural history of gloves may be lost.
Keywords: artisan glove-making skills, English leather gloves, glove culture, the glove network
Procedia PDF Downloads 129
176 Childhood Sensory Sensitivity: A Potential Precursor to Borderline Personality Disorder
Authors: Valerie Porr, Sydney A. DeCaro
Abstract:
TARA for borderline personality disorder (BPD), an education and advocacy organization, helps families to compassionately and effectively deal with troubling BPD behaviors. Our psychoeducational programs focus on understanding the underlying neurobiological features of BPD and on evidence-based methodology integrating dialectical behavior therapy (DBT) and mentalization-based therapy (MBT), clarifying the inherent misunderstanding of BPD behaviors and improving family communication. TARA4BPD conducts online surveys, workshops, and topical webinars. For over 25 years, we have collected data from BPD helpline callers. This data drew our attention to particular childhood idiosyncrasies that seem to characterize many of the children who later met the criteria for BPD. The idiosyncrasies we observed, heightened sensory sensitivity and hypervigilance, were included in Adolph Stern’s 1938 definition of “borderline.” This aspect of BPD has not been prioritized by personality disorder researchers, who are presently focused on emotion processing and social cognition in BPD. Parents described sleep-reversal problems in infants who, early on, seem to exhibit dysregulation of circadian rhythm. Families describe children as supersensitive to sensory sensations, such as specific sounds, a heightened sense of smell and taste, the textures of foods, and an inability to tolerate various fabric textures (e.g., seams in socks). They also exhibit high sensitivity to particular words and voice tones. Many have alexithymia and dyslexia. These children are either hypo- or hypersensitive to sensory sensations, including pain. Many suffer from fibromyalgia. BPD reactions to pain have been studied (C. Schmahl), and these studies confirm the existence of hyper- and hypo-reactions to pain stimuli in people with BPD. To date, there is little or no data regarding what comprises a normative range of sensitivity in infants and children. Many parents reported that their children were tested or treated for sensory processing disorder (SPD), learning disorders, and ADHD. SPD is not included in the DSM and is treated by occupational therapists. The overwhelming anecdotal data from thousands of parents of children who later met criteria for BPD led TARA4BPD to develop a sensitivity survey to gather evidence on the possible role of early sensory perception problems as a precursor to BPD, hopefully initiating new directions in BPD research. At present, the research community seems unaware of the role supersensory sensitivity might play as an early indicator of BPD. Parents’ observations of childhood sensitivity, obtained through family interviews, and the results of an extensive online survey on sensory responses across various ages of development will be presented. People with BPD suffer from a sense of isolation and otherness that often results in later interpersonal difficulties. Early identification of supersensitive children, while brain circuits are developing, might decrease the development of social interaction deficits such as rejection sensitivity, self-referential processes, and negative bias, hallmarks of BPD, ultimately minimizing the maladaptive methods of coping with distress that characterize BPD. Family experiences are an untapped resource for BPD research. It is hoped that this data will give family observations the critical credibility to inform future treatment and research directions.
Keywords: alexithymia, dyslexia, hypersensitivity, sensory processing disorder
Procedia PDF Downloads 201
175 Characterizing the Rectification Process for Designing Scoliosis Braces: Towards Digital Brace Design
Authors: Inigo Sanz-Pena, Shanika Arachchi, Dilani Dhammika, Sanjaya Mallikarachchi, Jeewantha S. Bandula, Alison H. McGregor, Nicolas Newell
Abstract:
The use of orthotic braces for adolescent idiopathic scoliosis (AIS) patients is the most common non-surgical treatment to prevent deformity progression. The traditional method of creating an orthotic brace involves casting the patient’s torso to obtain a representative geometry, which is then rectified by an orthotist to the desired geometry of the brace. Recent improvements in 3D scanning technologies, rectification software, CNC, and additive manufacturing processes have made it possible to complement, or in some cases replace, manual methods with digital approaches. However, the rectification process remains dependent on the orthotist’s skills. Therefore, the rectification process needs to be carefully characterized to ensure that braces designed through a digital workflow are as efficient as those created using a manual process. The aim of this study is to compare 3D scans of patients with AIS against 3D scans of both pre- and post-rectified casts that have been manually shaped by an orthotist. Six AIS patients were recruited from the Ragama Rehabilitation Clinic, Colombo, Sri Lanka. All patients were between 10 and 15 years old, were skeletally immature (Risser grade 0-3), and had Cobb angles between 20-45°. Seven spherical markers were placed at key anatomical locations on each patient’s torso and on the pre- and post-rectified molds so that distances could be reliably measured. 3D scans were obtained of 1) the patient’s torso and pelvis, 2) the patient’s pre-rectification plaster mold, and 3) the patient’s post-rectification plaster mold using a Structure Sensor Mark II 3D scanner (Occipital Inc., USA). 3D stick body models were created for each scan to represent the distances between anatomical landmarks. The 3D stick models were used to analyze the changes in position and orientation of the anatomical landmarks between scans using Blender open-source software. 3D surface deviation maps represented volume differences between the scans using CloudCompare open-source software. The 3D stick body models showed changes in the position and orientation of thorax anatomical landmarks between the patient and the post-rectification scans for all patients. Anatomical landmark position and volume differences were seen between 3D scans of the patients’ torsos and the pre-rectified molds. Between the pre- and post-rectified molds, material removal was consistently seen on the anterior side of the thorax and the lateral areas below the ribcage. Volume differences were seen in areas where the orthotist planned to place pressure pads (usually at the trochanter on the side to which the lumbar curve was tilted (trochanter pad), at the lumbar apical vertebra (lumbar pad), on the rib connected to the apical vertebra at the mid-axillary line (thoracic pad), and on the ribs corresponding to the upper thoracic vertebra (axillary extension pad)). The rectification process requires the skill and experience of an orthotist; however, this study demonstrates that the brace shape and the location and volume of material removed from the pre-rectification mold can be characterized and quantified. Results from this study can be fed into software that accelerates the brace design process and makes steps towards an automated digital rectification process.
Keywords: additive manufacturing, orthotics, scoliosis brace design, sculpting software, spinal deformity
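A minimal sketch of how landmark-to-landmark "stick body" distances could be compared between a torso scan and a mold scan, assuming marker coordinates have already been extracted from the 3D scans; the coordinates here are simulated, not measured data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3D coordinates (metres) of the seven spherical markers, as
# extracted from the torso scan and from the pre-rectification mold scan
torso_markers = rng.random((7, 3))
mold_markers = torso_markers + rng.normal(0, 0.005, (7, 3))

def stick_lengths(markers: np.ndarray) -> np.ndarray:
    """Distances between consecutive landmarks - the edges of a 'stick body'."""
    return np.linalg.norm(np.diff(markers, axis=0), axis=1)

# Change in landmark-to-landmark distances between the scan and the mold
delta = stick_lengths(mold_markers) - stick_lengths(torso_markers)
print(np.round(delta * 1000, 1), "mm")
```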
Procedia PDF Downloads 145
174 Promoting Physical Activity through Urban Active Environments: Learning from Practice and Policy Implementation in the EU Space Project
Authors: Rosina U. Ndukwe, Diane Crone, Nick Cavill
Abstract:
Active transport (e.g., walking to school, cycle-to-work schemes) is an effective approach, with multiple social and environmental benefits, for transforming urban environments into active urban environments. Although walking and cycling often remain on the margins of urban planning and infrastructure, there are new approaches emerging, along with policy interventions relevant to the creation of sustainable urban active environments conducive to active travel, increasing the physical activity levels of the communities involved and supporting social inclusion through more active participation. SPAcE - Supporting Policy and Action for Active Environments is a 3-year Erasmus+ project that aims to integrate active transport programmes into public policy across the EU. SPAcE focuses on cities/towns with recorded low physical activity levels to support the development of active environments in 5 sites: Latvia [Tukums], Italy [Palermo], Romania [Brasov], Spain [Castilla-La Mancha] and Greece [Trikala]. The first part of the project involved a review of good practice, including case studies from across the EU and the project partner countries. This resulted in the first output from the project: a summary of evidence of good practice with case study examples. In the second part of the project, working groups across the 5 sites have carried out co-production to develop Urban Active Environments (UActivE) Action Plans aimed at influencing policy and practice for increasing physical activity, primarily through the use of cycling and walking. Action plans are based on international evidence and guidance for healthy urban planning. The remaining project partners include universities (Gloucestershire, Oxford, Zurich, Thessaly) and the Fit for Life programme (the national physical activity promotion programme of Finland), who provide support and advice incorporating current evidence, healthy urban planning and mentoring. Cooperation and co-production with public health professionals, local government officers, education authorities and transport agencies have been a key approach of the project. The third stage of the project has involved training partners in the WHO HEAT tool to support the implementation of the Action Plans. Project results show how multi-agency, transnational collaboration can produce real-life Action Plans in five EU countries, based on published evidence, real-life experience, consultation and collaborative working with other organisations across the EU. Learning from the processes adopted within this project will demonstrate how public health, local government and transport agencies across the EU can work together to create healthy environments aimed at facilitating active behaviour, even in times of constrained public budgets. The SPAcE project has captured both the challenges and the solutions for increasing population physical activity levels, health and wellness in urban spaces and for translating evidence into policy and practice, ensuring innovation at the policy level. Funding acknowledgment: SPAcE (www.activeenvironments.eu) is co-funded by the Sport action of the ERASMUS+ programme.
Keywords: action plans, active transport, SPAcE, UActivE urban active environments, walking and cycling
Procedia PDF Downloads 264
173 Strengths Profiling: An Alternative Approach to Assessing Character Strengths Based on Personal Construct Psychology
Authors: Sam J. Cooley, Mary L. Quinton, Benjamin J. Parry, Mark J. G. Holland, Richard J. Whiting, Jennifer Cumming
Abstract:
Practitioners draw attention to people’s character strengths to promote empowerment and well-being. This paper explores the possibility that existing approaches for assessing character strengths (e.g., the Values in Action survey; VIA-IS) could be even more autonomy-supportive and empowering when combined with strengths profiling, an idiographic tool informed by personal construct theory (PCT). A PCT approach ensures that: (1) knowledge is co-created (i.e., the practitioner is not seen as the ‘expert’ who leads the process); (2) individuals are not required to ‘fit’ within a prescribed list of characteristics; and (3) individuals are free to use their own terminology and interpretations. A combined strengths profiling and VIA approach was used in a sample of homeless youth (aged 16-25), who are commonly perceived as ‘hard to engage’ through traditional forms of assessment. Strengths profiling was completed face-to-face in small groups. Participants (N = 116) began by listing a variety of personally meaningful characteristics. Participants gave each characteristic a score out of ten for how important it was to them (1 = not so important; 10 = very important), their ideal competency, and their current competency (1 = poor; 10 = excellent). A discrepancy score was calculated for each characteristic (discrepancy score = (ideal score - current score) x importance), whereby a lower discrepancy score indicated greater satisfaction. Strengths profiling was used at the beginning and end of a 10-week positive youth development programme. Experiences were captured through video diary room entries made by participants and through reflective notes taken by the facilitators. Participants were also asked to complete a pre- and post-programme questionnaire measuring perceptions of well-being, self-worth, and resilience. All of the young people who attended the strengths profiling session agreed to complete a profile, and the majority became highly engaged in the process. Strengths profiling was found to be an autonomy-supportive and empowering experience, with each participant identifying an average of 10 character strengths (M = 10.27, SD = 3.23). In total, 215 different character strengths were identified, with varying terms and definitions that differed greatly between participants and demonstrated the value of soliciting personal constructs. Using the participants’ definitions, 98% of characteristics were categorized deductively into the VIA framework. Bravery, perseverance, and hope were the character strengths that featured most, whilst temperance and courage received the highest discrepancy scores. Discrepancy scores were negatively correlated with well-being, self-worth, and resilience, and meaningful improvements were recorded following the intervention. These findings support the use of strengths profiling as a theoretically driven and novel way to engage disadvantaged youth in identifying and monitoring character strengths. When young people are given the freedom to express their own characteristics, the resulting terminologies extend beyond the language used in existing frameworks. This added freedom and control over the process of strengths identification encouraged youth to take ownership over their profiles and apply their strengths.
In addition, the ability to transform characteristics post hoc into the VIA framework means that strengths profiling can be used to explore aggregated/nomothetic hypotheses, whilst still benefiting from its idiographic roots.
Keywords: idiographic, nomothetic, positive youth development, VIA-IS, assessment, homeless youth
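The discrepancy calculation reads naturally as the gap between ideal and current competency weighted by importance, consistent with lower scores indicating greater satisfaction; a minimal sketch under that reading:

```python
def discrepancy(ideal: float, current: float, importance: float) -> float:
    """Discrepancy score as described above: (ideal - current) * importance.
    Lower values indicate greater satisfaction with that characteristic."""
    return (ideal - current) * importance

# One profiled characteristic: very important, ideal 10, currently rated 6
print(discrepancy(ideal=10, current=6, importance=9))  # -> 36
```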
Procedia PDF Downloads 200
172 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks
Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez
Abstract:
Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of a text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs), which have many hidden layers and are trained using new methods, have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessary strategy given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) training a seed ASR model with a DNN using a set of audios and their respective transcriptions; the DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five; a refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed, with the cross-entropy criterion as the objective function; (b) decoding/testing a set of unlabeled data with the obtained seed model; (c) selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system was measured by means of the Word Error Rate (WER). The test dataset was renewed in order to extract the new transcriptions added to the training dataset. Some experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result obtained an improvement of 6% relative WER. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning
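Since WER is the evaluation metric throughout, a minimal self-contained implementation is sketched below: word-level edit distance (substitutions, insertions, deletions) normalized by the reference length. The example sentences are illustrative, not drawn from the study's corpus.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("el gato duerme", "el gato que duerme"))  # one insertion -> 0.333...
```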
Procedia PDF Downloads 339
171 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection
Authors: S. Delgado, C. Cerrada, R. S. Gómez
Abstract:
This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur costly memory operations that carry no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line-based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
Keywords: voxelization, GPU acceleration, computer graphics, compute shaders
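A simplified CPU sketch of the equidistant scan-line idea on a single triangle: parallel lines (here parallel to one edge) are sampled at sub-voxel spacing, and each sample marks a voxel. This is an illustrative reconstruction, not the paper's GLSL compute shader; with coarser spacing, the missed voxels are exactly the gaps that the Gap Detection stage is designed to find.

```python
import numpy as np

def voxelize_triangle(v0, v1, v2, voxel=0.05):
    """Collect the voxels hit by equidistant scan-lines across a triangle.
    Lines at parameter t run between the edges v0->v1 and v0->v2; they are
    parallel to edge v1-v2 and evenly spaced, sweeping the whole surface."""
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    step = voxel / 2.0  # conservative sub-voxel spacing to avoid missed voxels
    height = max(np.linalg.norm(v1 - v0), np.linalg.norm(v2 - v0))
    voxels = set()
    for t in np.linspace(0.0, 1.0, max(2, int(height / step) + 1)):
        a = v0 + t * (v1 - v0)   # scan-line endpoints on the two edges
        b = v0 + t * (v2 - v0)
        n = max(2, int(np.linalg.norm(b - a) / step) + 1)
        for s in np.linspace(0.0, 1.0, n):
            p = a + s * (b - a)
            voxels.add(tuple((p // voxel).astype(int)))  # voxel grid index
    return voxels

tri = ([0, 0, 0], [1, 0, 0], [0, 1, 0.3])
print(len(voxelize_triangle(*tri)))
```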
Procedia PDF Downloads 72
170 Texture Characteristics and Depositional Environment of the Lower Mahi River Sediment, Mainland Gujarat, India
Authors: Shazi Farooqui, Anupam Sharma
Abstract:
The Mahi River (~600 km long) is an important west-flowing river of Central India. It originates in Madhya Pradesh, initially flows in a NW direction, and enters the state of Rajasthan. It flows across southern Rajasthan, then enters Gujarat and finally debouches into the Gulf of Cambay. In Gujarat state, it flows through all four geomorphic zones, i.e., the eastern upland zone, the shallow buried piedmont zone, the alluvial zone and the coastal zone. In its lower reaches, and particularly where it flows under the coastal regime, it provides an opportunity to study: 1. land-sea interaction and the role of relative sea level changes; 2. coastal/estuarine geological processes; 3. landscape evolution in marginal areas; and so on. The Late Quaternary deposits of Mainland Gujarat have been appreciably studied by Chamyal and his group at MS University of Baroda, and they have established that the 30-35 m thick sediment package of Mainland Gujarat comprises marine, fluvial and aeolian sediments. It is also established that in the estuarine zone, the upper few-meter-thick sediment package is of marine nature. However, its thickness and character, and the depositional environment, including the role of climate and tectonics, are still not clearly defined. To understand some of these aspects, in the present study a 17 m subsurface sediment core has been retrieved from the estuarine zone of the Mahi River basin. Multiproxy studies have been carried out, including textural analysis (grain size), loss on ignition (LOI), bulk and clay mineralogy, and geochemical studies. In the entire sedimentary sequence, the grain size largely varies from coarse sand to clay; however, a solitary gravel bed is also noticed. The lower part (depth 9-17 m) is mainly comprised of subequal proportions of sand and silt. The sediments mainly have a bimodal and leptokurtic distribution and were deposited in alternating sand-silt packages, probably indicating flood deposits. Relatively low moisture (1.8%) and organic carbon (2.4%) with increased carbonate values (12%) indicate that conditions must have remained oxidizing. The middle part (depth 6-9 m) has a 1 m thick gravel bed at the bottom, overlain by coarse to very fine sand showing a fining-upward sequence. The presence of the gravel bed suggests some kind of tectonic activity resulting in a change in base level, or enhanced precipitation in the catchment region. The upper part (depth 0-6 m; the top of the sequence) is mainly comprised of fine sand to silt-sized grains (with appreciable clay content). The sediment of this part is unimodal and very leptokurtic, suggesting wave and winnowing processes, and was deposited in a low-energy suspension environment. This part has relatively high moisture (2.1%) and organic carbon (2.7%) with decreased carbonate content (4.2%), indicating a change in the depositional environment, probably under estuarine conditions. The presence of chlorite along with the smectite clay mineral further supports a significant marine contribution to the formation of the upper part of the sequence.
Keywords: grain size, statistical analysis, clay minerals, late quaternary, LOI
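Descriptors such as "bimodal" and "leptokurtic" typically come from graphic grain-size statistics; assuming the Folk and Ward (1957) graphic measures, a minimal sketch from percentile phi values follows. The sample percentiles below are hypothetical, not the core's data.

```python
def folk_ward(phi):
    """Folk & Ward (1957) graphic grain-size measures from percentile phi values.
    `phi` maps percentile (5, 16, 25, 50, 75, 84, 95) -> grain size in phi units."""
    p = phi
    mean = (p[16] + p[50] + p[84]) / 3
    sorting = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6
    skewness = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
                + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
    kurtosis = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
    return mean, sorting, skewness, kurtosis

# Hypothetical percentiles for a silty fine-sand sample
sample = {5: 1.8, 16: 2.4, 25: 2.7, 50: 3.3, 75: 4.1, 84: 4.6, 95: 5.9}
mz, so, sk, kg = folk_ward(sample)
print(f"mean={mz:.2f} phi, sorting={so:.2f}, skewness={sk:.2f}, kurtosis={kg:.2f}")
# A graphic kurtosis above ~1.11 is conventionally described as leptokurtic,
# matching the distributions reported in the abstract.
```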
Procedia PDF Downloads 181