Search results for: model tree
1019 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction
Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun
Abstract:
Product usability has become a basic requirement from the consumer's perspective, and a product that fails this requirement ends up unused. Identifying usability issues by analyzing the quantitative and qualitative data collected from usability testing and evaluation activities aids the product design process; yet the lack of studies on analysis methodologies for qualitative text data in the usability field inhibits the potential of these data for more useful applications. At the same time, the rapid development of data analysis fields such as natural language processing, for understanding human language computationally, and machine learning, for predictive modeling and clustering tools, makes analyzing such qualitative text data possible. This research therefore studies the applicability of text processing algorithms to the analysis of qualitative text data collected from usability activities. It utilizes datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text-processing algorithm, includes training comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the resulting comment-vector clusters. The results show 'volume and music control button' as the usability feature that matches best with the clusters of comment vectors: the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, participants experienced less confusion, and their comments mentioned only the buttons' positions.
When the volume and music control buttons were instead designed as a single button, participants experienced interface issues with the buttons, such as the operating methods of functions and confusion between the functions' buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms in analyzing qualitative text data from usability testing and evaluations.
Keywords: usability, qualitative data, text-processing algorithm, natural language processing
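As a minimal sketch of the pipeline just described — embedding comments in a vector space, clustering them, and reading off each cluster's centroid comment — the following snippet uses TF-IDF vectors and k-means on invented headset comments; the paper's actual embedding, data, and cluster count may differ.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

# hypothetical survey comments (not the paper's data)
comments = [
    "volume button is hard to reach behind the neckband",
    "music control button position feels awkward on the left side",
    "confusing which button changes the volume and which skips tracks",
    "single combined button for volume and music is confusing to operate",
]

X = TfidfVectorizer().fit_transform(comments).toarray()  # comments -> vector space
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# 'centroid comment' = the comment closest to each cluster centroid
closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
for k, idx in enumerate(closest):
    print(f"cluster {k} centroid comment: {comments[idx]}")
```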
Procedia PDF Downloads 285
1018 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection
Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad
Abstract:
The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, the increasing integration of intermittent renewable energy sources brings a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution is electrical segmentation: creating coherence zones within which electrical disturbances mainly remain. By means of coherent electrical zones, it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can serve various applications, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph.
The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal is a model that uses a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. Experiments show when robust electrical segmentation is beneficial and in which contexts.
Keywords: community detection, electrical segmentation, multiplex graph, power grid
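A minimal sketch of the flatten-then-cluster idea described above, on a hypothetical five-bus grid: per-layer edge weights are summed into a single flattened graph, and modularity-based community detection yields candidate zones. The paper's unified, penalized representation is richer than this simple sum, and the solver here (greedy modularity) is only a stand-in.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# each layer = one grid situation over the same set of buses (hypothetical weights)
layers = [
    [("A", "B", 1.0), ("B", "C", 1.0), ("C", "D", 0.1), ("D", "E", 1.0)],
    [("A", "B", 0.9), ("B", "C", 1.1), ("C", "D", 0.2), ("D", "E", 0.8)],
]

# flatten: sum each edge's weight across layers into one graph
flat = nx.Graph()
for layer in layers:
    for u, v, w in layer:
        if flat.has_edge(u, v):
            flat[u][v]["weight"] += w
        else:
            flat.add_edge(u, v, weight=w)

# modularity-based community detection on the flattened graph
zones = greedy_modularity_communities(flat, weight="weight")
print([sorted(z) for z in zones])
```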
Procedia PDF Downloads 79
1017 3D GIS Participatory Mapping and Conflict LADM: Comparative Analysis of Land Policies and Survey Procedures Applied by the Igorots, NCIP, and DENR to Itogon Ancestral Domain Boundaries
Authors: Deniz A. Apostol, Denyl A. Apostol, Oliver T. Macapinlac, George S. Katigbak
Abstract:
Ang lupa ay buhay at ang buhay ay lupa (land is life and life is land). Based on the 2015 census, the Indigenous Peoples (IPs) population in the Philippines is estimated to be 11.3-20.2 million. They hail from various regions and possess distinct cultures, but encounter shared struggles in territorial disputes. Itogon, the largest Benguet municipality, is home to the Ibaloi, Kankanaey, and other Igorot tribes. Despite having three Ancestral Domains (ADs), Itogon is predominantly labeled as timberland or forest. These overlapping land classifications highlight inconsistencies in national laws and jurisdictions. This study aims to analyze the surveying procedures used by the Igorots, NCIP, and DENR in mapping the Itogon AD boundaries, show land boundary delineation conflicts, propose surveying guidelines, and recommend 3D Participatory Mapping as a geomatics solution for updated AD reference maps. Interpretative Phenomenological Analysis (IPA), Comparative Legal Analysis (CLA), and Map Overlay Analysis (MOA) were utilized to examine the interviews, compare land policies and surveying procedures, and identify differences and overlaps in conflicting land boundaries. In the IPA, the master themes identified were AD Definition (rights, responsibilities, restrictions), AD Overlaps (land classifications, political boundaries, ancestral domains, land laws/policies), and Other Conflicts (with other agencies, misinterpretations, suggestions), as considerations for mapping ADs. The CLA focused on conflicting surveying procedures: AD definitions, surveying equipment, surveying methods, map projections, order of accuracy, monuments, survey parties, and pre-survey, survey proper, and post-survey procedures. The MOA emphasized the land area percentage of conflicting areas, showcasing the impact of misaligned surveying procedures. The findings are summarized through a Land Administration Domain Model (LADM) Conflict, for AD versus AD and political boundaries.
The products of this study are the identification of land conflict factors, survey guideline recommendations, and contested land area computations. These can serve as references for revising survey manuals, updating AD Sustainable Development and Protection Plans, and amending laws.
Keywords: ancestral domain, gis, indigenous people, land policies, participatory mapping, surveying, survey procedures
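The MOA step — computing what percentage of a surveyed boundary is contested by an overlapping classification — reduces to polygon overlay arithmetic. The toy sketch below uses axis-aligned rectangles with invented coordinates so the computation stays visible; real AD boundaries are complex polygons handled in a GIS.

```python
def rect_area(r):
    (x0, y0), (x1, y1) = r
    return (x1 - x0) * (y1 - y0)

def overlap_area(a, b):
    (ax0, ay0), (ax1, ay1) = a
    (bx0, by0), (bx1, by1) = b
    w = min(ax1, bx1) - max(ax0, bx0)   # overlap width (clamped at zero below)
    h = min(ay1, by1) - max(ay0, by0)   # overlap height
    return max(w, 0) * max(h, 0)

# invented extents: an AD survey versus a timberland classification
ad_claim = ((0, 0), (10, 10))
timberland = ((6, 0), (14, 10))

pct = 100 * overlap_area(ad_claim, timberland) / rect_area(ad_claim)
print(f"{pct:.0f}% of the AD claim overlaps the timberland classification")
```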
Procedia PDF Downloads 93
1016 Analysis of the Strategic Value at the Usage of Green IT Application for the Organizational Product or Service in Order to Gain the Competitive Advantage; Case: E-Money of a Telecommunication Firm in Indonesia
Authors: I Putu Deny Arthawan Sugih Prabowo, Eko Nugroho, Rudy Hartanto
Abstract:
Green IT is a concept of using technology (IT) wisely, efficiently, and in an environmentally sound way; it exists as a consequence of the current rapid growth of technology, especially IT. Beyond its environmental benefits, the usage of Green IT applications, e.g., Cloud Computing (Cloud Storage) and E-Money (E-Cash), also benefits the organizational business strategy (especially the organizational product/service strategy) in gaining competitive advantage (becoming the market leader). This paper takes as its case E-Money as a Value-Added Service (VAS) of a telecommunication firm in Indonesia, which competes with competitors' similar products (services). Although it has been a popular telecommunication firm's product/service, its strategic values for the organization (firm) are still unknown; therefore, the aim of this paper is to analyze its strategic values for gaining organizational competitive advantage. In this paper, the strategic value analysis assesses the strategic benefits and the management of the challenges or risks of its implementation at the organization as an organizational product/service. The paper uses a research model to investigate the influences of both perceived risks and organizational culture on the usage of the Green IT application at the organization, and the influences of both that usage and the threats and challenges of the organizational products/services on the competitive advantage of those products/services. The paper uses the quantitative research method (collecting information from field respondents using research questionnaires), and the primary data are analyzed by both descriptive and inferential statistics. SmartPLS is used for analyzing the primary data in the quantitative research method.
Besides the quantitative research method, the paper also uses qualitative research methods, such as interviewing field respondents and/or direct field observation, to confirm in depth the quantitative analysis results in certain domains, e.g., the organizational culture and internal processes that support the usage of Green IT applications for the organizational product/service (E-Money in this case). The paper is still at an early stage of in-progress research. Its results may serve as a reference for the organization (firm or company) in developing organizational business strategies, especially for products/services that relate to Green IT applications. It may also motivate future studies, e.g., on the influence of knowledge transfer about E-Money and/or other Green IT application-based products/services on the organizational service performance related to the product (service) in order to gain competitive advantage.
Keywords: Green IT, competitive advantage, strategic value, organization (firm or company), organizational product (service)
Procedia PDF Downloads 305
1015 Cluster-Based Exploration of System Readiness Levels: Mathematical Properties of Interfaces
Authors: Justin Fu, Thomas Mazzuchi, Shahram Sarkani
Abstract:
A key factor in technological immaturity in defense weapons acquisition is a lack of understanding of critical integrations at the subsystem and component level. To address this shortfall, recent research combines integration readiness level (IRL) with technology readiness level (TRL) to form a system readiness level (SRL). SRL can be enriched with more robust quantitative methods to provide the program manager a useful tool prior to committing to major weapons acquisition programs. This research harnesses previous mathematical models based on graph theory, Petri nets, and tropical algebra, and proposes a modification of the desirable SRL mathematical properties such that a tightly integrated (multitude of interfaces) subsystem can display a lower SRL than an inherently less coupled subsystem. The synthesis of these methods informs an improved decision tool for the program manager committing to expensive technology development. This research ties the separately developed manufacturing readiness level (MRL) into the network representation of the system and addresses shortfalls in previous frameworks, including the lack of integration weighting and the over-importance of a single extremely immature component. Tropical algebra (based on the minimum of a set of TRLs or IRLs) allows one low IRL or TRL value to diminish the SRL of the entire system, which may not reflect actuality if that component is not critical or tightly coupled. Integration connections can be weighted according to importance, and readiness levels are modified to a cardinal scale (based on an analytic hierarchy process). An integration arc's importance depends on the connected nodes and the additional integration arcs connected to those nodes. Lack of integration is not represented by zero, but by a perfect integration maturity value; naturally, the importance (or weight) of such an arc is zero.
To further explore the impact of grouping subsystems, a multi-objective genetic algorithm is then used to find various clusters or communities that can be optimized for the most representative subsystem SRL. This novel calculation is then benchmarked through simulation and against past defense acquisition program data, focusing on the newly introduced Middle Tier of Acquisition (rapidly fielded prototypes). The model remains a relatively simple, accessible tool, but at higher fidelity and validated with past data, for the program manager deciding major defense acquisition program milestones.
Keywords: readiness, maturity, system, integration
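The weighted, min-based aggregation described above can be illustrated as follows; the readiness values, the weights, and the exact adjustment formula are illustrative assumptions, not the paper's calibrated model.

```python
def weighted_tropical_srl(arcs):
    """arcs: (readiness, importance weight) pairs, readiness scaled to [0, 1]."""
    # a weight of 0 encodes 'no integration': the arc is treated as perfectly
    # mature, so it cannot drag the subsystem score down
    adjusted = [1.0 - w * (1.0 - r) for r, w in arcs]
    return min(adjusted)  # tropical (min-based) aggregation

# invented subsystems: same immature interface (0.4), different importance
tightly_coupled = [(0.9, 1.0), (0.4, 1.0)]   # immature arc is critical
loosely_coupled = [(0.9, 1.0), (0.4, 0.1)]   # immature arc barely matters

print(weighted_tropical_srl(tightly_coupled))  # 0.4: dominated by the weak arc
print(weighted_tropical_srl(loosely_coupled))  # 0.9: the weak arc is discounted
```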
Procedia PDF Downloads 92
1014 Structural Health Assessment of a Masonry Bridge Using Wireless
Authors: Nalluri Lakshmi Ramu, C. Venkat Nihit, Narayana Kumar, Dillep
Abstract:
Masonry bridges are iconic heritage transportation infrastructure throughout the world. Continuous increases in traffic loads and speeds have kept engineers in a dilemma about their structural performance and capacity. Hence, the research community urgently needs to propose an effective methodology and validate it on real bridges. The presented research aims to assess the structural health of an eighty-year-old masonry railway bridge in India using wireless accelerometer sensors. The bridge consists of 44 spans of 24.2 m each, with individual piers 13 m tall laid on well foundations. To calculate the dynamic characteristic properties of the bridge, ambient vibrations were recorded from the moving traffic at various speeds and compared with a three-dimensional numerical model developed in finite element-based software. Conclusions about the weaker or deteriorated piers are drawn from the comparison of frequencies obtained from the experimental tests conducted on alternate spans. Masonry is a heterogeneous, anisotropic material made up of incoherent materials (such as bricks, stones, and blocks). It is most likely the earliest widely used construction material. Masonry bridges, typically constructed of brick and stone, are still a key feature of the world's highway and railway networks. There are 147,523 railway bridges across India, and about 15% of them are built of masonry and are around 80 to 100 years old. The cultural significance of masonry bridges cannot be overstated. These bridges are considered complicated structures due to the presence of arches, spandrel walls, piers, foundations, and soils.
Due to traffic loads and vibrations, wind, rain, frost attack, high/low temperature cycles, moisture, earthquakes, river overflows, floods, scour, and the soil under their foundations, these bridges may suffer material deterioration, opening of joints and ring separation in arch barrels, cracks in piers, loss of brick-stones and mortar joints, and distortion of the arch profile. A few NDT tests, such as flat jack tests, are employed to assess the homogeneity and durability of masonry structures; however, these tests have many drawbacks. A modern approach to structural health assessment of masonry structures through vibration analysis, frequencies, and stiffness properties is explored in this paper.
Keywords: masonry bridges, condition assessment, wireless sensors, numerical analysis, modal frequencies
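A minimal sketch of the frequency-identification step: estimate a span's dominant modal frequency from an ambient-vibration record via an FFT peak, so that frequencies can be compared across alternate spans. The record below is synthetic; real data call for windowing, averaging, and proper operational modal analysis methods.

```python
import numpy as np

fs = 200.0                                   # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
# synthetic ambient record: a 3.5 Hz mode buried in noise
accel = np.sin(2 * np.pi * 3.5 * t) + 0.5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = freqs[spectrum[1:].argmax() + 1]  # skip the DC bin
print(f"estimated modal frequency: {dominant:.2f} Hz")
```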
Procedia PDF Downloads 169
1013 Enhancement of Mass Transport and Separations of Species in an Electroosmotic Flow by Distinct Oscillatory Signals
Authors: Carlos Teodoro, Oscar Bautista
Abstract:
In this work, we analyze theoretically the mass transport in a time-periodic electroosmotic flow through a parallel flat plate microchannel under different periodic functions of the applied external electric field. The microchannel connects two reservoirs having different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls is assumed to be uniform. The governing equations that determine the mass transport in the microchannel are the Poisson-Boltzmann equation, the modified Navier-Stokes equations, where the Debye-Hückel approximation is considered (the zeta potential is less than 25 mV), and the species conservation equation. These equations are nondimensionalized, and four dimensionless parameters that control the mass transport phenomenon appear: an angular Reynolds number, the Schmidt and Péclet numbers, and an electrokinetic parameter representing the ratio of the half-height of the microchannel to the Debye length. To solve the mathematical model, the electric potential is first determined from the Poisson-Boltzmann equation, which allows determining the electric force for various periodic functions of the external electric field expressed as Fourier series. In particular, three different excitation waveforms of the external electric field are assumed: a) sawtooth, b) step, and c) irregular periodic functions. The periodic electric forces are substituted into the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case of the electric force. From the obtained velocity fields, the species conservation equation is solved and the concentration fields are found. Numerical calculations were done by considering several binary systems where two dilute species are transported in the presence of a carrier.
It is observed that there are different angular frequencies of the imposed external electric signal where the total mass transport of each species is the same, independently of the molecular diffusion coefficient. These frequencies are called crossover frequencies and are obtained graphically at the intersection when the total mass transport is plotted against the imposed frequency. The crossover frequencies differ depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and the type of signal of the external electric field. It is demonstrated that the mass transport through the microchannel is strongly dependent on the modulation frequency of the applied alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation
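The graphical crossover-frequency determination can equally be done numerically: given total-mass-transport curves for two species versus the imposed angular frequency, locate the sign change of their difference. The transport curves below are invented stand-ins, not the paper's solutions.

```python
import numpy as np

omega = np.linspace(0.1, 50, 2000)          # imposed angular frequency
transport_a = 1.0 / (1.0 + 0.05 * omega)    # hypothetical species A transport
transport_b = 0.8 / (1.0 + 0.01 * omega)    # hypothetical species B transport

diff = transport_a - transport_b
idx = np.flatnonzero(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
# linear interpolation across the sign change
w0, w1, d0, d1 = omega[idx], omega[idx + 1], diff[idx], diff[idx + 1]
crossover = w0 - d0 * (w1 - w0) / (d1 - d0)
print(f"crossover frequency: {crossover:.3f}")
```

For these stand-in curves the crossover can be checked analytically (it sits at omega = 20/3), which is a convenient sanity test for the interpolation.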
Procedia PDF Downloads 216
1012 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale
Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize
Abstract:
Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and identify the significant spectral bands for soil moisture content retrieval at field scale. The study used 60 soil samples from a maize farm, divided into four treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Devices (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. A continuum-removal procedure was implemented in the R programming language to enhance the absorption features of soil moisture and to precisely understand its spectral behavior at different water content levels. Statistical analysis using partial least squares regression (PLSR) models was performed to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the significant spectral bands for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied to fields such as precision agriculture, hydrology, and environmental monitoring. However, the spectral behavior of soil can be influenced by factors such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems.
The results of this study showed good agreement between measured and predicted values of soil moisture content, with high R2 and low root mean square error (RMSE) values. Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution, precise field-scale soil moisture retrieval models. Such models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy
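A sketch of the continuum-removal step (the study implemented it in R; the version below is Python/NumPy, and the spectrum is synthetic): fit the upper convex hull over the reflectance spectrum and divide by it, which deepens absorption features such as those near 1450, 1900, and 2200 nm.

```python
import numpy as np

def continuum_removed(wl, refl):
    # upper convex hull of the spectrum via the monotone-chain method
    hull = [0]
    for i in range(1, len(wl)):
        while len(hull) >= 2:
            x1, y1 = wl[hull[-2]], refl[hull[-2]]
            x2, y2 = wl[hull[-1]], refl[hull[-1]]
            # pop the middle point if it falls below the chord to point i
            if (x2 - x1) * (refl[i] - y1) - (y2 - y1) * (wl[i] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    continuum = np.interp(wl, wl[hull], refl[hull])  # hull line over the spectrum
    return refl / continuum

# synthetic spectrum: rising baseline with a water absorption trough near 1900 nm
wl = np.linspace(350, 2500, 500)
refl = 0.4 + 0.0001 * (wl - 350) - 0.15 * np.exp(-(((wl - 1900) / 60) ** 2))
cr = continuum_removed(wl, refl)
print(round(cr.min(), 3))  # the trough is deepened well below 1
```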
Procedia PDF Downloads 99
1011 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission under conditions of large data volume, low SNR, and restricted bandwidth. With the development of deep learning, semantic communication has matured further and is gradually being applied in fields such as the Internet of Things, Unmanned Aerial Vehicle cluster communication, and remote sensing scenarios. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitting end, we need to extract the semantic information of remote sensing images, but there are some problems. A traditional semantic communication system based on a Convolutional Neural Network (CNN) cannot take into account both the global and local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first perform pre-processing operations on remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic coder to extract and transmit the semantic information of remote sensing images.
The Vision-Transformer structure can better train on the huge data volume and extract better image semantic features, and it adopts a multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear, so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and with image coding methods such as BPG and JPEG, to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
Keywords: semantic communication, transformer, wavelet transform, data processing
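The wavelet pre-processing stage can be sketched as below, assuming a one-level Haar transform (the abstract does not fix the wavelet family) and SciPy's spline interpolation as stand-ins: decompose, upsample the low-frequency band bicubically and the high-frequency bands bilinearly, then invert.

```python
import numpy as np
from scipy.ndimage import zoom

def haar2d(img):
    """One-level 2D Haar DWT: approximation plus three detail sub-bands."""
    p, q = img[0::2, 0::2], img[0::2, 1::2]
    r, s = img[1::2, 0::2], img[1::2, 1::2]
    return (p + q + r + s) / 4, (p - q + r - s) / 4, \
           (p + q - r - s) / 4, (p - q - r + s) / 4

def ihaar2d(a, h, v, d):
    """Inverse one-level 2D Haar DWT (exact reconstruction)."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

img = np.random.default_rng(0).random((64, 64))        # stand-in image tile
a, h, v, d = haar2d(img)
a2 = zoom(a, 2, order=3)                               # bicubic on low frequencies
h2, v2, d2 = (zoom(x, 2, order=1) for x in (h, v, d))  # bilinear on high frequencies
upscaled = ihaar2d(a2, h2, v2, d2)
print(img.shape, "->", upscaled.shape)
```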
Procedia PDF Downloads 78
1010 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation, and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation, and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately', or with allowances for some variability, rather than 'exactly'. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit.
A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and can yield markedly different results. Ultimately, sound engineering judgment and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability
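An illustrative Monte-Carlo PF calculation of the kind discussed above (not the case-study mine model): sample the uncertain strength parameters, compute a factor of safety per draw with an infinite-slope Mohr-Coulomb expression, and take PF as the fraction of draws with FS below 1. The distributions and slope geometry are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
gamma, depth = 26.0, 30.0                     # unit weight (kN/m3), depth (m)
beta = np.radians(55.0)                       # slope angle
cohesion = rng.normal(250.0, 40.0, n)         # kPa, probabilistic input
phi = np.radians(rng.normal(35.0, 4.0, n))    # friction angle, probabilistic

sigma_n = gamma * depth * np.cos(beta) ** 2            # normal stress on the plane
tau = gamma * depth * np.sin(beta) * np.cos(beta)      # driving shear stress
fs = (cohesion + sigma_n * np.tan(phi)) / tau          # Mohr-Coulomb FS per draw

pf = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.2f}, PF = {100 * pf:.1f}%")
```

Note how a design can show a mean FS above 1 while still carrying a non-trivial PF, which is exactly why the two measures appear side by side in acceptance criteria.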
Procedia PDF Downloads 208
1009 Implementing a Structured, yet Flexible Tool for Critical Information Handover
Authors: Racheli Magnezi, Inbal Gazit, Michal Rassin, Joseph Barr, Orna Tal
Abstract:
An effective process for transmitting patient critical information is essential for patient safety and for improving communication among healthcare staff. Previous studies have discussed handover tools such as SBAR (Situation, Background, Assessment, Recommendation) or SOFI (Short Observational Framework for Inspection). Yet, these formats lack flexibility, and require special training. In addition, nurses and physicians have different procedures for handing over information. The objectives of this study were to establish a universal, structured tool for handover, for both physicians and nurses, based on parameters that were defined as ‘important’ and ‘appropriate’ by the medical team, and to implement this tool in various hospital departments, with flexibility for each ward. A questionnaire, based on established procedures and on the literature, was developed to assess attitudes towards the most important information for effective handover between shifts (Cronbach's alpha 0.78). It was distributed to 150 senior physicians and nurses in 62 departments. Among senior medical staff, 12 physicians and 66 nurses responded to the questionnaire (52% response rate). Based on the responses, a handover form suitable for all hospital departments was designed and implemented. Important information for all staff included: Patient demographics (full name and age); Health information (diagnosis or patient complaint, changes in hemodynamic status, new medical treatment or equipment required); and Social Information (suspicion of violence, mental or behavioral changes, and guardianship). Additional information relevant to each unit included treatment provided, laboratory or imaging required, and change in scheduled surgery in surgical departments. ICU required information on background illnesses, Pediatrics required information on diet and food provided and Obstetrics required the number of days after cesarean section. 
Based on the model described, a flexible tool was developed that enables handover of both common and unique information. In addition, it includes general logistic information that must be transmitted to the next shift, such as planned disruptions in service or operations, staff training, etc. Development of a simple, clear, comprehensive, universal, yet flexible tool designed for all medical staff for transmitting critical information between shifts was challenging. Physicians and nurses found it useful, and it was widely implemented. Ongoing research is needed to examine the efficiency of this tool, and whether the enthusiasm that accompanied its initial use is maintained.
Keywords: handover, nurses, hospital, critical information
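One possible encoding of such a structured-yet-flexible form, as a sketch only: a fixed core of the universally important fields plus a free-form mapping for unit-specific items. The field names are illustrative, drawn from the categories listed above, and are not the study's actual form.

```python
from dataclasses import dataclass, field

@dataclass
class HandoverRecord:
    # universal items rated important by both physicians and nurses
    full_name: str
    age: int
    diagnosis_or_complaint: str
    hemodynamic_changes: str = ""
    new_treatment_or_equipment: str = ""
    social_flags: list = field(default_factory=list)   # e.g. suspected violence
    # per-ward extensions, e.g. {"days_post_cesarean": 2} in obstetrics
    unit_specific: dict = field(default_factory=dict)

rec = HandoverRecord("Jane Doe", 34, "post-operative observation",
                     unit_specific={"days_post_cesarean": 2})
print(rec.full_name, rec.unit_specific)
```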
Procedia PDF Downloads 248
1008 Urban Green Transitioning in the Face of Current Global Change: The Management Role of the Local Government and Residents
Authors: Titilope F. Onaolapo, Christiana A. Breed, Maya Pasgaard, Kristine E. Jensen, Peta Brom
Abstract:
In the face of fast-growing urbanization in most of the world's developing countries, there is a need to understand and address the risks and consequences involved in the indiscriminate use of urban green space. Tshwane city in South Africa has the potential to become one of the world's top biodiversity cities, as South Africa is ranked among the mega-diverse countries in biodiversity conservation, and the Tshwane metropolitan municipality is the city with the richest biodiversity, with grassland biomes. In this study, we focus on the potentials and challenges of urban green transitioning from the Global South perspective, with Tshwane city as the case study. We also address the issue of management conflicts that have resulted in informal and illegal activities in and around green spaces, with consequences such as land degradation, loss of livelihoods and biodiversity, and socio-ecological imbalances. A desk study review of eight policy frameworks related to green urban planning and development was done based on four green infrastructure (GI) principles: multifunctionality, connectivity, interdisciplinarity, and social inclusion. We interviewed 15 key informants in related departments in the city and administered 200 survey questionnaires among residents. We also held several workshops with other researchers and experts on biodiversity and ecosystems. We found that there is no specific document dedicated to green space management, and where green infrastructure was mentioned, it was treated as an approach to climate mitigation and adaptation. Also, residents perceive green and open spaces as extra land that could be developed at will. We demonstrated the use of collaborative learning approaches in ecological and development research and tied the research to existing frameworks, programs, and strategies. Based on this understanding, we outlined the need to incorporate principles of green infrastructure in policy frameworks on spatial planning and environmental development.
Furthermore, we developed a model for the co-management of green infrastructure by stakeholders such as residents, developers, policymakers, and decision-makers to maximize benefits. Our collaborative, interdisciplinary project pursues the multifunctionality of SDGs 11 and 15 by simultaneously addressing Sustainable Cities and Communities, Climate Action, Life on Land, and Strong Institutions, and by working to halt and reverse land degradation and biodiversity loss.
Keywords: governance, green infrastructure, South Africa, sustainable development, urban planning, Tshwane
Procedia PDF Downloads 122
1007 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study Utilizing Supervised Machine Learning
Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz
Abstract:
Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, there is still an incomplete understanding of its underlying pathophysiology. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS to create a total sample size of 18,000. We selected the codes for comorbidities of these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method. We applied the different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression, and XGBoost algorithms yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular disease), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety). 
This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms in predicting the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics
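The pipeline described in this abstract (rank comorbidity codes by boosted-tree feature importance, keep the top 10%, then refit the candidate classifiers) can be sketched as follows. This is a minimal illustration on synthetic data, using scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost; the dataset, feature counts, and model settings are assumptions, not the study's actual UK BioBank data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the binary comorbidity matrix (patients x codes)
X, y = make_classification(n_samples=2000, n_features=500, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Rank features by boosted-tree importance and keep the top 10% (50 of 500)
ranker = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
top = np.argsort(ranker.feature_importances_)[::-1][:50]

# Refit candidate models on the reduced feature set and compare accuracies
accs = {}
for name, model in [("gradient_boosting", GradientBoostingClassifier(random_state=0)),
                    ("logistic_regression", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr[:, top], y_tr)
    accs[name] = accuracy_score(y_te, model.predict(X_te[:, top]))
```

In the study itself, the selected features are the comorbidity codes themselves, so the importance ranking doubles as the list of conditions most associated with IBS.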
Procedia PDF Downloads 119
1006 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The presented work concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, which together are called the "hot part." The second is the gas exhaust system, which contains elements intended exclusively for reducing exhaust noise (mufflers, resonators); its accepted designation is the "cold part." Designing the exhaust system from the acoustic point of view, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but only on the condition that the input parameters are accurately specified, namely, the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of an engine with a "hot part." Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) do not allow the calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part" based on a set of computational and experimental studies. The presented methodology includes several parts. The first is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic radiation impedance of the outlet pipe into open space), with the result being the input impedance of the "cold part". The second is a finite element simulation of the "hot part" of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result being the input impedance of the "hot part". 
The third part of the technique consists of mathematical processing of the results according to the proposed formula for the convergence of the series obtained by summing the multiple reflections of the acoustic signal between the "cold part" and the "hot part". This is followed by a set of tests on an engine stand, with two high-temperature pressure sensors measuring pulsations in the nozzle between the "hot part" and "cold part" of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of mathematical processing of all calculated and experimental data to obtain a result in the form of the amplitude spectrum of the engine noise and its acoustic impedance.
Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
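The separation of "incident" and "reflected" waves from two pressure sensors is commonly done with a two-sensor (transfer-function style) decomposition; a minimal sketch on a synthetic plane-wave field is shown below. The sensor spacing, test frequency, and the normal-incidence impedance formula Z = rho*c*(A+B)/(A-B) at x = 0 are illustrative assumptions; the real test stand involves temperature and mean-flow corrections that are ignored here.

```python
import numpy as np

def decompose(p1, p2, x1, x2, k):
    """Solve P(x) = A e^{-jkx} + B e^{+jkx} for incident A and reflected B
    from complex pressures p1, p2 measured at positions x1, x2."""
    M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
                  [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
    A, B = np.linalg.solve(M, np.array([p1, p2]))
    return A, B

# Synthetic check: build a field from known amplitudes and recover them
k = 2 * np.pi * 200 / 343.0            # wavenumber at 200 Hz, c = 343 m/s
A_true, B_true = 1.0 + 0j, 0.4 * np.exp(1j * 0.3)
x1, x2 = 0.00, 0.05                    # sensor positions (m)
p = lambda x: A_true * np.exp(-1j * k * x) + B_true * np.exp(1j * k * x)
A, B = decompose(p(x1), p(x2), x1, x2, k)

# Normal-incidence impedance at x = 0 from the recovered amplitudes
Z = 1.2 * 343.0 * (A + B) / (A - B)    # rho = 1.2 kg/m^3 assumed
```

In practice the decomposition is applied per frequency bin of the measured pulsation spectra rather than to a single tone.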
Procedia PDF Downloads 59
1005 Defining and Measuring the Success of the Hospitality-Based Social Enterprise Ringelblum Café
Authors: Nitzan Winograd, Nada Kakabadse
Abstract:
This study examines whether the hospitality-based social enterprise Ringelblum Café is achieving its stated social goals of developing a sense of self-efficacy among the at-risk youth who work in this enterprise and of raising levels of recruitment to the Israel Defence Forces (IDF) and National Service (NS) among these young adults. Ringelblum Café was founded in 2009 in Be'er-Sheva in order to provide employment solutions for at-risk youth in the southern district of Israel. Each year, 10 at-risk young adults aged 16–18 are referred to the programme by various welfare agencies. The training programme is approximately a year in duration and includes professional training in the art of cooking; each young adult is also supported by a social worker. This study is based on the participation of 31 youths who graduated from the Ringelblum Café training programme. A convenience sampling model was used with the assistance of the programme's social worker. The study is quantitative in its approach. Data were collected by means of three separate self-reported questionnaires: a personal information questionnaire collecting general demographic data; a self-efficacy questionnaire consisting of two parts, general self-efficacy and social self-efficacy; and an IDF/NS recruitment questionnaire. The study uses the theory of change in order to determine whether at-risk youth in the Ringelblum Café programme are taught a profession with future prospects, as well as whether they develop a sense of self-efficacy and improve their chances of recruitment into the IDF/NS. The study found that the sense of self-efficacy of the graduates is relatively high. In addition, there was a significant difference in the importance these youth attached to recruitment to the IDF/NS before the beginning of the programme and after its completion, indicating that the training programme had a positive effect on motivation for recruitment to the IDF/NS. 
The study also found that the percentage of recruits to the IDF/NS among youth who graduated from the training programme was not significantly higher than the general recruitment figures in Israel. In conclusion, Ringelblum Café is making sound progress towards achieving its social goals regarding recruitment to the IDF/NS. Moreover, the sense of self-efficacy among the graduates is relatively high, and it can be assumed that the training programme has a positive effect on these young adults, although there is no clear connection between the two. This study is among the few that have been conducted in the field of hospitality-based social enterprises in Israel and can serve as a basis for further research. Moreover, the study results may help improve the perception of at-risk youth and their contribution to society and could increase awareness of the growing trend of social enterprises promoting social goals.
Keywords: at-risk youth, Israel Defence Forces (IDF), national service, recruitment, self-efficacy, social enterprise
Procedia PDF Downloads 215
1004 Nonlinear Interaction of Free Surface Sloshing of Gaussian Hump with Its Container
Authors: Mohammad R. Jalali
Abstract:
Movement of liquid with a free surface in a container is known as slosh. For instance, slosh occurs when water in a closed tank is set in motion by a free surface displacement, or when liquefied natural gas in a container is vibrated by an external driving force, such as an earthquake or the motion induced by transport. Slosh also arises from the resonant oscillation of a natural basin. During sloshing, different types of motion are produced by energy exchange between the liquid and its container. In the present study, a numerical model is developed to simulate the nonlinear even-harmonic oscillations of free surface sloshing caused by an initial disturbance to the free surface of a liquid in a closed square basin. The response of the liquid free surface is affected by the amplitude and motion frequencies of its container; therefore, sloshing involves complex fluid-structure interactions. Here, the nonlinear interaction of free surface sloshing of an initial Gaussian hump with its uneven container is predicted numerically. For this purpose, the Green-Naghdi (GN) equations are applied as the governing equations of the fluid field to capture nonlinear second-order and higher-order wave interactions. These equations reduce the dimensions from three to two, yielding equations that can be solved efficiently. The GN approach assumes a particular kinematic flow structure in the vertical direction for shallow- and deep-water problems: the fluid velocity profile is a finite sum of coefficients depending on space and time, multiplied by weighting functions. It should be noted that in GN theory the flow is rotational. In this study, GN numerical simulations of the initial Gaussian hump are compared with Fourier-series semi-analytical solutions of the linearized shallow water equations. The comparison reveals satisfactory agreement between the numerical simulation and the analytical solution for the overall free surface sloshing patterns. 
The resonant free surface motions driven by an initial Gaussian disturbance are obtained by applying the Fast Fourier Transform (FFT) to the components of the free surface elevation time history. Numerically predicted velocity vectors and magnitude contours for the free surface patterns indicate that the interaction of the Gaussian hump with its container has a localized effect. The results of this sloshing study are applicable to the design of stable liquefied oil containers in tankers and offshore platforms.
Keywords: fluid-structure interactions, free surface sloshing, Gaussian hump, Green-Naghdi equations, numerical predictions
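The FFT step described in this abstract can be sketched as follows: a probe's surface-elevation time history is transformed and the strongest spectral lines identify the resonant modes. The two mode frequencies and amplitudes below are illustrative assumptions, not values from the simulations.

```python
import numpy as np

dt, n = 0.01, 4000                     # 40 s record, 0.025 Hz resolution
t = np.arange(n) * dt
f1, f2 = 0.8, 1.9                      # assumed sloshing mode frequencies (Hz)
eta = np.cos(2*np.pi*f1*t) + 0.5*np.cos(2*np.pi*f2*t)  # elevation at a probe

spec = np.abs(np.fft.rfft(eta)) / n    # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(n, dt)
peaks = np.sort(freqs[np.argsort(spec)[-2:]])  # two strongest spectral lines
```

For the actual GN simulations the record would come from the computed free surface elevation, and windowing would normally be applied before the transform.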
Procedia PDF Downloads 398
1003 Startup Ecosystem in India: Development and Impact
Authors: Soham Chakraborty
Abstract:
This article examines the development of startup culture in India, as well as its impact on Indian society. In the present century, a vibrant synonym for "startup" is starting afresh, and startups have become the new flavor of this decade. A startup ecosystem is formed mainly by the new generation in the making and involves a variety of elements without which a startup can never prosper: ideas, inventions, innovations, authentic research in the field in which one is interested, mentors, advisors, funding bodies, service-provider organizations, angel and venture investors, and so on. The startup culture in India is quite nascent but rampant, largely due to the widespread reach of the media, through which newfangled entrepreneurs can spread their word of mouth far and wide. Different kinds of media, such as television, radio, the Internet, and print, act as weapons for any startup company in India. The article explores the sudden shift in the growing Indian economy due to the rise of the startup ecosystem. Several factors underlie the growing success of startups in India: first, entrepreneurs are building startup ideas on the basis of various international startups while giving them a pinch of Indian flavor; second, business models are framed around the current problems that people face in the modern century; third, there is a balance between social and technological entrepreneurs; and lastly, the quality of mentorship. The Government of India promotes startups as a flagship initiative; a host of benefits and assistance was announced at an event named 'Start Up India, Stand Up India' held on 16 January 2016 by the current Prime Minister of India, Mr. Narendra Modi. One of the biggest boons that the increasing number of startups is creating in society is the proliferation of self-employment. 
Noted startups thriving in India, such as OYO, Where's The Food (WTF), TVF Pitchers, and Flipkart, are examples of how the country is being covered by various innovative ventures. A deep impact will be felt by Indians within a few years, as various governmental and non-governmental policies and agendas are helping startups to sprawl and mushroom across India. The startup uprising in India is also made possible by increasing globalization, which is eroding national borders and thereby creating an environment in which to enlarge one's business model. To conclude, this article points out the correlation between rising startups in the Indian market and their increasing developmental benefits for the people at large. Internationally, various business portals are tagging India as the world's fastest growing startup ecosystem.
Keywords: business, ecosystem, entrepreneurs, media, globalization, startup
Procedia PDF Downloads 268
1002 Assessment of Environmental Risk Factors of Railway Using Integrated ANP-DEMATEL Approach in Fuzzy Conditions
Authors: Mehrdad Abkenari, Mehmet Kunt, Mahdi Nourollahi
Abstract:
Evaluating environmental risk factors is an integral part of analyzing the effects of transportation. Various definitions of risk can be found in different scientific sources, each depending on a specific perspective or dimension. The effects of the potential risks present along newly proposed routes and existing infrastructures of large transportation projects such as railways should be studied within comprehensive engineering frameworks. Despite the various definitions provided for 'risk', all share a uniform concept: two obvious aspects, loss and unreliability, are pointed out in all definitions of the term, while a third aspect, selection, is usually implied and concerns how one perceives the risk. Currently, conducting engineering studies on the environmental effects of railway projects has become obligatory under the Environmental Assessment Act in developing countries. Considering the longitudinal nature of these projects and the probable passage of railways through various ecosystems, scientific research on the environmental risk of these projects is of great interest. Although many areas of expertise, such as road construction in developing countries, have not yet seriously committed to such studies, attention to these subjects has become an inseparable part of this wave of research. The present study took the environmental risks identified in previous studies and at existing stations as inputs for the next step. The second step proposes a new hybrid approach combining the analytic network process (ANP) and DEMATEL under fuzzy conditions for the assessment of the identified risks. Since evaluating the identified risks was not an easy task, a network structure was an appropriate approach for analyzing such complex systems and was accordingly employed for problem description and modeling. 
The researchers faced a shortage of real-world data, and owing to the ambiguity of the experts' opinions and judgments, these were expressed in linguistic variables instead of numerical ones. Since fuzzy logic is well suited to ambiguity and uncertainty, formulating the experts' opinions as fuzzy numbers seemed an appropriate approach. The fuzzy DEMATEL method was used to extract the relations between major and minor risk factors. Considering the internal relations of the major risk factors and their sub-factors in the fuzzy network analysis, the weights of the main risk factors and sub-factors were determined. In general, the findings of the present study, in which effective railway environmental risk indicators were identified and rated through the first use of a combined model of DEMATEL and fuzzy network analysis, indicate that environmental risks can be evaluated more accurately and employed in railway projects.
Keywords: DEMATEL, ANP, fuzzy, risk
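After expert judgments are defuzzified, the DEMATEL core reduces to a few matrix operations: normalize the direct-influence matrix, then sum the geometric series of indirect influences in closed form to obtain the total-relation matrix. The 4x4 influence matrix below is illustrative only, not the study's railway risk data.

```python
import numpy as np

# Defuzzified direct-influence scores among four hypothetical risk factors
A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

# Normalize by the largest row/column sum so the series below converges
D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())

# Total-relation matrix: T = D + D^2 + D^3 + ... = D (I - D)^{-1}
T = D @ np.linalg.inv(np.eye(4) - D)

r, c = T.sum(axis=1), T.sum(axis=0)
prominence = r + c        # overall importance of each factor
relation = r - c          # net cause (+) or effect (-) role of each factor
```

In the fuzzy version, each entry of A is a fuzzy number (e.g. triangular) and the same algebra is applied component-wise before defuzzification.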
Procedia PDF Downloads 413
1001 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor
Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro
Abstract:
Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles, and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID velocity form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method that allows a compromise between the theoretically optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations in the Simulink platform, in which each of the system's hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection. 
In this investigation, accurate tracking of the reference signal is considered particularly important for a position control system because of the frequency and suddenness with which the reference can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft by sudden load changes can be modeled as a disturbance that must be rejected to ensure reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantages that the state-space formulation provides for modelling MIMO systems, such controllers are expected to be easier to tune for disturbance rejection, assuming that their designer is experienced. An in-depth, multi-dimensional analysis of the preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
Keywords: control, DC motor, discrete PID, discrete state feedback
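A discrete PID in position form, one of the architectures compared in this abstract, can be sketched in a few lines; the first-order plant, sample time, and gains below are illustrative assumptions rather than the paper's Simulink motor model.

```python
# Minimal sketch of a discrete PID (position form) closing the loop around an
# assumed first-order plant y' = (u - y)/tau, stepped with forward Euler.
def pid_position(e, state, kp, ki, kd, dt):
    """One controller update; state carries the integral and previous error."""
    integ, e_prev = state
    integ += e * dt                                  # rectangular integration
    u = kp * e + ki * integ + kd * (e - e_prev) / dt  # backward-difference derivative
    return u, (integ, e)

dt, tau, ref = 0.01, 0.1, 1.0     # sample time, plant time constant, setpoint
y, state = 0.0, (0.0, 0.0)
for _ in range(1000):
    u, state = pid_position(ref - y, state, kp=2.0, ki=5.0, kd=0.01, dt=dt)
    y += dt * (u - y) / tau       # plant update
```

The 2 DOF (velocity-form) variant differs in that the setpoint is weighted separately in the proportional and derivative terms, reducing derivative kick on reference steps.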
Procedia PDF Downloads 266
1000 A Post-Colonial Reading of Maria Edgeworth's Anglo-Irish Novels: Castle Rackrent and The Absentee
Authors: Al. Harshan, Hazamah Ali Mahdi
Abstract:
The Big House literature embodies Irish history; the Big House acquires a special dimension of moral and social significance in relation to its owners and is a metaphor for the decline of the Protestant Ascendancy that ruled in a Catholic country and oppressed a native people. In the tradition of Big House fiction, Maria Edgeworth's Castle Rackrent and The Absentee explore the effects of the Anglo-Irish Protestant Ascendancy as it governed and misgoverned Ireland. Edgeworth illustrates the tradition of the Big House as a symbol of both personal and historical themes. This paper provides a reading of Castle Rackrent and The Absentee from a post-colonial perspective and maintains that Edgeworth's novels contain elements of a radical critique of the colonialist enterprise. In our post-colonial reading of Maria Edgeworth's novels, one that goes beyond considering works such as those of Sir Walter Scott, evidence has been found of Edgeworth's colonial ideology. The significance of Castle Rackrent lies mainly in the fact that it is the first English novel to speak in the voice of the colonized Irish. What is more important is that the irony and comic aspect of the novel come from its Irish narrator (Thady Quirk) and its Irish setting. Edgeworth reveals the geographical 'other' to her English reader by placing her colonized Irish narrator and his son, Jason Quirk, in a position of inferiority, emphasizing the gap between Englishness and Irishness. Furthermore, this satirical aspect is a political one: it works to create and protect the superiority of the domestic English reader over the Irish subject. In other words, the implication of the colonial system of the novel, and of its structure of dominance and subordination, is overlooked because of its comic dimension. The matrimonial plot in The Absentee functions as an imperial plot, constructing Ireland as a complementary but ever unequal partner in the family of Great Britain. 
This imperial marriage works hegemonically to produce the domestic stability considered so crucial to national and colonial stability. Moreover, in order to achieve her proper imperial plot, Edgeworth's reconciliation of England and Ireland is seen in the marriage of the Anglo-Irish (hero/Colambre) with the Irish (heroine/Grace Nugent) and in the happy bourgeois family; consequently, this becomes the model for colonizer-colonized relationships. Edgeworth must establish modes of legitimate behavior for women and men. The Absentee shows more purposefully how familial reorganization depends on the restitution of masculine authority and advantage, particularly for the Irish community.
Keywords: Maria Edgeworth, post-colonial, reading, Irish
Procedia PDF Downloads 544
999 Risk Assessment of Lead Element in Red Peppers Collected from Marketplaces in Antalya, Southern Turkey
Authors: Serpil Kilic, Ihsan Burak Cam, Murat Kilic, Timur Tongur
Abstract:
Interest in lead (Pb) has increased considerably in recent years owing to knowledge of the potential toxic effects of this element. Exposure to heavy metals above acceptable limits affects human health; indeed, Pb accumulates through food chains up to toxic concentrations and can therefore pose a potential threat to human health. A sensitive and reliable method for the determination of Pb in red pepper was developed in the present study. Samples (33 red pepper products of different brands) were purchased from different markets in Turkey. The selected method validation criteria (linearity, limit of detection, limit of quantification, recovery, and trueness) were demonstrated. Recovery values close to 100% showed adequate precision and accuracy of the analysis. According to the results of the red pepper analyses, Pb was determined in all tested samples at various concentrations. A Perkin-Elmer ELAN DRC-e model ICP-MS system was used for the detection of Pb. Organic red pepper was used as the matrix for all method validation studies. The certified reference material, Fapas chili powder, was digested and analyzed together with the different sample batches, and three replicates of each sample were digested and analyzed. The exposure levels of the elements were discussed in light of the scientific opinions of the European Food Safety Authority (EFSA), the European Union's (EU) risk assessment source for food safety. The Target Hazard Quotient (THQ) was described by the United States Environmental Protection Agency (USEPA) for the calculation of the potential health risks associated with long-term exposure to chemical pollutants. The THQ calculation incorporates the intake of elements, exposure frequency and duration, body weight, and the oral reference dose (RfD). 
A THQ value lower than one means that the exposed population is assumed to be safe, while 1 < THQ < 5 means that the exposed population is in the level-of-concern interval. In this study, the THQ of Pb was obtained as < 1. The results of the THQ calculations showed that the values were below one for all samples tested, meaning the samples did not pose a health risk to the local population. This work was supported by The Scientific Research Projects Coordination Unit of Akdeniz University, Project Number FBA-2017-2494.
Keywords: lead analyses, red pepper, risk assessment, daily exposure
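The THQ computation referenced in this abstract can be sketched directly from the USEPA-style formula; every numeric input below (consumption rate, concentration, and the oral reference dose for Pb) is an illustrative assumption, not a value reported in this study.

```python
# Hedged sketch: THQ = (EF * ED * FIR * C) / (RfD * BW * AT), with AT taken
# as ED * 365 days, FIR in g/day, and C in mg/kg (hence the 1e-3 g->kg factor).
def thq(c_mg_per_kg, intake_g_day, ef_days_yr, ed_years, bw_kg, rfd_mg_kg_day):
    at_days = ed_years * 365.0
    return (ef_days_yr * ed_years * intake_g_day * 1e-3 * c_mg_per_kg) / (
        rfd_mg_kg_day * bw_kg * at_days)

q = thq(c_mg_per_kg=0.2,       # assumed Pb concentration in red pepper
        intake_g_day=5.0,      # assumed daily red pepper consumption
        ef_days_yr=365, ed_years=30, bw_kg=70,
        rfd_mg_kg_day=0.0035)  # commonly cited oral reference value for Pb
```

With these assumed inputs the quotient comes out far below one, consistent with the study's conclusion that the tested samples pose no health risk.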
Procedia PDF Downloads 167
998 Cognitive Control Moderates the Concurrent Effect of Autistic and Schizotypal Traits on Divergent Thinking
Authors: Julie Ramain, Christine Mohr, Ahmad Abu-Akel
Abstract:
Divergent thinking, a cognitive component of creativity, and particularly the ability to generate unique and novel ideas, has been linked to both autistic and schizotypal traits. However, to our knowledge, the concurrent effect of these trait dimensions on divergent thinking has not been investigated. Moreover, it has been suggested that creativity is associated with different types of attention and cognitive control, and consequently with how information is processed in a given context. Intriguingly, and consistent with the diametric model, autistic and schizotypal traits have been associated with contrasting attentional and cognitive control styles: positive schizotypal traits have been associated with reactive cognitive control and attentional flexibility, while autistic traits have been associated with proactive cognitive control and an increased focus of attention. The current study investigated the relationship between divergent thinking, autistic and schizotypal traits, and cognitive control in a non-clinical sample of 83 individuals (males = 42%; mean age = 22.37, SD = 2.93), sufficient to detect a medium effect size. Divergent thinking was evaluated with an adapted version of the Figural Torrance Test of Creative Thinking. Crucially, since we were interested in testing divergent thinking productivity across contexts, participants were asked to generate items from basic shapes in four different contexts. The variance of the proportion of unique to total responses across contexts represented a measure of context adaptability, with lower variance indicating greater context adaptability. Cognitive control was estimated with the Behavioral Proactive Index of the AX-CPT task, with higher scores representing the ability to actively maintain goal-relevant information in a sustained, anticipatory manner. Autistic and schizotypal traits were assessed with the Autism Quotient (AQ) and the Community Assessment of Psychic Experiences (CAPE-42). 
Generalized linear models revealed a three-way interaction of autistic traits, positive schizotypal traits, and proactive cognitive control associated with increased context adaptability. Specifically, the concurrent effect of autistic and positive schizotypal traits on increased context adaptability was moderated by the level of proactive control and was only significant when proactive cognitive control was high. Our study reveals that autistic and positive schizotypal traits interactively facilitate the capacity to generate unique ideas across various contexts. However, this effect depends on cognitive control mechanisms indicative of the ability to proactively maintain attention when needed. The current results point to a unique profile of divergent thinkers who are able to tap both systematic and flexible processing modes within and across contexts. This is particularly intriguing, as such a combination of phenotypes has been proposed to explain the genius of Beethoven, Nash, and Newton.
Keywords: autism, schizotypy, creativity, cognitive control
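A three-way interaction of the kind reported above can be tested with a linear model containing all main effects and interaction terms; the sketch below uses simulated predictor scores (n = 83, as in the study) and a plain least-squares fit, so the variable names and the assumed effect size are illustrative, not the study's data or exact GLM specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 83
aq, sz, pc = rng.normal(size=(3, n))          # autistic traits, positive
                                              # schizotypy, proactive control
y = 0.5 * aq * sz * pc + rng.normal(scale=0.5, size=n)  # context adaptability

# Design matrix: intercept, main effects, all two-way and three-way terms
X = np.column_stack([np.ones(n), aq, sz, pc,
                     aq*sz, aq*pc, sz*pc, aq*sz*pc])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# t statistic for the three-way interaction coefficient (last column)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
t_3way = beta[-1] / se[-1]
```

The moderation claim then follows from probing the autistic-by-schizotypal simple effect at high versus low values of the proactive-control variable.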
Procedia PDF Downloads 137
997 Godalisation: A Revisionist Conceptual Framework for Singapore’s Artistic Identity
Authors: Bernard Tan
Abstract:
The paper presents a conceptual framework that serves as a model of Singapore's artistic identity. Specifically, the study examines Singapore's artistic identity through the artworks of the country's significant artists, covering the period from the 1950s to the present. The literature review discusses the challenges of favouring or choosing one artist over another. The methodology provides an overview of the perspectives of local artists and surveys Singapore's artistic histories through qualitative interviews and case studies. Analysis of the qualitative data reveals that producing works of accrued visual significance for the country, works which capture its zeitgeist, further strengthens an artist's artistic identity, and consequently their works are remembered by future generations. The paper organises Singapore's artistic identity into distinctive periods: the Colonial Period (pre-1965); the Nation Building Period (1965-1988); the Globalisation Period (1989-2000); the Paternal Production Period (2001-2015); and A New Era (2015-present). Godalisation, coined from God and Globalisation by artist and art collector Teng Jee Hum, is a direct reference to the godlike influence on Singapore of its founding father, Mr Lee Kuan Yew, the country's first Prime Minister, who steered the city state "from Third World to First" for close to half a century, from 1965 to his passing in 2015. A detailed schema showing the important factors in the different art categories (key global geopolitics, key local social politics, and significant events) is analysed in depth, and the main artist groups and artist initiatives that evolved in Singapore during the different periods from pre-1965 to the present are categorised and discussed. 
Taken as a whole, these periods collectively add up to the Godalisation Era, impacted by the socio-political events and historical period of the nation and captured through the visual representations of the country’s significant artists in their attempts at either visualizing or mythologizing the Singapore Story. The author posits a correlation between a nation’s economic success and the value or price appreciation of the artworks of the country’s artists of significance. The paper poses a rhetorical question: “Which Singaporean artists will historians of the future – and, by extension, the people of the country in future generations – remember? Who will remain popular, and which artists will be forgotten?” The searching question remains: “Who will survive and be remembered in the annals of history, and, above all, how does one ensure the survival of one’s nation’s artistic identity?” The art that lasts will probably be determined by the future, in the future, where art historians pontificate from a later vantage point.
Keywords: artistic identity, art collection, godalisation, Singapore
Procedia PDF Downloads 38
996 Additive Manufacturing of Microstructured Optical Waveguides Using Two-Photon Polymerization
Authors: Leonnel Mhuka
Abstract:
Background: The field of photonics has witnessed substantial growth, with an increasing demand for miniaturized and high-performance optical components. Microstructured optical waveguides have gained significant attention due to their ability to confine and manipulate light at the subwavelength scale. Conventional fabrication methods, however, face limitations in achieving intricate and customizable waveguide structures. Two-photon polymerization (TPP) emerges as a promising additive manufacturing technique, enabling the fabrication of complex 3D microstructures with submicron resolution. Objectives: This experiment aimed to utilize two-photon polymerization to fabricate microstructured optical waveguides with precise control over geometry and dimensions. The objective was to demonstrate the feasibility of TPP as an additive manufacturing method for producing functional waveguide devices with enhanced performance. Methods: A femtosecond laser system operating at a wavelength of 800 nm was employed for two-photon polymerization. A custom-designed CAD model of the microstructured waveguide was converted into G-code, which guided the laser focus through a photosensitive polymer material. The waveguide structures were fabricated using a layer-by-layer approach, with each layer formed by localized polymerization induced by non-linear absorption of the laser light. Characterization of the fabricated waveguides included optical microscopy, scanning electron microscopy, and optical transmission measurements. The optical properties, such as mode confinement and propagation losses, were evaluated to assess the performance of the additively manufactured waveguides. Results: The experiment successfully demonstrated the additive manufacturing of microstructured optical waveguides using two-photon polymerization. Optical microscopy and scanning electron microscopy revealed the intricate 3D structures with submicron resolution.
The measured optical transmission indicated efficient light propagation through the fabricated waveguides. The waveguides exhibited well-defined mode confinement and relatively low propagation losses, showcasing the potential of TPP-based additive manufacturing for photonics applications. The experiment highlighted the advantages of TPP in achieving high-resolution, customized, and functional microstructured optical waveguides. Conclusion: This experiment substantiates the viability of two-photon polymerization as an innovative additive manufacturing technique for producing complex microstructured optical waveguides. The successful fabrication and characterization of these waveguides open doors to further advancements in the field of photonics, enabling the development of high-performance integrated optical devices for various applications.
Keywords: additive manufacturing, microstructured optical waveguides, two-photon polymerization, photonics applications
Procedia PDF Downloads 100
995 Determining the Thermal Performance and Comfort Indices of a Naturally Ventilated Room with Reduced Density Reinforced Concrete Wall Construction over Conventional M-25 Grade Concrete
Authors: P. Crosby, Shiva Krishna Pavuluri, S. Rajkumar
Abstract:
Purpose: Occupied built-up space can be broadly classified as air-conditioned or naturally ventilated. Regardless of the building type, the objective of all occupied built-up space is to provide a thermally acceptable environment for human occupancy. Considering this aspect, air-conditioned spaces allow a greater degree of flexibility to control and modulate comfort parameters during the operation phase. However, in the case of naturally ventilated space, a number of design features favouring indoor thermal comfort must be conceptualized from the design phase onward. One primary design feature that requires prioritization is the selection of the building envelope material, as it governs the flow of energy from the outside environment to the occupied spaces. Research Methodology: In India and many countries across the globe, the standard material used for building envelopes is reinforced concrete (i.e., M-25 grade concrete). The comfort inside an RC built environment in a warm and humid climate (i.e., mid-day temperatures of 30-35˚C, diurnal variation of 5-8˚C, and RH of 70-90%) is unsatisfactory. This study focuses on reviewing the impact of the mix design of conventional M-25 grade concrete on indoor thermal comfort. In this mix design, air entrainment is introduced to reduce the density of M-25 grade concrete to the range of 2000 to 2100 kg/m³. Thermal performance parameters and indoor comfort indices are analyzed for the proposed mix and compared against the conventional M-25 grade. Diverse methodologies govern indoor comfort calculation. In this study, three varied approaches are adopted to calculate comfort: a) the Indian Adaptive Thermal Comfort model, b) the Tropical Summer Index (TSI), and c) air temperature less than 33˚C with RH less than 70%. The data required for the thermal comfort study are acquired by a field measurement approach (i.e., for the new mix design) and a simulation approach using DesignBuilder (i.e.
for the conventional concrete grade). Findings: The analysis indicates that the Tropical Summer Index has a higher degree of stringency in determining the occupant comfort band, while also providing a wider thermally tolerable band than the other methodologies in the context of this study. Another important finding is that the new mix design ensures a 10% reduction in indoor air temperature (IAT) relative to the outdoor dry bulb temperature (ODBT) during the day. This translates to a significant temperature difference of 6˚C between IAT and ODBT.
Keywords: Indian adaptive thermal comfort, indoor air temperature, thermal comfort, tropical summer index
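Criterion (c) above is the simplest of the three comfort measures and can be expressed directly in code. This is a sketch using only the two thresholds quoted in the abstract; the function name and example readings are illustrative.

```python
def is_comfortable(air_temp_c: float, rh_percent: float) -> bool:
    """Criterion (c) from the study: indoor air temperature below 33 deg C
    and relative humidity below 70% are treated as thermally comfortable."""
    return air_temp_c < 33.0 and rh_percent < 70.0

print(is_comfortable(31.5, 65.0))  # True: both thresholds satisfied
print(is_comfortable(34.0, 65.0))  # False: air temperature too high
print(is_comfortable(31.5, 75.0))  # False: relative humidity too high
```

The Indian Adaptive model and the TSI require additional inputs (outdoor running-mean temperature, globe and wet-bulb temperatures, air velocity), which is why criterion (c) is the cheapest to evaluate from basic field measurements.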
Procedia PDF Downloads 320
994 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength
Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong
Abstract:
This paper presents the evaluation of various soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, such that proper treatment and usage can be applied. However, this process is time-consuming and labor-intensive. Thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as suitable additions to the computer vision system to further develop this innovative, non-destructive, and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). A previous study found that the ANN model coupled with the apparent electrical resistivity (ρ) can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three items were targeted to be added to the computer vision scheme: the apparent electrical resistivity of soil (ρ) measured using a set of four probes arranged in Wenner’s array, the soil strength measured using a modified mini cone penetrometer, and w measured using a set of time-domain reflectometry (TDR) probes.
Laboratory proof-of-concept was conducted through a series of seven tests on three types of soil: “Good Earth”, “Soft Clay”, and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w, and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementary methods to the computer vision system.
Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification
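The four-probe measurement uses Wenner's array, for which the apparent-resistivity formula is standard: ρ = 2πa·(V/I) for equal probe spacing a. A small sketch follows; the example readings are hypothetical, not the paper's data.

```python
import math

def wenner_resistivity(spacing_m: float, voltage_v: float, current_a: float) -> float:
    """Apparent soil resistivity (ohm-m) for a Wenner four-probe array:
    rho = 2 * pi * a * (V / I), where a is the equal probe spacing,
    V the potential difference across the inner probes, and I the
    current injected through the outer probes."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# Hypothetical field reading: 0.5 m spacing, 0.2 V measured, 10 mA injected
rho = wenner_resistivity(spacing_m=0.5, voltage_v=0.2, current_a=0.01)
print(round(rho, 2))  # 62.83 ohm-m
```

Because clayey soils generally hold more water and ions, "Soft Clay" would be expected to return a lower ρ than "Good Earth", which is what makes ρ a useful feature alongside the GLCM texture inputs.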
Procedia PDF Downloads 239
993 Followership Styles in the U.S. Hospitality Workforce: A Multi-Generational Comparison Study
Authors: Yinghua Huang, Tsu-Hong Yen
Abstract:
The latest advances in leadership research have revealed that leadership is co-created through the combined actions of leading and following. The role of followers is as important as that of leaders in the leadership process. However, previous leadership studies often conceptualize leadership as a leader-centric process, and the role of followers is largely neglected in the literature. Only recently have followership studies received more attention, because the character and behavior of followers are as vital as those of the leader during the leadership process. Yet, there is a dearth of followership research in the context of the tourism and hospitality industries. Therefore, this study seeks to fill this knowledge gap and investigate followership styles in the U.S. hospitality workforce. In particular, the objectives of this study are to identify popular followership practices among hospitality employees and to evaluate hospitality employees' followership styles using Kelley’s followership typology framework. This study also compared generational differences in followership styles among hospitality employees. According to the U.S. Bureau of Labor Statistics, the workforce in the lodging and foodservice sectors consisted of around 12% baby boomers, 29% Gen Xs, 23% Gen Ys, and 36% Gen Zs in 2019. The diversity of workforce demographics in the U.S. hospitality industry calls for more attention to understanding the generational differences in followership styles and organizational performance. This study conducted in-depth interviews and a questionnaire survey to collect both qualitative and quantitative data. A snowball sampling method was used to recruit participants working in the hospitality industry in the San Francisco Bay Area, California, USA. A total of 120 hospitality employees participated in this study, including 22 baby boomers, 32 Gen Xs, 30 Gen Ys, and 36 Gen Zs. 45% of the participants were male, and 55% were female.
The findings of this study identified good followership practices across the multi-generational participants. For example, a Gen Y participant said that 'followership involves learning and molding oneself after another person, usually an expert in an area of interest. I think of followership as personal and professional development. I learn and get better by hands-on training and experience'. A Gen X participant said that 'I can excel by not being fearful of taking on unfamiliar tasks and accepting challenges.' Furthermore, this study identified all five types in Kelley’s followership model among the participants: 45% exemplary followers, 13% pragmatist followers, 2% alienated followers, 18% passive followers, and 23% conformist followers. Generational differences in followership styles were also identified. The findings of this study contribute to the hospitality human resource literature by identifying multi-generational perspectives on followership styles among hospitality employees. The findings provide valuable insights for hospitality leaders to understand their followers better. Hospitality leaders are advised to adjust their leadership styles and communication strategies based on employees' different followership styles.
Keywords: followership, hospitality workforce, generational diversity, Kelley’s followership typology
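Kelley's typology places followers on two dimensions, independent critical thinking and active engagement, and the five types reported above fall out of that grid. The sketch below shows the mapping as a simple classifier; the 0-10 score scale and cut-offs are illustrative only and are not Kelley's actual instrument or scoring rules.

```python
def kelley_type(independent_thinking: float, active_engagement: float) -> str:
    """Classify a follower on Kelley's two dimensions (illustrative 0-10
    scores). Exemplary: high on both; alienated: independent but disengaged;
    conformist: engaged but dependent; passive: low on both;
    pragmatist: mid-range on both dimensions."""
    lo, hi = 4.0, 6.0
    if lo <= independent_thinking <= hi and lo <= active_engagement <= hi:
        return "pragmatist"
    thinking_high = independent_thinking > hi
    engaged_high = active_engagement > hi
    if thinking_high and engaged_high:
        return "exemplary"
    if thinking_high:
        return "alienated"
    if engaged_high:
        return "conformist"
    return "passive"

print(kelley_type(8, 8))  # exemplary
print(kelley_type(2, 8))  # conformist
print(kelley_type(8, 2))  # alienated
```

In the study's sample the exemplary quadrant dominates (45%), while only 2% fall into the alienated quadrant.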
Procedia PDF Downloads 129
992 Adaptive Metabolism of Lactic Acid Bacteria during Brewers' Spent Grain Fermentation
Authors: M. Acin-Albiac, P. Filannino, R. Coda, Carlo G. Rizzello, M. Gobbetti, R. Di Cagno
Abstract:
Demand for the smart management of large amounts of agro-food by-products has become an area of major environmental and economic importance worldwide. Brewers' spent grain (BSG), the most abundant by-product generated in the beer-brewing process, represents an example of a valuable raw material and source of health-promoting compounds. To date, the valorization of BSG as a food ingredient has been limited due to its poor technological and sensory properties. Tailored bioprocessing through lactic acid bacteria (LAB) fermentation is a versatile and sustainable means for the exploitation of food industry by-products. Indigestible carbohydrates (e.g., hemicelluloses and celluloses), a high phenolic content, and especially lignin make BSG a hostile environment for microbial survival. Hence, the selection of tailored starters is required for successful fermentation. Our study investigated the metabolic strategies of Leuconostoc pseudomesenteroides and Lactobacillus plantarum strains to exploit BSG as a food ingredient. Two distinctive BSG samples from different breweries (Italian IT-BSG and Finnish FL-BSG) were microbially and chemically characterized. Growth kinetics, organic acid profiles, and the evolution of phenolic profiles during fermentation in two BSG model media were determined. The results were further complemented with gene expression analysis targeting genes involved in the degradation of cellulose and hemicellulose building blocks and the metabolism of anti-nutritional factors. Overall, the results were LAB genus-dependent, showing distinctive metabolic capabilities. Leuc. pseudomesenteroides DSM 20193 may degrade BSG xylans, while sucrose metabolism could be further exploited for extracellular polymeric substances (EPS) production to enhance BSG pro-technological properties. Although L. plantarum strains may follow the same metabolic strategies during BSG fermentation, the mode of action used to pursue such strategies was strain-dependent. L.
plantarum PU1 showed a greater preference for β-galactans than strain WCFS1, while the preference for arabinose occurred at different metabolic phases. Phenolic compound profiling highlighted a novel metabolic route for lignin metabolism. These findings will improve the understanding of how lactic acid bacteria transform BSG into economically valuable food ingredients.
Keywords: brewery by-product valorization, metabolism of plant phenolics, metabolism of lactic acid bacteria, gene expression
Procedia PDF Downloads 129
991 Robust Processing of Antenna Array Signals under Local Scattering Environments
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
An adaptive array beamformer is designed to automatically preserve desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environmental changes calls for robust adaptive beamforming techniques. The linearly constrained minimum variance design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. Regarding the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer that counters the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required to obtain an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created.
Then, projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch
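A minimal numerical sketch of the standard GSC structure the abstract builds on: the quiescent branch keeps unit gain toward the presumed steering vector, the blocking matrix spans the null space of that vector, and the adaptive branch minimizes output power. This illustrates only the quiescent/blocking decomposition, not the paper's proposed direction-estimation or subspace-projection steps; the array geometry and signal parameters are hypothetical.

```python
import numpy as np

def steering_vector(n: int, theta: float, spacing: float = 0.5) -> np.ndarray:
    """Uniform linear array steering vector; spacing in wavelengths."""
    return np.exp(-2j * np.pi * spacing * np.arange(n) * np.sin(theta))

def gsc_weights(a: np.ndarray, R: np.ndarray) -> np.ndarray:
    """GSC weights: w = w_q - B w_a, where w_q keeps unit gain toward a,
    B blocks the direction a, and w_a minimizes the output power."""
    w_q = a / (a.conj() @ a)                     # satisfies a^H w_q = 1
    _, _, vh = np.linalg.svd(a.conj()[None, :])  # SVD of the 1 x n row a^H
    B = vh[1:].conj().T                          # n x (n-1) blocking matrix, a^H B = 0
    w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)
    return w_q - B @ w_a

rng = np.random.default_rng(0)
n, snaps = 8, 4000
a_d = steering_vector(n, 0.0)                  # presumed desired direction: broadside
a_i = steering_vector(n, np.deg2rad(30.0))     # interferer at 30 degrees

s = rng.normal(size=snaps)                     # desired waveform (unit power)
intf = 3.0 * rng.normal(size=snaps)            # strong interferer
noise = 0.1 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
X = np.outer(a_d, s) + np.outer(a_i, intf) + noise
R = X @ X.conj().T / snaps                     # sample covariance matrix

w = gsc_weights(a_d, R)
print(round(abs(w.conj() @ a_d), 3))           # unit gain toward the desired signal: 1.0
print(abs(w.conj() @ a_i) < 0.1)               # deep null toward the interferer: True
```

The sensitivity the abstract describes appears when `a_d` is mismatched (e.g., the true arrival is a few degrees off broadside): the constraint then protects the wrong direction and the adaptive branch starts cancelling the SOI, which is exactly what the proposed direction-estimation step is designed to prevent.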
Procedia PDF Downloads 112
990 The Efficacy of Preoperative Thermal Pulsation Treatment in Reducing Post Cataract Surgery Dry Eye Disease: A Systematic Review and Meta-analysis
Authors: Lugean K. Alomari, Rahaf K. Sharif, Basil K. Alomari, Hind M. Aljabri, Faisal F. Aljahdali, Amal A. Alomari, Saeed A. Alghamdi
Abstract:
Background: The thermal pulsation system is a therapy that uses heat and massage to treat dry eye disease; several trials have been published comparing it with conventional treatment. The aim of this study is to conduct a systematic review and meta-analysis comparing the efficacy of thermal pulsation systems with conventional treatment in patients undergoing cataract surgery. Methods: The Medline, Embase, and Cochrane Central Register of Controlled Trials (CENTRAL) databases were searched for eligible trials. We included three randomized controlled trials (RCTs) that compared the thermal pulsation system with conventional treatment in patients undergoing cataract surgery. A table of characteristics was compiled, and the quality of the studies was assessed using the Cochrane risk-of-bias tool for randomized trials (RoB 2). Forest plots were generated using the random-effects inverse-variance method. The χ² test and Higgins' I² statistic were used to assess heterogeneity. A total of 201 cataract surgery patients were included, with 105 undergoing preoperative pulsation therapy and 96 receiving conventional treatment. Demographic analysis revealed comparable distributions across groups. Results: All the studies in our analysis are of good quality with a low risk of bias. A total of 201 patients were included in the analysis, of whom 105 underwent pulsation therapy and 95 were in the control group. Tear break-up time (TBUT) analysis revealed no significant baseline differences; however, pulsation therapy was superior at one month (SMD 0.42 [95% CI 0.14 - 0.70], p=0.004). This positive trend continued at three months (SMD 0.52 [95% CI 0.20 - 0.84], p=0.002). Corneal fluorescein staining scores and Meibomian gland-yielding secretion scores showed no significant differences at baseline.
However, at one month, pulsation therapy significantly improved Meibomian gland function (SMD -0.86 [95% CI -1.20 - -0.53], p<0.00001), indicating a reduced risk of dry eye syndrome. Conclusion: Preoperative pulsation therapy appears to enhance post-cataract surgery outcomes, particularly in terms of tear film stability and Meibomian gland secretory function. The sustained positive effects observed at one and three months post-surgery suggest the potential for long-term benefits.
Keywords: lipiflow, cataract, thermal pulsation, dry eye
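The random-effects inverse-variance pooling and the I² statistic used above follow standard meta-analysis formulas (DerSimonian-Laird estimation of between-study variance). A self-contained sketch follows; the per-trial SMDs and variances are hypothetical illustrations, not the review's extracted data.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling via the inverse-variance
    method, returning the pooled estimate, a 95% CI, and Higgins' I^2."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical SMDs and variances for three trials (illustrative only)
pooled, ci, i2 = random_effects_pool([0.42, 0.52, 0.35], [0.02, 0.025, 0.03])
print(round(pooled, 2))  # 0.43
```

When Q falls below its degrees of freedom, as with these illustrative inputs, τ² and I² are truncated to zero and the random-effects result coincides with the fixed-effect one.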
Procedia PDF Downloads 20