Search results for: predictive insights
882 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence
Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej
Abstract:
In this study, virtual meters are designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Because they are difficult to manage and monitor, many HVAC systems suffer from a high level of inefficiency and energy wastage. Virtual meters are implemented and applied in an actual HVAC system, and the results confirm the practicality of mathematical sensors as an alternative means of energy measurement. Most residential buildings and offices are not equipped with advanced sensors, and adding, operating, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on the available sensors, without any physical energy meters, and to show that virtual meters perform as reliable measurement devices in HVAC systems. To demonstrate this concept, mathematical models are created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models are built on and integrated with the system's historical data and physical spot measurements, and the actual measurements are used to verify the models' accuracy. Based on preliminary analysis, the resulting mathematical models successfully plot energy consumption patterns, and it is concluded with confidence that the virtual meter results will be close to those that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in building HVAC systems to improve energy management and efficiency.
Using a data mining approach, the virtual meters' data are recorded as historical data, and HVAC system energy consumption prediction is implemented in order to harness substantial energy savings and manage demand and supply effectively. Energy prediction can inform energy-saving strategies and open a window onto predictive control aimed at lower energy consumption. To address these challenges, energy prediction can optimize the HVAC system and automate energy consumption to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency that would allow a quick and efficient response to energy consumption and cost spikes in the energy market.
Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction
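As an illustrative sketch of the energy-balance idea this abstract describes, a "virtual meter" infers thermal power from sensors an AHU typically already has (airflow and air temperatures) instead of a dedicated energy meter. The sensor inputs, constants, and function below are invented for illustration and are not the study's actual model.

```python
# Hypothetical virtual meter for an AHU heating/cooling coil: thermal power
# is computed from an energy balance over the air stream rather than read
# from a physical energy meter. Values are illustrative assumptions.

CP_AIR = 1.006   # specific heat of air, kJ/(kg*K)
RHO_AIR = 1.2    # approximate air density, kg/m^3

def coil_power_kw(airflow_m3_s: float, t_in_c: float, t_out_c: float) -> float:
    """Estimate coil thermal power (kW) via Q = m_dot * cp * (T_out - T_in),
    with mass flow m_dot = rho * V_dot taken from the airflow sensor."""
    m_dot = RHO_AIR * airflow_m3_s          # mass flow rate, kg/s
    return m_dot * CP_AIR * (t_out_c - t_in_c)

# Example: 5 m^3/s of air heated from 10 C to 22 C
print(round(coil_power_kw(5.0, 10.0, 22.0), 1))  # -> 72.4
```

Logging this estimate over time yields exactly the kind of historical series the abstract's second part feeds into a prediction model.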
Procedia PDF Downloads 105
881 New Insights Into Gluten-Free Bread Staling Treatment
Authors: Sayed Mostafa, Siham Mostafa Mohamed Faheid, Ibrahim Rizk Sayed Ahmed, Yasser Fehry Mohamed Kishk, Gamal Hassan Ragab
Abstract:
Gluten-free foods are still the only treatment for gluten-allergic patients. Consequently, this study is concerned with improving the quality attributes of gluten-free bread using different concentrations (0, 20, 40, 60 and 80 ppm) of maltogenic α-amylase (MA) and xylanase (XY), compared with wheat flour Balady bread and untreated gluten-free Balady bread (GFBB). Pasting properties, falling number (FN), water activity, alkaline water retention capacity (AWRC) and sensory properties (fresh bread, after 24 h, after 48 h and after 72 h) of gluten-free bread were evaluated. Additionally, the effect of merging different concentrations of maltogenic α-amylase and xylanase on staling behavior (AWRC) and sensory properties of gluten-free Balady bread was investigated. The addition of MA gradually decreased the peak viscosity, breakdown, setback and pasting temperature of GFBB as the level of MA increased. Maltogenic α-amylase and xylanase addition reduced the FN values compared to the untreated gluten-free sample, with the MA-treated samples showing a significant decrease compared to the XY-treated and untreated samples. Wheat flour Balady bread showed a significantly higher AWRC than untreated gluten-free Balady bread at the different storage periods (zero time, after 24 h, after 48 h and after 72 h). MA-treated samples showed higher water binding capacity and water activity (aw) than XY-treated samples, with significance during all storage periods. Concerning overall acceptability on the third day, the highest score (4.6) was observed for the GFBB sample containing 40 ppm MA, followed by 4.3 for the GFBB sample containing 80 ppm XY, with no significant difference between them but a significant difference compared to the other samples.
Keywords: celiac disease, gluten-free products, anti-staling agents, maltogenic α-amylase, xylanase
Procedia PDF Downloads 85
880 Foucault and the Archaeology of Transhumanism
Authors: Michel Foucault, Friedrich Nietzsche, Max More, Natasha Vita-More, Francesca Ferrando
Abstract:
During the early part of his intellectual and academic career (the 1950s and 1960s), Michel Foucault developed an interest in what we can call the ‘anthropological question’, or how our modernity deals with human nature from an epistemological standpoint. The great originality of Foucault’s thought here lies in the fact that he approaches this question not from the perspective of the ‘sovereign subject’ (which has characterized the history of Western thought) that he wishes to disclose and ‘denounce’, but rather at the level of discourses and the way they constitute who we are, so to speak. This led him, in turn, to formulate a series of thought-provoking statements during his so-called ‘archaeological period’ of the 1960s concerning what we call ‘man’ in the West: that he is an ‘invention of recent date’ (as a proper object of concern and reflection), and, perhaps more importantly, that he might disappear in the near future, ‘like a face drawn in sand at the edge of the sea’. Foucault follows in the footsteps of Nietzsche in this regard, who had famously announced in the 19th century the ‘death of God’ and the need for future generations to surpass, so to speak, the traditional ‘Christian-centred’ Western conception of the human. While Foucault formulated such insights more than half a century ago, they appear more timely than ever today with the development and rising popularity of intellectual movements such as Transhumanism and Posthumanism, which seek to question and propose alternatives to the concepts of ‘man’ or ‘human nature’ in our culture. They rely on the same assumption as Foucault and Nietzsche: that those concepts (and the meaning we attribute to them) have become ‘obsolete’ and must therefore be overcome (at a conceptual, but also a more practical, level).
Hence, those movements not only echo the important Foucauldian reflection of the 1950s and 1960s on the ‘anthropological question’ but seem to have been literally announced by it, so to speak. The aim of this paper is therefore to show the relevance of Foucault (and in particular of his archaeological method) for understanding the nature of Transhumanism (and Posthumanism), for instance by analysing and assessing it as a form of discourse that is literally reshaping the way we understand ourselves as human beings in our (post)modern age, drawing on a number of key texts, including Foucault’s early works.
Keywords: Foucault, Nietzsche, archaeology, transhumanism, posthumanism
Procedia PDF Downloads 71
879 Exploration of RFID in Healthcare: A Data Mining Approach
Authors: Shilpa Balan
Abstract:
Radio Frequency Identification, popularly known as RFID, is used to automatically identify and track tags attached to items. This study focuses on the application of RFID in healthcare, where its adoption is a crucial technology for patient safety and inventory management. Data from RFID tags are used to identify the locations of patients and inventory in real time. Medical errors are thought to be a prominent cause of loss of life and injury, and a major advantage of RFID application in the healthcare industry is the reduction of medical errors. The healthcare industry has generated huge amounts of data; by discovering patterns and trends within the data, big data analytics can help improve patient care and lower healthcare costs. The increasing number of research publications leading to innovations in RFID applications shows the importance of this technology. This study explores the current state of RFID research in healthcare using a text mining approach; no study has yet examined the current state of RFID research in healthcare using a data mining approach. Related articles on RFID were collected from healthcare journals and news articles, covering the years 2000 to 2015. Significant keywords on the topic of focus were identified and analyzed using open source data analytics software such as RapidMiner. These analytical tools help extract pertinent information from massive volumes of data. The main benefits of adopting RFID technology in healthcare are found to include tracking medicines and equipment, upholding patient safety, and improving security. The real-time tracking features of RFID allow for enhanced supply chain management. By using big data productively, healthcare organizations can gain significant benefits; big data analytics in healthcare enables improved decisions by extracting insights from large volumes of data.
Keywords: RFID, data mining, data analysis, healthcare
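The keyword-identification step this abstract mentions can be sketched as a simple term-frequency pass. The study used RapidMiner; the plain-Python version below only mirrors the idea, and the sample abstracts and stopword list are fabricated for illustration.

```python
# Minimal keyword-frequency sketch over a small corpus of article snippets,
# approximating the "identify significant keywords" step of a text-mining
# workflow. Corpus and stopwords are invented examples.
from collections import Counter
import re

abstracts = [
    "RFID tags track patients and inventory in real time",
    "RFID reduces medical errors and improves patient safety",
    "supply chain management benefits from real-time RFID tracking",
]

STOPWORDS = {"and", "in", "the", "from", "of"}

tokens = [
    word
    for text in abstracts
    for word in re.findall(r"[a-z\-]+", text.lower())
    if word not in STOPWORDS
]
top = Counter(tokens).most_common(3)
print(top)  # 'rfid' ranks first, appearing in every snippet
```

Real workflows add stemming and TF-IDF weighting on top of this counting step, but the core idea is the same.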
Procedia PDF Downloads 234
878 Smart Water Cities for a Sustainable Future: Defining, Necessity, and Policy Pathways for Canada's Urban Water Resilience
Authors: Sima Saadi, Carolyn Johns
Abstract:
The concept of a "Smart Water City" is emerging as a framework to address critical urban water challenges, integrating technology, data, and sustainable management practices to enhance water quality, conservation, and accessibility. This paper explores the definition of a Smart Water City, examines the pressing need for such cities in Canada, and proposes policy pathways for their development. Smart Water Cities utilize advanced monitoring systems, data analytics, and integrated water resources management to optimize water usage, anticipate and mitigate environmental impacts, and engage citizens in sustainable practices. Global examples from regions such as Europe, Asia, and Australia illustrate how Smart Water City models can transform urban water systems by enhancing resilience, improving resource efficiency, and driving economic development through job creation in environmental technology sectors. For Canada, adopting Smart Water City principles could address pressing challenges, including climate-induced water stress, aging infrastructure, and the need for equitable water access across diverse urban and rural communities. Building on Canada's existing water policies and technological expertise, we propose strategic investments in digital water infrastructure, data-driven governance, and community partnerships. Through case studies, this paper offers insights into how Canadian cities could benefit from cross-sector collaboration, policy development, and funding for smart water technology. By aligning national policy with smart urban water solutions, Canada has the potential to lead globally in sustainable water management, ensuring long-term water security and environmental stewardship for its cities and communities.
Keywords: smart water city, urban water resilience, water management technology, sustainable water infrastructure, Canada water policy, smart city initiatives
Procedia PDF Downloads 9
877 The Revitalization of South-south Cooperation: Evaluation of South African Direct Investment in Cameroon
Authors: Albert Herve Nkolo Mpoko
Abstract:
The Foreign Direct Investment (FDI) landscape in Cameroon has garnered significant attention from both European and Asian nations due to perceived benefits such as capital infusion, technology transfer, and the potential for economic expansion. However, it is noteworthy that South Africa's investment presence remains comparatively subdued in Cameroon, lagging behind that of Europe and Asia. Equally surprising is the limited footprint of Africa's economic powerhouse within other African economies. This study delved into four specific facets of South African investment in Cameroon. Initially, it focused on identifying South African companies operating within Cameroon. Subsequently, the analysis assessed the correlation between South African investment and poverty alleviation. Additionally, the study examined the nexus between South African investment and technological advancement, and underscored the significance of investment incentives in both countries. Key findings of the research shed light on several crucial points. South Africa ought to reassess its economic engagement with Francophone Africa, particularly Cameroon; despite existing policies aimed at fostering investment, there remains substantial ground to cover in this realm. The proliferation of South African enterprises in Cameroon holds the potential to ameliorate poverty and foster employment opportunities across both nations, and the advent of South African firms in Cameroon can catalyse technological advancements within the region. Data collection involved surveying 100 executives from the respective administrations and conducting ten interviews. The gathered data underwent triangulation, wherein quantitative findings were juxtaposed with qualitative insights. In conclusion, the study underscores the underutilization of Cameroon by South Africa, emphasizing the untapped potential for mutual economic growth.
Furthermore, it posits that the success of South Africa's multinational corporations abroad could serve as a pivotal pillar for sustaining its domestic economy.
Keywords: FDI, transfer of technology, South-South cooperation, mutual economic growth
Procedia PDF Downloads 46
876 Exploring the Healthcare Leader's Perception of Their Role and Leadership Behaviours - Looking Through an Adult Developmental Lens
Authors: Shannon Richards-Green, Suzanne Gough, Sharon Mickan
Abstract:
Background: Healthcare leaders work in highly complex and rapidly changing environments. Consequently, they need both flexibility and the capacity to hold multiple perspectives simultaneously. My research explored how healthcare leaders understand and make sense (meaning) of their leadership experiences and how this understanding is manifested in their leadership behaviours. Methods: This grounded theory study was conducted via two one-hour interviews per participant with healthcare leaders in acute care hospitals. A total of 33 hours of interviews were conducted with 17 participants, recruited using a combination of purposive and snowball sampling. Interviews were recorded, transcribed, and coded to explore emergent patterns and relationships within the data, utilising constant comparative analysis. Each participant's adult developmental stage was defined through a subject-object interview, in alignment with the tenets of constructive development theory. Findings: Participants from acute care hospitals within Australia took part in the study, with the majority representing the executive leadership level. Broad categories emerging from the data include: Broadening perspectives and abilities as a leader, Dealing with and experiencing conflict within the workplace, Experiencing rewarding times as a leader, and Leading in alignment with a strong personal values system. Discussion: Successfully dealing with complex challenges requires an ability to engage with nuanced perspectives and responses, an integral part of adult developmental growth. In dealing with conflict, for example, leaders at different levels of adult development approached the situation quite differently. Understanding how healthcare leaders make sense of their experiences can provide insights into the value of supporting adult developmental growth in healthcare leadership.
Keywords: leadership, adult development, complexity, growth
Procedia PDF Downloads 80
875 Challenges Faced by Physician Leaders in Teaching Hospitals of Private Medical Schools in the National Capital Region, Philippines
Authors: Policarpio Jr. Joves
Abstract:
Physicians in most teaching hospitals are commonly promoted into managerial roles, yet their training is mostly in clinical and scientific skills, not in leadership competencies. When they shift into roles of physician leadership, the majority hold on to their primary identity as physicians. These conflicting roles affect their identity and, eventually, their work. The physician leaders also face additional challenges related to academics, which include the incorporation of new knowledge into the existing curriculum, the use of technology in the delivery of teaching, the need to train medical students outside of hospital wards, etc. The study aims to explore how physician leaders in teaching hospitals of private medical schools enact their leadership roles and how they face the challenges of being physician leaders. The study setting shall be the teaching hospitals of three private medical schools situated in the National Capital Region, Philippines. A multiple case study design shall be adopted in this research. Physicians shall be eligible to participate in the study if they are practicing clinicians in one of the five major clinical specialties: Internal Medicine, Pediatrics, Family Medicine, Surgery, and Obstetrics and Gynecology. They must have been teaching in the College of Medicine prior to their appointments as physician leaders in both the medical school and the teaching hospital. Semi-structured face-to-face interviews shall be utilized as the means of data collection, with open-ended questions enabling physician leaders to present narratives about their identity, role enactment, conflicts, the reactions of colleagues, and the challenges encountered in their day-to-day work as physician leaders. Interviews shall be combined with observations and a review of records to gain more insight into how the physician leaders are 'doing' management.
Within-case analysis shall be done initially, followed by a thematic analysis across the cases, referred to as cross-case analysis or cross-case synthesis.
Keywords: academic leaders, academic managers, physician leaders, physician managers
Procedia PDF Downloads 346
874 Argumentation Frameworks and Theories of Judging
Authors: Sonia Anand Knowlton
Abstract:
With the rise of artificial intelligence, computer science is becoming increasingly integrated into virtually every area of life. The law, of course, is no exception. Through argumentation frameworks (AFs), computer scientists have used abstract algebra to structure the legal reasoning process in a way that allows conclusions to be drawn from a formalized system of arguments. In AFs, arguments compete against each other for logical success and are related to one another through the binary attack relation. The prevailing arguments make up the preferred extension of the given argumentation framework, telling us which set of arguments must be accepted from a logical standpoint. There have been several developments of AFs since their original conception in the early 1990s, in efforts to make them more aligned with the human reasoning process. Generally, these developments have sought to add nuance to the factors that influence the logical success of competing arguments (e.g., giving an argument more logical strength based on the underlying value it promotes). The most cogent development was the Extended Argumentation Framework (EAF), in which attacks can themselves be attacked by other arguments, and the promotion of different competing values can be formalized within the system. This article applies the logical structure of EAFs to current theoretical understandings of judicial reasoning, contributing simultaneously to theories of judging and to the evolution of AFs. The argument is that the main limitation of EAFs, when applied to judicial reasoning, is that they require judges themselves to assign values to different arguments and then lexically order these values to determine the given framework's preferred extension. Drawing on John Rawls' A Theory of Justice, the examination that follows asks whether values are lexical and commensurable to this extent.
The analysis then suggests a potential extension of the EAF system: an approach that formalizes different "planes of attack" for competing arguments that promote lexically ordered values. This article concludes with a summary of how these insights contribute to theories of judging and of legal reasoning more broadly, specifically in indeterminate cases where judges must turn to value-based approaches.
Keywords: computer science, mathematics, law, legal theory, judging
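The acceptance computation this abstract describes can be made concrete with a minimal Dung-style framework. The sketch below computes the grounded extension (which, for the simple invented example here, coincides with the preferred extension); the EAF machinery of attacks on attacks and value orderings is deliberately omitted.

```python
# Minimal abstract argumentation framework (Dung-style). Arguments and the
# attack relation are invented for illustration.

def acceptable(arg, attacks, defenders):
    """arg is acceptable w.r.t. defenders if every attacker of arg is
    itself attacked by some member of defenders."""
    attackers = {a for (a, b) in attacks if b == arg}
    return all(any((d, att) in attacks for d in defenders) for att in attackers)

def grounded_extension(args, attacks):
    """Iterate the characteristic function from the empty set to a fixed
    point: the unique grounded extension of the framework."""
    ext = set()
    while True:
        new = {a for a in args if acceptable(a, attacks, ext)}
        if new == ext:
            return ext
        ext = new

args = {"A", "B", "C"}
attacks = {("A", "B"), ("B", "C")}   # A attacks B, B attacks C
print(sorted(grounded_extension(args, attacks)))  # ['A', 'C']
```

A is unattacked, so it is accepted; A defeats B, which reinstates C. In EAFs, whether an attack like ("A", "B") succeeds would itself depend on the values the arguments promote, which is exactly where the article locates the judge's value-ordering burden.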
Procedia PDF Downloads 60
873 Bilingual Experience Influences Different Components of Cognitive Control: Evidence from fMRI Study
Authors: Xun Sun, Le Li, Ce Mo, Lei Mo, Ruiming Wang, Guosheng Ding
Abstract:
Cognitive control plays a central role in information processing and is comprised of various components, including response suppression and inhibitory control. Response suppression is considered to inhibit the irrelevant response during the cognitive process, while inhibitory control inhibits the irrelevant stimulus in the process of cognition. Both serve distinct functions within cognitive control that enhance behavioral performance. Among the numerous factors affecting cognitive control, bilingual experience is a substantial and indispensable one. It has been reported that bilingual experience can influence the neural activity of cognitive control as a whole. However, it remains unknown how these neural influences present on the specific components of cognitive control. To explore this issue, the study applied fMRI, used an anti-saccade paradigm, and compared cerebral activations between high- and low-proficiency Chinese-English bilinguals. In doing so, the study provides experimental evidence on the brain plasticity of language and offers a necessary basis for understanding the interplay between language and cognitive control. The results showed that response suppression recruited the middle frontal gyrus (MFG) in low-proficiency Chinese-English bilinguals but the inferior parietal lobe in high-proficiency Chinese-English bilinguals. Inhibitory control engaged the superior temporal gyrus (STG) and middle temporal gyrus (MTG) in low-proficiency Chinese-English bilinguals, while the right insula cortex was more active in high-proficiency Chinese-English bilinguals during the process. These findings offer insights into how bilingual experience exerts neural influences on different components of cognitive control. Compared with low-proficiency bilinguals, high-proficiency bilinguals activate more advanced neural areas for the processing of cognitive control.
In addition, with the acquisition and accumulation of language, language experience shapes brain plasticity and changes the neural basis of cognitive control.
Keywords: bilingual experience, cognitive control, inhibitory control, response suppression
Procedia PDF Downloads 483
872 Gendering the Political Crisis in Hong Kong: A Cultural Analysis of Spectatorship on Marvel Superhero Movies in Hong Kong
Authors: Chi S. Lee
Abstract:
Marvel superhero movies have obtained unprecedented popularity around the globe. A dominant narrative in current scholarship on superhero studies holds that the political trauma of America, such as the attacks of September 11, and the masculinity represented in the superhero genre are symbolically connected through remasculinization: a standardized plot in which, before becoming a superhero, a man has to overcome a trauma in his life. Through this standardized plot, American audiences find pleasure in equating the plot of remasculinization with the situation of America, rewriting their traumatic memory and working through the economic, social, political, and psychological instability, or precarity, of their own context. Shifting the context to Hong Kong, where Marvel superhero movies have reached a dominant status in the local film market, this narrative proves limited in explaining the connection between text and context. This article aims to restore that connection through an investigation of Hong Kong audiences' spectatorship. It is argued that the masculinity represented in Marvel superhero movies no longer fits the stereotypical image of the superhero but presents itself in crisis. This crisis is resolved by the technological excess of the superpower, namely, technological remasculinization. Technological remasculinization offers a sense of futurity through which remasculinization feels achievable in the foreseeable future instead of remaining imaginary and fictional. In this way, the political crisis of Hong Kong is gendered as masculinity in crisis, to be remasculinized in the future. This gendering process is a historical product, as the symbolic equation between politics and masculinity has long been encoded in the colonial history of Hong Kong.
In short, Marvel superheroes' masculinity offers Hong Kong audiences a sense of masculine hope for overcoming the political crisis they confront in reality through a postponed identification with the superhero's masculinity. The discussion of Hong Kong audiences' spectatorship on Marvel superhero movies draws on the insights of spectatorship theory.
Keywords: political crisis in Hong Kong, Marvel superhero movies, spectatorship, technological remasculinization
Procedia PDF Downloads 279
871 Exploring Faculty Attitudes about Grades and Alternative Approaches to Grading: Pilot Study
Authors: Scott Snyder
Abstract:
Grading approaches in higher education have not changed meaningfully in over 100 years. While there is variation in the types of grades assigned across countries, most use approaches based on simple ordinal scales (e.g., letter grades). While grades are generally viewed as an indication of a student's performance, challenges arise regarding the clarity, validity, and reliability of letter grades. Research about grading in higher education has primarily focused on grade inflation, student attitudes toward grading, the impacts of grades, and the benefits of plus-minus letter grade systems. Little research is available about alternative approaches to grading, the varying approaches used by faculty within and across colleges, and faculty attitudes toward grades and alternative approaches to grading. To begin to address these gaps, a survey was conducted of faculty in a sample of departments at three diverse colleges in a southeastern state in the US. The survey focused on faculty experiences with and attitudes toward grading, the degree to which faculty innovate in teaching and grading practices, and faculty interest in alternatives to the point-system approach to grading. Responses were received from 104 instructors (a 21% response rate). The majority reported that teaching accounted for 50% or more of their academic duties. Almost all (92%) of the respondents reported using point and percentage systems for their grading. While all respondents agreed that grades should reflect the degree to which objectives were mastered, half indicated that grades should also reflect effort or improvement. Over 60% felt that grades should be predictive of success in subsequent courses or real-life applications. Most respondents disagreed that grades should compare students to other students. About 42% worried about their own grade inflation and grade inflation in their college.
Only 17% disagreed that grades mean different things depending on the instructor, while 75% thought it would be good if there were agreement. Less than 50% of respondents felt that grades were directly useful for identifying students who should or should not continue, identifying strengths and weaknesses, predicting which students will be most successful, or contributing to program monitoring of student progress. Instructors were less willing to modify assessment than they were to modify instruction and curriculum. Most respondents (76%) were interested in learning about alternative approaches to grading (e.g., specifications grading). The factors most associated with willingness to adopt a new grading approach were its clarity to students and its simplicity of adoption. Follow-up studies are underway to investigate implementations of alternative grading approaches, expand the study to universities and departments not involved in the initial study, examine student attitudes about alternative approaches, and refine the survey's measure of attitude toward the adoption of alternative grading practices. Workshops about the challenges of using percentage and point systems for determining grades, and workshops regarding alternative approaches to grading, are being offered.
Keywords: alternative approaches to grading, grades, higher education, letter grades
Procedia PDF Downloads 96
870 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges
Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch
Abstract:
Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data make it possible to gain insights into the molecular differences between tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints that specifically measure a mechanism is relatively straightforward, the interpretation of big data is more complex and would benefit from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions for verifying systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow the benchmarking of methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest has no gold standard but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework for navigating through different applications, databases and services in biology and medicine.
We will present the results obtained when analyzing data with our network-based method and introduce a datathon, to take place in Japan, that will encourage the analysis of the same datasets with other methods to allow for the consolidation of conclusions.
Keywords: big data interpretation, datathon, systems toxicology, verification
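The finding that aggregated predictions often beat individual ones can be illustrated with a toy majority-vote ensemble. The data and error patterns below are fabricated for the sketch and have nothing to do with the actual challenge datasets.

```python
# Toy illustration of crowd aggregation: three noisy binary classifiers,
# each wrong on a different sample, combined by majority vote.

truth = [1, 0, 1, 1, 0, 1, 0, 1]
preds = [
    [1, 0, 1, 0, 0, 1, 0, 1],   # predictor 1: wrong on sample 4
    [1, 1, 1, 1, 0, 1, 0, 1],   # predictor 2: wrong on sample 2
    [1, 0, 0, 1, 0, 1, 0, 1],   # predictor 3: wrong on sample 3
]

def accuracy(p, t):
    return sum(int(a == b) for a, b in zip(p, t)) / len(t)

# Majority vote per sample: mean of the three votes, rounded to 0 or 1
vote = [round(sum(col) / len(preds)) for col in zip(*preds)]

print([accuracy(p, truth) for p in preds])  # each individual: 0.875
print(accuracy(vote, truth))                # aggregated: 1.0
```

Because the predictors err on different samples, every mistake is outvoted; real challenge submissions have correlated errors, which is why aggregation helps often, not always.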
Procedia PDF Downloads 278
869 Development of Excellent Water-Repellent Coatings for Metallic and Ceramic Surfaces
Authors: Aditya Kumar
Abstract:
One of the most fascinating properties of various insect and plant surfaces in nature is their water-repellent (superhydrophobic) capability. Nature offers new insights to learn from and replicate in designing artificial superhydrophobic structures for a wide range of applications such as micro-fluidics, micro-electronics, textiles, self-cleaning surfaces, anti-corrosion, anti-fingerprint, oil/water separation, etc. In general, artificial superhydrophobic surfaces are synthesized by creating roughness and then treating the surface with low-surface-energy materials. In this work, various superhydrophobic coatings on metallic surfaces (aluminum, steel, copper, steel mesh) were synthesized by a chemical etching process using different etchants and fatty acid. Also, SiO2 nano/micro-particle-embedded polyethylene, polystyrene, and poly(methyl methacrylate) superhydrophobic coatings were synthesized on glass substrates. The effect of process parameters such as etching time, etchant concentration, and particle concentration on wettability was also studied. To assess the applications of the coatings, surface morphology, contact angle, self-cleaning, corrosion-resistance, and water-repellent characteristics were investigated at various conditions. Furthermore, the durability of the coatings was studied by performing thermal, ultraviolet, and mechanical stability tests. The surface morphology confirms the creation of rough microstructures by chemical etching or by embedding particles, and the contact angle measurements reveal the superhydrophobic nature. Experimentally, it is found that the coatings have excellent self-cleaning, anti-corrosion, and water-repellent nature. These coatings also withstand mechanical disturbances such as surface bending, adhesive peeling, and abrasion. The coatings are also thermally and ultraviolet stable. Additionally, the coatings are reproducible.
Hence, these durable superhydrophobic surfaces have many potential industrial applications.
Keywords: superhydrophobic, water-repellent, anti-corrosion, self-cleaning
Procedia PDF Downloads 295
868 Nutrient Content and Labelling Status of Pre-Packaged Beverages in Saudi Arabia
Authors: Ruyuf Y. Alnafisah, Nouf S. Alammari, Amani S. Alqahtani
Abstract:
Background: Beverage choice can have implications for the risk of non-communicable diseases. However, little is known about the nutritional content of these beverages. This study aims to describe the nutrient content of pre-packaged beverages available in the Saudi market. Design: Data were collected from the Saudi Branded Food Database (SBFD). Nutrient content was standardized in terms of units and reference volumes to ensure consistency in analysis. Results: A total of 1490 beverages were analyzed. The highest median levels of most nutrients were found among dairy products: energy (68.4 (43-188) kcal/100 ml in milkshakes); protein (8.2 (0.5-8.2) g/100 ml in yogurt drinks); total fat (2.1 (1.3-3.5) g/100 ml in milk); saturated fat (1.4 (0-1.4) g/100 ml in yogurt drinks); cholesterol (30 (0-30) mg/100 ml in yogurt drinks); sodium (65 (65-65.4) mg/100 ml in yogurt drinks); and total sugars (12.9 (7.5-27) g/100 ml in milkshakes). Carbohydrate levels were highest in nectar (13 (11.8-14.2) g/100 ml), fruit drinks (12.9 (11.9-13.9) g/100 ml), and sparkling juices (12.9 (8.8-14) g/100 ml). The highest added sugar level was observed among regular soft drinks (12 (10.8-14) g/100 ml). The average rate of nutrient declaration was 60.95%. Carbohydrate had the highest declaration rate among nutrients (99.1%), and yogurt drinks had the highest declaration rate among beverage categories (92.7%). The median content of vitamins A and D in dairy products met the mandatory addition levels. Conclusion: This study provides valuable insights into the nutrient content of pre-packaged beverages in the Saudi market. It serves as a foundation for future research and monitoring. The findings support the idea of taxing sugary beverages and raise concerns about the health effects of high sugar content in fruit juices.
Despite the inclusion of vitamins D and A in dairy products, the study highlights the need for alternative strategies to address these deficiencies.
Keywords: pre-packaged beverages, nutrient content, nutrient declaration, daily percentage value, mandatory addition of vitamins
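The standardization step described in the abstract, converting declared nutrient values to a common 100 ml reference volume, can be sketched in a few lines. All values below are hypothetical examples, not figures from the study:

```python
def per_100ml(value, serving_ml):
    """Standardize a declared nutrient value to a 100 ml reference volume."""
    return value * 100.0 / serving_ml

# e.g. a hypothetical 250 ml fruit drink declaring 33 g of total sugars per serving
print(per_100ml(33, 250))  # 13.2 g/100 ml
```

Normalizing every product to the same reference volume is what makes median levels comparable across beverage categories with different serving sizes.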
Procedia PDF Downloads 58
867 Bioinformatics Approach to Identify Physicochemical and Structural Properties Associated with Successful Cell-free Protein Synthesis
Authors: Alexander A. Tokmakov
Abstract:
Cell-free protein synthesis is widely used to synthesize recombinant proteins. It allows genome-scale expression of various polypeptides under strictly controlled, uniform conditions. However, only a minor fraction of all proteins can be successfully expressed in the protein synthesis systems currently used, and the factors determining expression success are poorly understood. At present, a vast volume of data has accumulated in cell-free expression databases, making possible comprehensive bioinformatics analysis and identification of multiple features associated with successful cell-free expression. Here, we describe an approach aimed at identifying multiple physicochemical and structural properties of amino acid sequences associated with protein solubility and aggregation, and highlight major correlations obtained using this approach. The developed method includes: categorical assessment of the protein expression data, calculation and prediction of multiple properties of expressed amino acid sequences, correlation of the individual properties with the expression scores, and evaluation of the statistical significance of the observed correlations. Using this approach, we revealed a number of statistically significant correlations between calculated and predicted features of protein sequences and their amenability to cell-free expression. It was found that some of the features, such as protein pI, hydrophobicity, and presence of signal sequences, are mostly related to protein solubility, whereas others, such as protein length, number of disulfide bonds, and content of secondary structure, affect mainly the expression propensity. We also demonstrated that the amenability of polypeptide sequences to cell-free expression correlates with the presence of multiple sites of post-translational modifications. The correlations revealed in this study provide a plethora of important insights into protein folding and the rationalization of protein production.
The developed bioinformatics approach can be of practical use for predicting expression success and optimizing cell-free protein synthesis.
Keywords: bioinformatics analysis, cell-free protein synthesis, expression success, optimization, recombinant proteins
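The core screening step described, correlating an individual sequence property with a categorical expression score and evaluating significance, can be sketched as below. The data are synthetic (a toy model in which longer proteins express less often, echoing the abstract's finding that protein length affects expression propensity); the rank correlation is one reasonable choice for a binary score, not necessarily the study's exact statistic:

```python
import numpy as np
from scipy import stats

# synthetic screen: one computed property (sequence length, in residues)
# versus a binary expression-success score for 200 hypothetical constructs
rng = np.random.default_rng(2)
length = rng.integers(100, 2000, 200)
# toy model: probability of successful expression decays with length
score = (rng.random(200) < 1.0 / (1.0 + length / 500)).astype(int)

# correlate the property with the expression score and test significance
rho, p = stats.spearmanr(length, score)
print(f"Spearman rho={rho:.2f}, p={p:.3g}")
```

Repeating this over every calculated or predicted feature, with multiple-testing correction, yields the ranked list of solubility- and expression-related properties the abstract describes.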
Procedia PDF Downloads 419
866 The Impact of Cryptocurrency Classification on Money Laundering: Analyzing the Preferences of Criminals for Stable Coins, Utility Coins, and Privacy Tokens
Authors: Mohamed Saad, Huda Ismail
Abstract:
The purpose of this research is to examine the impact of cryptocurrency classification on money laundering crimes and to analyze how the preferences of criminals differ according to the type of digital currency used. Specifically, we aim to explore the roles of stablecoins, utility coins, and privacy tokens in facilitating or hindering money laundering activities and to identify the key factors that influence the choices of criminals in using these cryptocurrencies. To achieve our research objectives, we used a dataset of the most highly traded cryptocurrencies (32 currencies) published on CoinMarketCap for 2022. In addition to conducting a comprehensive review of the existing literature on cryptocurrency and money laundering, with a focus on stablecoins, utility coins, and privacy tokens, we conducted several multivariate analyses. Our study reveals that the classification of cryptocurrency plays a significant role in money laundering activities, as criminals tend to prefer certain types of digital currencies over others, depending on their specific needs and goals. Specifically, we found that stablecoins are more commonly used in money laundering due to their relatively stable value and low volatility, which makes them less risky to hold and transfer. Utility coins, on the other hand, are less frequently used in money laundering due to their lack of anonymity and limited liquidity. Finally, privacy tokens, such as Monero and Zcash, are increasingly becoming a preferred choice among criminals due to their high degree of privacy and untraceability. In summary, our study highlights the importance of understanding the nuances of cryptocurrency classification in the context of money laundering and provides insights into the preferences of criminals in using digital currencies for illegal activities. Based on our findings, our recommendation to policymakers is to address the potential misuse of cryptocurrencies for money laundering.
By implementing measures to regulate stablecoins, strengthening cross-border cooperation, and fostering public-private partnerships, policymakers can help prevent and detect money laundering activities involving digital currencies.
Keywords: crime, cryptocurrency, money laundering, tokens
Procedia PDF Downloads 87
865 A Comparative Study of the Effect of Stress on the Cognitive Parameters in Women with Increased Body Mass Index before and after Menopause
Authors: Ramesh Bhat, Ammu Somanath, A. K. Nayanatara
Abstract:
Background: The increasing prevalence of overweight and obesity is a critical public health problem for women. The negative effect of stress on memory and cognitive functions has been widely explored for decades in numerous research projects using a wide range of methodologies. Deterioration of memory and other brain functions is a hallmark of Alzheimer's disease. Estrogen fluctuations and withdrawal have myriad direct effects on the central nervous system that have the potential to influence cognitive functions. Aim: The present study aims to compare the effect of stress on cognitive functions in overweight/obese women before and after menopause. Material and Methods: A total of 142 female subjects, comprising premenopausal women aged 18-44 years and postmenopausal women aged 45-60 years, were included in the sample. Participants were categorized into overweight/obese groups based on body mass index. The Perceived Stress Scale (PSS) was the major tool used for measuring the perception of stress. Based on the stress scale measurement, each group was classified into 'with stress' and 'without stress' subgroups. Addenbrooke's Cognitive Examination-III was used for measuring cognitive functions. Results: Premenopausal women with stress showed a significant (P<0.05) decrease in cognitive parameters such as attention and orientation, fluency, language, and visuospatial ability. Memory did not show any significant change in this group. In the postmenopausal stressed women, all the cognitive functions except fluency showed a significant (P<0.05) decrease. Conclusion: Stress is a significant factor affecting the cognitive functions of obese and overweight women before and after menopause. The practice of yoga, encouragement of activities such as gardening, embroidery, and games, and relaxation techniques are recommended to prevent stress.
Insights into the neurobiology before and after menopause can be gained from future studies examining the effect on the HPA axis in relation to cognition and stress.
Keywords: cognition, stress, premenopausal, body mass index
Procedia PDF Downloads 305
864 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales
Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias
Abstract:
Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of what types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition (ECog) scales, a measure of self-reported concerns for everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). A total of 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03).
However, after controlling for depression, only two specific complaints, remembering appointments, meetings, and engagements, and understanding spoken directions and instructions, were associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p<0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion. Specific complaints in the domains of Everyday Memory and Language are associated with a decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific SCD may be warranted.
Keywords: Alzheimer's disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline
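As a rough sketch of the survival modelling described above, the following estimates a hazard ratio for a single binary complaint indicator by maximizing a Breslow partial likelihood. The cohort is synthetic and the from-scratch estimator is illustrative only; an analysis like the study's would use a dedicated survival package (e.g. R's survival or Python's lifelines) with covariate adjustment for depression:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_partial_loglik(beta, time, event, x):
    """Negative Breslow partial log-likelihood, single covariate, no tie handling."""
    order = np.argsort(time)
    time, event, x = time[order], event[order], x[order]
    ll = 0.0
    for i in range(len(time)):
        if event[i]:
            risk = x[i:]  # risk set: everyone still under observation at time[i]
            ll += beta * x[i] - np.log(np.sum(np.exp(beta * risk)))
    return -ll

# synthetic cohort: complainers (x=1) progress to impairment earlier on average
rng = np.random.default_rng(0)
x = np.repeat([0, 1], 50)
time = rng.exponential(np.where(x == 1, 3.0, 6.0))  # years to progression
event = np.ones_like(x, dtype=bool)                  # all progressions observed

res = minimize_scalar(lambda b: neg_partial_loglik(b, time, event, x),
                      bounds=(-5, 5), method="bounded")
print(f"estimated hazard ratio for complainers: {np.exp(res.x):.2f}")
```

A hazard ratio above 1 for a complaint item corresponds to the HR=1.25 to 1.59 range reported before depression adjustment.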
Procedia PDF Downloads 80
863 Descriptive Assessment of Health and Safety Regulations and Its Current Situation in the Construction Industry of Pakistan
Authors: Khawaja A. Wahaj Wani, Aykut Erkal
Abstract:
Pakistan's construction industry, a key player in economic development, has experienced remarkable growth. However, the surge in activities has been accompanied by dangerous working conditions, attributed to legislative gaps and flaws. Unhealthy construction practices, uncertain site conditions, and hazardous environments contribute to a concerning rate of injuries and fatalities. The principal aim of this research study is to undertake a thorough evaluation of the current state of health and safety policies, based on surveys performed by Pakistani stakeholders, and to provide solution-centric methodologies for the enforcement of health and safety regulations within construction companies operating on project sites. Recognizing the pivotal role that the construction industry plays in bolstering a nation's economy, it is imperative to address the pressing need for heightened awareness among site engineers and laborers. The study adopts a robust approach, utilizing questionnaire surveys and interviews. As an exclusive investigative study, it encompasses all stakeholders (clients, consultants, contractors, and subcontractors), targeting PEC-registered companies. Safety performance was assessed through the examination of sixty safety procedures using SPSS-18. A high Cronbach's alpha value of 0.958 ensures data reliability, and non-parametric tests were employed due to the non-normal distribution of the data. The safety performance evaluation revealed significant insights. "Using Hoists and Cranes" and "Precautionary Measures (Shoring and Excavation)" exhibited commendable safety levels. Conversely, "Trainings on Safety" displayed a lower safety performance, alongside areas such as "Safety in Contract Documentation," "Meetings for Safety," and "Worker Participation," indicating room for improvement.
These findings provide stakeholders with a detailed understanding of current safety measures within Pakistan's construction industry.
Keywords: construction industry, health and safety regulations, Pakistan, risk management
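The reliability check reported above (Cronbach's alpha of 0.958 from SPSS-18) follows a standard formula that can be computed directly. The responses below are hypothetical Likert ratings, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n respondents x k questions) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

# hypothetical 1-5 ratings of five safety procedures by six stakeholders
resp = np.array([[5, 5, 4, 5, 5],
                 [4, 4, 4, 4, 3],
                 [2, 3, 2, 2, 2],
                 [5, 4, 5, 5, 4],
                 [1, 2, 1, 1, 2],
                 [3, 3, 3, 4, 3]])
print(round(cronbach_alpha(resp), 3))
```

Values near 1, as in this consistent toy matrix and in the study's 0.958, indicate that the questionnaire items measure the underlying construct reliably.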
Procedia PDF Downloads 55
862 A Validated Estimation Method to Predict the Interior Wall of Residential Buildings Based on Easy to Collect Variables
Authors: B. Gepts, E. Meex, E. Nuyts, E. Knaepen, G. Verbeeck
Abstract:
The importance of resource efficiency and environmental impact assessment has raised the interest in knowing the amount of materials used in buildings. If no BIM model or energy performance certificate is available, material quantities can be obtained through an estimation or time-consuming calculation. For the interior wall area, no validated estimation method exists. However, in the case of environmental impact assessment or evaluating the existing building stock as future material banks, knowledge of the material quantities used in interior walls is indispensable. This paper presents a validated method for the estimation of the interior wall area for dwellings based on easy-to-collect building characteristics. A database of 4963 residential buildings spread all over Belgium is used. The data are collected through onsite measurements of the buildings during the construction phase (between mid-2010 and mid-2017). The interior wall area refers to the area of all interior walls in the building, including the inner leaf of exterior (party) walls, minus the area of windows and doors, unless mentioned otherwise. The two predictive modelling techniques used are 1) a (stepwise) linear regression and 2) a decision tree. The best estimation method is selected based on the best R² k-fold (5) fit. The research shows that the building volume is by far the most important variable to estimate the interior wall area. A stepwise regression based on building volume per building, building typology, and type of house provides the best fit, with R² k-fold (5) = 0.88. Although the best R² k-fold value is obtained when the other parameters ‘building typology’ and ‘type of house’ are included, the contribution of these variables can be seen as statistically significant but practically irrelevant. Thus, if these parameters are not available, a simplified estimation method based on only the volume of the building can also be applied (R² k-fold = 0.87). 
The robustness and precision of the method (output) are validated three times. Firstly, the prediction of the interior wall area is checked by means of alternative calculations of the building volume and of the interior wall area; thus, other definitions are applied to the same data. Secondly, the output is tested on an extension of the database, so it has the same definitions but on other data. Thirdly, the output is checked on an unrelated database with other definitions and other data. The validation of the estimation methods demonstrates that the methods remain accurate when underlying data are changed. The method can support environmental as well as economic dimensions of impact assessment, as it can be used in early design. As it allows the prediction of the amount of interior wall materials to be produced in the future or that might become available after demolition, the presented estimation method can be part of material flow analyses on input and on output.
Keywords: buildings as material banks, building stock, estimation method, interior wall area
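The model-selection criterion used above, the 5-fold cross-validated R² of a linear regression, can be sketched as follows. The volume/wall-area data and the 0.4 m²/m³ slope are invented for illustration; only the evaluation procedure mirrors the paper's:

```python
import numpy as np

def kfold_r2(X, y, k=5, seed=0):
    """k-fold cross-validated R^2 for an ordinary least-squares fit."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    ss_res = ss_tot = 0.0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(fold)), X[fold]])
        coef, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)  # fit on train folds
        pred = Xte @ coef                                       # predict held-out fold
        ss_res += np.sum((y[fold] - pred) ** 2)
        ss_tot += np.sum((y[fold] - np.mean(y[train])) ** 2)
    return 1 - ss_res / ss_tot

# hypothetical data: interior wall area roughly proportional to building volume
rng = np.random.default_rng(1)
volume = rng.uniform(200, 1500, 300)                 # m^3
wall_area = 0.4 * volume + rng.normal(0, 20, 300)    # m^2
print(round(kfold_r2(volume.reshape(-1, 1), wall_area), 2))
```

Comparing this score for a volume-only model against one with added categorical predictors is how the paper concludes that building typology and house type are statistically significant but practically irrelevant (R² k-fold 0.88 vs. 0.87).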
Procedia PDF Downloads 31
861 Enhancing Health Information Management with Smart Rings
Authors: Bhavishya Ramchandani
Abstract:
A smart ring is a small electronic device worn on the finger. It incorporates mobile technology and features that make it simple to use. These gadgets, which resemble conventional rings and are usually designed to fit on the finger, are outfitted with features including access management, gesture control, mobile payment processing, and activity tracking. Poor sleep patterns, irregular schedules, and bad eating habits are among the health problems many people face today. Diets lacking fruits, vegetables, legumes, nuts, and whole grains are common, and individuals in India also experience metabolic issues. In the medical field, smart rings can help patients with stomach illnesses and those unable to consume meals tailored to their bodies' needs. The smart ring tracks bodily functions, including blood sugar and glucose levels, and presents the information instantly. Based on this data, the ring generates insights and a workable plan suited to the body's needs. In addition, we conducted focus groups and individual interviews as part of our core approach and discussed the difficulties participants have in maintaining a proper diet, as well as whether the smart ring would be beneficial to them. Everyone was enthusiastic about and supportive of the concept of using smart rings in healthcare, believing that these rings may assist them in maintaining their health and a well-balanced diet plan. This response came from the primary data, and working on the Emerging Technology Canvas Analysis of smart rings in healthcare has significantly improved our understanding of the technology's application in the medical field. It is believed that there will be a growing demand for smart health care as people become more conscious of their health.
The majority of individuals may ultimately adopt this ring within three to four years, once demand for it has increased, and it will significantly impact their daily lives.
Keywords: smart ring, healthcare, electronic wearable, emerging technology
Procedia PDF Downloads 64
860 Comparison of Gait Variability in Individuals with Trans-Tibial and Trans-Femoral Lower Limb Loss: A Pilot Study
Authors: Hilal Keklicek, Fatih Erbahceci, Elif Kirdi, Ali Yalcin, Semra Topuz, Ozlem Ulger, Gul Sener
Abstract:
Objectives and Goals: The stride-to-stride fluctuations in gait, known as gait variability, are a determinant of qualified locomotion. Gait variability is an important predictor of fall risk and is useful for monitoring the effects of therapeutic interventions and rehabilitation. The aim of the study was to compare gait variability in individuals with trans-tibial and trans-femoral lower limb loss. Methods: Ten individuals with traumatic unilateral trans-femoral limb loss (TF), 12 individuals with traumatic trans-tibial lower limb loss (TT), and 12 healthy individuals (HI) participated in the study. Gait characteristics, including mean step length, step length variability, ambulation index, and time on each foot, were evaluated on a treadmill. Participants walked at their preferred speed for six minutes. Data from the 4th to the 6th minute were selected for statistical analyses to eliminate the learning effect. Results: There were differences between the groups in intact limb step length variation, time on each foot, ambulation index, and mean age (p < .05) according to the Kruskal-Wallis test. Pairwise analyses showed differences between TT and TF in residual limb variation (p=.041), time on intact foot (p=.024), time on prosthetic foot (p=.024), and ambulation index (p=.003), in favor of the TT group. There were differences between the TT and HI groups in intact limb variation (p=.002), time on intact foot (p<.001), time on prosthetic foot (p<.001), and ambulation index (p<.001), in favor of the HI group. There were differences between the TF and HI groups in intact limb variation (p=.001), time on intact foot (p=.01), and ambulation index (p<.001), in favor of the HI group.
There was a difference between the groups in mean age, as the HI group was younger (p < .05). The groups were similar in step length (p > .05), and the individuals with lower limb loss were similar in duration of prosthesis use (p > .05). Conclusions: This pilot study provided basic data about gait stability in individuals with traumatic lower limb loss. The results showed that, to evaluate gait differences between different amputation levels, long-range gait analysis methods may be useful for obtaining more valuable information. On the other hand, the similarity in step length may result from effective prosthesis use or effective gait rehabilitation; notably, all participants with lower limb loss had already been trained. The differences between TT and HI, and between TF and HI, may result from age-related features; therefore, an age-matched HI population is recommended for future studies. Increasing the number of participants and comparing age-matched groups are also recommended to generalize these results.
Keywords: lower limb loss, amputee, gait variability, gait analyses
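The statistical workflow above, an omnibus Kruskal-Wallis test across the three groups followed by pairwise non-parametric comparisons, can be sketched as follows. The step-length-variability samples are invented for illustration; the study's pairwise test is not named in the abstract, so the Mann-Whitney U test is assumed here:

```python
from scipy import stats

# hypothetical step-length-variability samples (%) for the three groups
tf = [4.1, 4.5, 3.9, 4.8, 4.3, 4.6, 4.2, 4.7, 4.4, 4.0]  # trans-femoral (n=10)
tt = [3.1, 3.4, 2.9, 3.6, 3.2, 3.5, 3.0, 3.3, 3.2, 3.4]  # trans-tibial
hi = [2.0, 2.3, 1.9, 2.2, 2.1, 2.4, 2.0, 2.2, 2.1, 2.3]  # healthy individuals

h, p = stats.kruskal(tf, tt, hi)        # omnibus test across the three groups
u, p_pair = stats.mannwhitneyu(tt, tf)  # pairwise follow-up, TT vs. TF
print(f"Kruskal-Wallis p={p:.4f}, TT vs TF pairwise p={p_pair:.4f}")
```

Non-parametric tests like these are appropriate for the small group sizes (10-12 per group) in the pilot study, where normality cannot be assumed.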
Procedia PDF Downloads 280
859 Post-Disaster Recovery and Impacts on Construction Resources: Case Studies of Queensland Catastrophic Events
Authors: Scott A. Abbott
Abstract:
This paper examines the increase in the occurrence of natural disasters worldwide and the need to support vulnerable communities in post-disaster recovery. The preparation and implementation of post-disaster recovery projects need to be improved to allow communities to recover infrastructure and housing, and to recover economically and socially, following a catastrophe. With the continual rise in catastrophic events worldwide due to climate change, impacts on construction resources affect the ability to undertake post-disaster recovery. This research focuses on case studies of catastrophic events in Queensland, Australia, to contribute to the body of knowledge and gain valuable insights into lessons learned from past events and how they have been managed. The aim of this research is to collect qualitative data through semi-structured interviews with participants predominantly from the insurance sector, to understand barriers that have existed previously and that exist currently in post-disaster recovery. Existing literature was reviewed to reveal gaps in knowledge that needed to be tested. Qualitative data were collected and summarised from field research, with the results analysed and discussed. Barriers that impacted post-disaster recovery included time, cost, and resource capability and capacity. Causal themes that impacted time and cost were identified as decision making, pre-planning, and preparedness, as well as effective communication across stakeholders. The research study applied a qualitative approach to the existing literature and case studies across Queensland, Australia, to identify existing and new barriers that impact post-disaster recovery. It is recommended to implement effective procurement strategies to assist in cost control, to implement pre-planning and preparedness strategies across funders, contractors, and local governments, and to make more effective and timely decisions to reduce time and cost impacts.
Keywords: construction recovery, cost, disaster recovery, resources, time
Procedia PDF Downloads 127
858 Geophysical Mapping of Anomalies Associated with Sediments of Gwandu Formation Around Argungu and Its Environs NW, Nigeria
Authors: Adamu Abubakar, Abdulganiyu Yunusa, Likkason Othniel Kamfani, Abdulrahman Idris Augie
Abstract:
This research study was carried out in connection with potential exploration activities in the Gwandu Formation in the inland basin of northwest Nigeria. The present research aims to identify and characterize subsurface anomalies within the Gwandu Formation using electrical resistivity tomography (ERT) and magnetic surveys, providing valuable insights for mineral exploration. The study utilizes various data enhancement techniques, such as derivatives, upward continuation, and spectral analysis, alongside 2D modeling of electrical imaging profiles, to analyze subsurface structures and anomalies. Data were collected through ERT and magnetic surveys, with subsequent processing including derivatives, spectral analysis, and 2D modeling. The results indicate significant subsurface structures such as faults, folds, and sedimentary layers. The study area's geoelectric and magnetic sections illustrate the depth and distribution of sedimentary formations, enhancing understanding of the geological framework, and show that the Eocene sediments of the Gwandu Formation are overprinted by the study area's Tertiary strata. The NE-SW and E-W cross-profiles for the pseudo-geoelectric sections beneath the study area were generated using two-dimensional (2D) electrical resistivity imaging. 2D magnetic modelling, upward continuation, and derivative analysis were used to delineate the signatures of subsurface magnetic anomalies. The results also revealed that the sediment thickness ranges from ∼4.06 km to ∼23.31 km. The Moho interface, the boundary between the lower and upper mantle crusts, and the magnetic crust are located at depths of around ∼10.23 km. The vertical distance between the local models of the foundation rocks to the north and south of the Sokoto Group was approximately ∼6 to ∼8 km and ∼4.5 km, respectively.
Keywords: high-resolution aeromagnetic data, electrical resistivity imaging, subsurface anomalies, 2D forward modeling
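One of the enhancement techniques named above, upward continuation, is a Fourier-domain filter that attenuates short-wavelength (shallow-source) magnetic anomalies. The following is a minimal sketch on a synthetic anomaly grid; grid size, spacing, and amplitudes are invented, and real workflows would also handle grid padding and edge effects:

```python
import numpy as np

def upward_continue(grid, dx, height):
    """Continue a gridded magnetic anomaly upward by `height` (same units as dx)."""
    ny, nx = grid.shape
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)  # radial wavenumber
    # multiply each Fourier component by exp(-|k| h): high wavenumbers decay fastest
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.exp(-k * height)))

# synthetic anomaly: a single Gaussian high on a 64 x 64 grid, 100 m spacing
y, x = np.mgrid[0:64, 0:64]
anomaly = 50 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50)  # nT
smoothed = upward_continue(anomaly, dx=100.0, height=500.0)
```

The continued field is smoother and lower in amplitude while preserving the regional mean, which is why upward continuation helps separate deep-seated sources from near-surface noise before depth estimation.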
Procedia PDF Downloads 14
857 Uncertainty and Multifunctionality as Bridging Concepts from Socio-Ecological Resilience to Infrastructure Finance in Water Resource Decision Making
Authors: Anita Lazurko, Laszlo Pinter, Jeremy Richardson
Abstract:
Uncertain climate projections, multiple possible development futures, and a financing gap create challenges for water infrastructure decision making. In contrast to conventional predict-plan-act methods, an emerging decision paradigm that enables social-ecological resilience supports decisions that are appropriate for uncertainty and leverage social, ecological, and economic multifunctionality. Concurrently, water infrastructure project finance plays a powerful role in sustainable infrastructure development but remains disconnected from discourse in socio-ecological resilience. At the time of research, a project to transfer water from Lesotho to Botswana through South Africa in the Orange-Senqu River Basin was at the pre-feasibility stage. This case was analysed through documents and interviews to investigate how uncertainty and multifunctionality are conceptualised and considered in decisions for the resilience of water infrastructure and to explore bridging concepts that might allow project finance to better enable socio-ecological resilience. Interviewees conceptualised uncertainty as risk, ambiguity and ignorance, and multifunctionality as politically-motivated shared benefits. Numerous efforts to adopt emerging decision methods that consider these terms were in use but required compromises to accommodate the persistent, conventional decision paradigm, though a range of future opportunities was identified. Bridging these findings to finance revealed opportunities to consider a more comprehensive scope of risk, to leverage risk mitigation measures, to diffuse risks and benefits over space, time and to diverse actor groups, and to clarify roles to achieve multiple objectives for resilience. 
In addition to insights into how multiple decision paradigms interact in real-world decision contexts, the research highlights untapped potential at the juncture between socio-ecological resilience and project finance.
Keywords: socio-ecological resilience, finance, multifunctionality, uncertainty
Procedia PDF Downloads 126
856 Mastering Digital Transformation with the Strategy Tandem Innovation Inside-Out/Outside-In: An Approach to Drive New Business Models, Services and Products in the Digital Age
Authors: S. N. Susenburger, D. Boecker
Abstract:
In the age of Volatility, Uncertainty, Complexity, and Ambiguity (VUCA), where digital transformation is challenging long-standing traditional hardware and manufacturing companies, innovation needs a different methodology, strategy, mindset, and culture. What used to be a mindset of scaling by quantity is now shifting toward orchestrating ecosystems, platform business models, and service bundles. While large corporations try to mimic the nimbleness and versatile mindset of startups at the core of their digital strategies, they are facing one of the largest organizational and cultural changes in their history. This paper elaborates on how a manufacturing giant transformed its Corporate Information Technology (IT) to enable digital and Internet of Things (IoT) business while establishing the mindset and approaches of the Innovation Inside-Out/Outside-In Strategy. It gives insights into the core elements of an innovation culture and the tactics and methodologies leveraged to support the cultural shift and the transformation into an IoT company, including how the persona 'Connected Engineer' thrives in the digital innovation environment. Further, it explores how tapping domain-focused ecosystems in vibrant, innovative cities can be used as part of the strategy to facilitate partner co-innovation. Findings from several use cases, observations, and surveys led to conclusions about the strategy tandem of Innovation Inside-Out/Outside-In. The findings indicate that the phases and maturity levels at which the Innovation Inside-Out/Outside-In Strategy is activated are crucial: cultural aspects of the business and the regional ecosystem need to be considered, as well as cultural readiness of management and active contributors.
The 'not invented here' syndrome is a barrier in large corporations that needs to be addressed and managed in order to successfully drive partnerships, embrace co-innovation, and shift the mindset away from physical products toward new business models, services, and IoT platforms. This paper elaborates on various methodologies and approaches tested in different countries and cultures, including the U.S., Brazil, Mexico, and Germany.
Keywords: innovation management, innovation culture, innovation methodologies, digital transformation
Procedia PDF Downloads 146
855 A Tool Tuning Approximation Method: Exploration of the System Dynamics and Its Impact on Milling Stability When Amending Tool Stickout
Authors: Nikolai Bertelsen, Robert A. Alphinas, Klaus B. Orskov
Abstract:
The shortest possible tool stickout has been the traditional go-to approach, with expectations of increased stability and productivity. However, experimental studies at the Danish Advanced Manufacturing Research Center (DAMRC) have proven that for some tool stickout lengths, local productivity optimums exist when utilizing Stability Lobe Diagrams for chatter avoidance. This contradicts traditional logic and the best practices taught to machinists. This paper explores the vibrational characteristics and behaviour of a milling system over the tool stickout length. The experimental investigation was conducted by tap testing multiple endmills with varying tool stickout lengths. For each length, the modal parameters were recorded and mapped to visualize behavioural tendencies. Furthermore, the paper explores the correlation between the modal parameters and the Stability Lobe Diagram to outline the influence and importance of each parameter in a multi-mode system. The insights are conceptualized into a tool tuning approximation solution. It builds on an almost linear change in the natural frequencies when amending the tool stickout, which shifts the positions of the chatter-free stability lobes. Furthermore, if the natural frequencies of two modes become too close, the dynamic absorber effect sets in. This phenomenon increases the critical stable depth of cut, allowing for a more stable milling process. Validation tests of the tool tuning approximation solution have shown varying success, which outlines the need for further research on the boundary conditions of the solution to understand under which conditions it is applicable.
Once those conditions are defined, the conceptualized tool tuning approximation solution outlines an approach for quickly and roughly approximating tool stickouts, with the potential for increased stiffness and optimized productivity.
Keywords: milling, modal parameters, stability lobes, tap testing, tool tuning
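The link between modal parameters and the Stability Lobe Diagram can be illustrated with the classic single-mode, orthogonal-cutting approximation of regenerative chatter (Altintas-style). This is a textbook sketch, not the DAMRC model, and the modal values below are assumed placeholders rather than measured tap-test data:

```python
import numpy as np

# Illustrative modal parameters (assumed, not from the study):
FN = 800.0     # natural frequency [Hz]
ZETA = 0.03    # damping ratio [-]
K = 2.0e7      # modal stiffness [N/m]
KT = 6.0e8     # specific cutting-force coefficient [N/m^2]
TEETH = 4      # number of cutter teeth

def stability_lobes(k_lobe, n_points=400):
    """Critical depth of cut vs. spindle speed for one stability lobe.

    Single-mode approximation: sweep chatter frequencies just above
    resonance (where Re(G) < 0) and apply a_lim = -1 / (2 Kt Re(G)),
    with the phase condition fixing the tooth-passing period.
    """
    wn = 2 * np.pi * FN
    wc = np.linspace(1.01 * wn, 1.6 * wn, n_points)  # chatter freq [rad/s]
    r = wc / wn
    G = 1.0 / (K * (1 - r**2 + 2j * ZETA * r))       # modal FRF
    re, im = G.real, G.imag
    a_lim = -1.0 / (2.0 * KT * re)                   # critical depth [m]
    psi = np.arctan(im / re)                         # FRF phase parameter
    eps = np.pi - 2 * psi                            # phase lag per period
    T = (eps + 2 * np.pi * k_lobe) / wc              # tooth period [s]
    rpm = 60.0 / (TEETH * T)                         # spindle speed
    return rpm, a_lim
```

Because the natural frequency enters the FRF directly, a near-linear shift in fn with stickout length translates into a horizontal shift of each lobe, which is the mechanism the tool tuning approximation exploits.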
Procedia PDF Downloads 157
854 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome
Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco
Abstract:
Today, a consistent segment of the world’s population lives in urban areas, and this proportion will vastly increase in the coming decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate during the following years. The analysis of various types of data sets and their derived applications has incredible potential across crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogical techniques of data collection. This paper investigates the prospects of the data science field, which appears to be a formidable resource for assisting city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of different new layers of information would definitely enhance planners' capability to comprehend urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues in greater depth. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining the youth economic discomfort index. The statistical composite index provides insights regarding the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly show that central urban zones are more disadvantaged than peripheral ones. The experimental set-up selected the city of Rome as the testing ground of the whole investigation.
The methodology aims at applying statistical and spatial analysis to construct a composite index supporting informed, data-driven decisions for urban planning.
Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index
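A statistical composite index of this kind is typically built by normalizing each indicator to a common scale and then taking a weighted average. The sketch below shows that construction; the zone names, indicator values, and weights are invented placeholders for illustration, not the study's actual data or weighting scheme:

```python
import numpy as np

# Hypothetical indicators per urban zone (rows) -- NOT the study's data.
# Columns: unemployment rate, rent burden, NEET rate, income disadvantage.
zones = ["Centro", "Tiburtino", "EUR", "Ostia"]
indicators = np.array([
    [0.18, 0.42, 0.21, 0.55],
    [0.14, 0.35, 0.17, 0.48],
    [0.09, 0.28, 0.11, 0.35],
    [0.16, 0.31, 0.19, 0.44],
])
weights = np.array([0.3, 0.3, 0.2, 0.2])  # assumed weights, sum to 1

def composite_index(X, w):
    """Min-max normalize each indicator column, then take a weighted
    average, yielding a 0-1 discomfort score per zone."""
    X = np.asarray(X, dtype=float)
    norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    return norm @ w

scores = composite_index(indicators, weights)
ranking = sorted(zip(zones, scores), key=lambda t: -t[1])
```

Min-max normalization keeps every indicator on the same 0-1 scale so that no single variable dominates the aggregate purely through its units; the choice of weights is the main subjective step and would normally be justified statistically or by expert judgment.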
Procedia PDF Downloads 135
853 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia
Authors: Yonas Shuke Kitawa
Abstract:
Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study aims to develop a predictive model that helps to identify the spatio-temporal variation in malaria risk posed by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects that are assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that better represents the spatial correlation, by treating the areal units as the vertices of a graph and the neighbor relations as the set of edges. Furthermore, we used aggregated malaria counts in southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with the malaria threat in the area. On the other hand, the enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum.
Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks for both species were observed in districts located in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weight matrix
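The graph view of the neighborhood matrix can be sketched as follows: districts are vertices, neighbor relations are edges, and the resulting binary matrix W feeds the CAR prior through its precision matrix. The edge list and the ρ, τ values below are illustrative assumptions, and the study's optimization step for estimating the edge set is not reproduced here:

```python
import numpy as np

def neighborhood_matrix(n_districts, edges):
    """Binary neighborhood matrix W from a graph: districts are vertices,
    neighbor relations are undirected edges. Under the border-sharing
    rule the edges come from the map; the study instead estimates them
    by graph-based optimization."""
    W = np.zeros((n_districts, n_districts))
    for i, j in edges:
        W[i, j] = W[j, i] = 1
    return W

def car_precision(W, rho=0.9, tau=1.0):
    """Precision matrix of a proper CAR prior: Q = tau * (D - rho * W),
    where D is the diagonal matrix of neighbor counts and |rho| < 1
    keeps Q positive definite."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Toy example: 5 districts connected in a ring (assumed, for illustration)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
W = neighborhood_matrix(5, edges)
Q = car_precision(W)
```

Swapping the border-sharing edge set for an optimized one changes only W; the CAR machinery downstream is unchanged, which is what makes the graph-based estimate a drop-in refinement.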
Procedia PDF Downloads 79