Search results for: dynamic thresholding classification
In-silico DFT Study, Molecular Docking, ADMET Predictions, and DMS of Isoxazolidine and Isoxazoline Analogs with Anticancer Properties
Authors: Moulay Driss Mellaoui, Khadija Zaki, Khalid Abbiche, Abdallah Imjjad, Rachid Boutiddar, Abdelouahid Sbai, Aaziz Jmiai, Souad El Issami, Al Mokhtar Lamsabhi, Hanane Zejli
Abstract:
This study presents a comprehensive analysis of six isoxazolidine and isoxazoline derivatives, leveraging a multifaceted approach that combines Density Functional Theory (DFT), AdmetSAR analysis, and molecular docking simulations to explore their electronic, pharmacokinetic, and anticancer properties. Through DFT analysis, using the B3LYP-D3BJ functional and the 6-311++G(d,p) basis set, we optimized molecular geometries, analyzed vibrational frequencies, and mapped Molecular Electrostatic Potentials (MEP), identifying key sites for electrophilic attack and hydrogen bonding. Frontier Molecular Orbital (FMO) analysis and Density of States (DOS) plots revealed varying stability levels among the compounds, with 1b, 2b, and 3b showing slightly higher stability. Chemical potential assessments indicated differences in binding affinities, suggesting stronger potential interactions for compounds 1b and 2b. AdmetSAR analysis predicted favorable human intestinal absorption (HIA) rates for all compounds, highlighting compound 3b's superior oral effectiveness. Molecular docking and molecular dynamics simulations were conducted on the isoxazolidine and 4-isoxazoline derivatives targeting the EGFR receptor (PDB: 1JU6). Docking confirmed the high affinity of these compounds towards the target protein 1JU6; among the isoxazolidine derivatives, compound 3b exhibited the most favorable binding energy, with a G-score of -8.50 kcal/mol. Molecular dynamics simulations over 100 nanoseconds demonstrated the stability and potential of compound 3b as a superior candidate for anticancer applications, further supported by structural analyses including RMSD, RMSF, Rg, and SASA values. This study underscores the promising role of compound 3b in anticancer treatments, providing a solid foundation for future drug development and optimization efforts.
Keywords: isoxazolines, DFT, molecular docking, molecular dynamics, ADMET, drugs
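The chemical potential and stability comparisons above rest on standard conceptual-DFT relations over the frontier orbital energies. A minimal sketch of those formulas, using placeholder orbital energies rather than the study's computed values:

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Global reactivity descriptors from frontier orbital energies (eV),
    via the usual Koopmans-type approximations of conceptual DFT."""
    gap = e_lumo - e_homo                # HOMO-LUMO gap (stability indicator)
    mu = (e_homo + e_lumo) / 2.0         # chemical potential
    eta = (e_lumo - e_homo) / 2.0        # chemical hardness
    omega = mu ** 2 / (2.0 * eta)        # electrophilicity index
    return {"gap": gap, "mu": mu, "eta": eta, "omega": omega}

# Illustrative orbital energies in eV -- placeholders, not values from the study
d = reactivity_descriptors(e_homo=-6.2, e_lumo=-1.4)
```

A larger gap (and hardness) signals higher kinetic stability, which is the sense in which 1b, 2b, and 3b are ranked in the FMO discussion.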
Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that causes mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, consisting of 15135 images in total: 8491 images of abnormal cells and 5398 of normal cells. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger datasets. The proposed diagnostic system detects and classifies leukemia. Differently from other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50.
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, features fused from specific abstraction layers can be treated as auxiliary features and lead to further improvement of the classification accuracy. Features extracted from the lower levels are combined into higher-dimension feature maps to improve the discriminative capability of intermediate features and also to mitigate the problem of vanishing or exploding network gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model has a significant advantage in accuracy. The detailed results of each model's performance and their pros and cons will be presented at the conference.
Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
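The two-branch fusion idea described above can be sketched in Keras as follows. The global-average pooling, the 256-unit head, and the two-class softmax are our illustrative assumptions, not the authors' exact configuration (their fusion also taps intermediate abstraction layers, which is omitted here for brevity):

```python
# Sketch of a VGG19 + ResNet50 feature-fusion classifier (assumed wiring).
# weights=None avoids downloading pretrained weights; in the transfer-learning
# setting the backbones would be loaded pretrained and frozen.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, ResNet50

inp = layers.Input(shape=(224, 224, 3))
vgg = VGG19(include_top=False, weights=None, input_tensor=inp, pooling="avg")
res = ResNet50(include_top=False, weights=None, input_tensor=inp, pooling="avg")
# for layer in vgg.layers + res.layers: layer.trainable = False  # freeze backbones

fused = layers.Concatenate()([vgg.output, res.output])  # 512 + 2048 features
x = layers.Dense(256, activation="relu")(fused)
out = layers.Dense(2, activation="softmax")(x)          # normal vs abnormal cell
model = Model(inp, out)
```

Both backbones share one input tensor, so a single forward pass yields both feature vectors before concatenation.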
Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, and with that increase, computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are software implementations only and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input. The output of the algorithm is the superreduct, which is the reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table.
The algorithm described above has two disadvantages: i) it generates the superreduct instead of the reduct; ii) the additional first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block, and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called the 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC. The execution times of the reduct calculation in hardware and software were compared. Results show an increase in the speed of data processing.
Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
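The two-stage procedure (core from singleton discernibility-matrix entries, then greedy enrichment by attribute frequency) can be sketched in software as follows. The toy decision table is invented for illustration; the hardware version evaluates these blocks in parallel rather than sequentially:

```python
from collections import Counter

def discernibility(table, decision):
    """Discernibility-matrix entries: for each pair of objects with different
    decisions, the set of condition attributes distinguishing them."""
    entries = []
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if decision[i] != decision[j]:
                diff = {a for a in range(len(table[0])) if table[i][a] != table[j][a]}
                if diff:
                    entries.append(diff)
    return entries

def greedy_superreduct(table, decision):
    entries = discernibility(table, decision)
    # Stage 1: the core = attributes appearing in singleton entries
    # (this is what the 'singleton detector' block computes in hardware)
    core = {next(iter(e)) for e in entries if len(e) == 1}
    chosen = set(core)
    # Stage 2: greedily add the most frequent attribute among the entries
    # not yet covered, until every entry is discerned (a superreduct)
    uncovered = [e for e in entries if not e & chosen]
    while uncovered:
        freq = Counter(a for e in uncovered for a in e)
        chosen.add(freq.most_common(1)[0][0])
        uncovered = [e for e in uncovered if not e & chosen]
    return core, chosen

# Toy decision table: rows are objects, columns are condition attributes
table = [(1, 0, 0), (1, 0, 1), (0, 1, 0), (1, 1, 1)]
decision = [0, 0, 1, 1]
core, superreduct = greedy_superreduct(table, decision)
```

On this toy table attribute 1 alone discerns every decision-relevant pair, so the core and the superreduct coincide; in general the superreduct may contain removable extras, as the abstract notes.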
Beyond the Tragedy of Absence: Vizenor's Comedy of Native Presence
Authors: Mahdi Sepehrmanesh
Abstract:
This essay explores Gerald Vizenor's innovative concepts of the tragedy of absence and the comedy of presence as frameworks for understanding and challenging dominant narratives about Native American identity and history. Vizenor's work critiques the notion of irrevocable cultural loss and rigid definitions of Indigenous identity based on blood quantum and stereotypical practices. Through subversive humor, trickster figures, and storytelling, Vizenor asserts the active presence and continuance of Native peoples, advocating for a dynamic, self-determined understanding of Native identity. The essay examines Vizenor's use of postmodern techniques, including his engagement with simulation and hyperreality, to disrupt colonial discourses and create new spaces for Indigenous expression. It explores the concept of "crossblood" identities as a means of resisting essentialist notions of Native authenticity and embracing the complexities of contemporary Indigenous experiences. Vizenor's ideas of survivance and transmotion are analyzed as strategies for cultural resilience and adaptation in the face of ongoing colonial pressures. The interplay between absence and presence in Vizenor's work is discussed, particularly through the lens of shadow survivance and the power of storytelling. The essay also delves into Vizenor's critique of terminal creed and his promotion of natural reason as an alternative epistemology to Western rationalism. While acknowledging the significant influence of Vizenor's work on Native American literature and theory, the essay also addresses critiques of his approach, including concerns about the accessibility of his writing and its political effectiveness. Despite these debates, the essay argues that Vizenor's concepts offer a powerful vision of Indigenous futurity that is rooted in tradition yet open to change, inspiring hope and agency in the face of oppression. 
By examining Vizenor's multifaceted approach to Native American identity and presence, this essay contributes to ongoing discussions about Indigenous representation, cultural continuity, and resistance to colonial narratives in literature and beyond.
Keywords: Gerald Vizenor, Native American literature, survivance, trickster discourse, identity
Ultrasensitive Detection and Discrimination of Cancer-Related Single Nucleotide Polymorphisms Using Poly-Enzyme Polymer Bead Amplification
Authors: Lorico D. S. Lapitan Jr., Yihan Xu, Yuan Guo, Dejian Zhou
Abstract:
The ability to detect specific genes with ultrasensitivity and to discriminate single nucleotide polymorphisms is important for clinical diagnosis and biomedical research. Herein, we report the development of a new ultrasensitive approach for label-free DNA detection using magnetic nanoparticle (MNP) assisted rapid target capture/separation in combination with signal amplification using poly-enzyme tagged polymer nanobeads. The sensor uses an MNP-linked capture DNA and a biotin-modified signal DNA to sandwich-bind the target, followed by ligation, to provide high single-nucleotide polymorphism discrimination. Only the presence of a perfectly matched target DNA yields a covalent linkage between the capture and signal DNAs for subsequent conjugation of a neutravidin-modified horseradish peroxidase (HRP) enzyme through the strong biotin-neutravidin interaction. This links each captured DNA target to an HRP, which can convert millions of copies of a non-fluorescent substrate (Amplex Red) to a highly fluorescent product (resorufin), for great signal amplification. The use of polymer nanobeads, each tagged with thousands of copies of HRP, as the signal amplifier greatly improves the amplification power, leading to greatly improved sensitivity. We show our biosensing approach can specifically detect an unlabeled DNA target down to 10 aM with a wide dynamic range of 5 orders of magnitude (from 0.001 fM to 100.0 fM). Furthermore, our approach discriminates strongly between a perfectly matched gene and its cancer-related single-base mismatch targets (SNPs): it can positively detect the perfectly matched DNA target even in the presence of a 100-fold excess of co-existing SNPs. This sensing approach also works robustly in clinically relevant media (e.g. 10% human serum) and gives almost the same SNP discrimination ratio as in clean buffers.
Therefore, this ultrasensitive SNP biosensor appears to be well-suited for potential diagnostic applications in genetic diseases.
Keywords: DNA detection, polymer beads, signal amplification, single nucleotide polymorphisms
A Unified Model for Predicting Particle Settling Velocity in Pipe, Annulus and Fracture
Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li
Abstract:
Transport of solid particles through the drill pipe, the drill string-hole annulus and hydraulically generated fractures is an important dynamic process encountered in oil and gas well drilling and completion operations. Unlike particle transport in infinite space, the transport of cuttings, proppants and formation sand is hindered by a finite boundary. Therefore, an accurate description of particle transport behavior under the bounded-wall conditions encountered in drilling and hydraulic fracturing operations is needed to improve drilling safety and efficiency. In this study, particle settling experiments were carried out to investigate particle settling behavior in a pipe, an annulus and between parallel plates filled with power-law fluids. Experimental conditions covered particle Reynolds numbers of 0.01-123.87, dimensionless diameters of 0.20-0.80 and fluid flow behavior indices of 0.48-0.69. Firstly, the wall effect of the annulus is revealed by analyzing the settling process of particles in annular geometries with variable inner pipe diameter. Then, the geometric continuity among the pipe, annulus and parallel plates was established by introducing the ratio of the inner to the outer diameter of the annulus. Further, a unified dimensionless diameter was defined to confirm the relationship between the three different geometries in terms of the wall effect. In addition, a dimensionless term independent of the settling velocity was introduced to establish a unified explicit settling velocity model applicable to pipes, annuli and fractures with a mean relative error of 8.71%. An example case study was provided to demonstrate the application of the unified model for predicting particle settling velocity.
This paper is the first study of annulus wall effects based on the geometric continuity concept, and the unified model presented here will provide theoretical guidance for improved hydraulic design of cuttings transport, proppant placement and sand management operations.
Keywords: wall effect, particle settling velocity, cuttings transport, proppant transport in fracture
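The abstract does not give the closed form of the unified correlation, so the sketch below only shows two of the dimensionless groups it is built on: a generalized particle Reynolds number for a power-law fluid, and the particle-to-gap diameter ratio that puts pipe, annulus and parallel plates on one geometric footing. The generalized-Reynolds definition used here is a common one and may differ from the authors' exact choice:

```python
def powerlaw_particle_reynolds(rho_f, v, d, K, n):
    """Particle Reynolds number for a sphere settling in a power-law fluid,
    Re_p = rho_f * v**(2-n) * d**n / K, with consistency index K (Pa.s^n)
    and flow behavior index n (a standard generalized definition)."""
    return rho_f * v ** (2.0 - n) * d ** n / K

def dimensionless_diameter(d, D_outer, D_inner=0.0):
    """Ratio of particle diameter to the gap of the bounding geometry:
    a pipe when D_inner = 0, an annulus otherwise (gap = D_outer - D_inner)."""
    return d / (D_outer - D_inner)

# Illustrative values (SI units), chosen inside the experimental ranges
re_p = powerlaw_particle_reynolds(rho_f=1000.0, v=0.01, d=0.001, K=0.1, n=1.0)
ratio = dimensionless_diameter(d=0.002, D_outer=0.02, D_inner=0.01)
```

With n = 1 the expression collapses to the Newtonian Reynolds number, which is a quick sanity check on the definition.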
An Investigation on Interactions between Social Security with Police Operation and Economics in the Field of Tourism
Authors: Mohammad Mahdi Namdari, Hosein Torki
Abstract:
Security, as an abstract concept, has concerned human beings from the beginning of creation to the present, and certainly will into the future. Accordingly, battles, conflicts, challenges, legal proceedings, crimes and all issues related to humankind are associated with this concept. Today, when people are interviewed about their lives, the security of their societies and social crime come up as well. Along with security as an infrastructural and vital concept, the economy and its related issues, e.g. welfare, per capita income, total government revenue, exports, imports, etc., constitute another infrastructural and vital concept. These two vital concepts, security and the economy, are linked in complex and significant ways. The present study employs an analytical-descriptive research method using documents and statistics from official sources. Discovering and explaining this mutual connection would require profound and extensive research; hence, managing, developing and reforming the systems and relationships within the scope of these two concepts is complex and difficult. Tourism, given its position in today's economy, is one of the main pillars of the economy of the 21st century, and it may be associated with security and social crime more than the other pillars. Like all human activities, the economies of societies, and tourism in particular, depend on security, especially public and social security. On the other hand, true economic development in general, and the growth of the tourism industry in particular, generates and supports security, because a dynamic economic infrastructure prevents the formation of centers of crime and illegal activity by providing a context for socio-economic development for all segments of society in a fair and humane manner. This relationship captures the complexity between the two concepts of economy and security.
The police, as an overt, people-oriented organization in the field of security, are directly linked with the economy of a community and are very influential with respect to the tourism industry. The relationships between security, national crime indices, and economic indicators, especially those related to tourism, confirm the discussion above. Accordingly, understanding the processes around security and the economy, as two key and vital concepts, is necessary and significant for the sovereignty of governments.
Keywords: economic, police, tourism, social security
Risk-Sharing Financing of Islamic Banks: Better Shielded against Interest Rate Risk
Authors: Mirzet SeHo, Alaa Alaabed, Mansur Masih
Abstract:
In theory, risk-sharing-based financing (RSF) is considered a cornerstone of Islamic finance. It is argued to render Islamic banks more resilient to shocks. In practice, however, this feature of Islamic financial products is almost negligible. Instead, debt-based instruments with conventional-like features have overwhelmed the nascent industry. In addition, the framework of present-day economic, regulatory and financial reality inevitably exposes Islamic banks in dual banking systems to the problems of conventional banks. This includes, but is not limited to, interest rate risk. Empirical evidence has, thus far, confirmed such exposures, despite Islamic banks' interest-free operations. This study applies system GMM in modeling the determinants of RSF, and finds that RSF is insensitive to changes in interest rates. Hence, our results provide support to the "stability" view of risk-sharing-based financing. This suggests RSF as the way forward for risk management at Islamic banks, in the absence of widely acceptable Shariah-compliant hedging instruments. Further support for the stability view is given by evidence of counter-cyclicality. Unlike debt-based lending, which inflates artificial asset bubbles through credit expansion during the upswing of business cycles, RSF is negatively related to GDP growth. Our results also imply a significantly strong relationship between risk-sharing deposits and RSF. However, the pass-through of these deposits to RSF is economically low: only about 40% of risk-sharing deposits are channeled to risk-sharing financing. This raises questions about the validity of the industry's claim that depositors accustomed to conventional banking shun risk sharing, and signals potential for better balance sheet management at Islamic banks. Overall, our findings suggest that, on the one hand, Islamic banks can gain 'independence' from conventional banks and interest rates through risk-sharing products, the potential for which is enormous.
On the other hand, RSF could enable policy makers to improve systemic stability and restrain excessive credit expansion through its countercyclical features.
Keywords: Islamic banks, risk-sharing, financing, interest rate, dynamic system GMM
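The system GMM used in the study is a dynamic-panel estimator. As a minimal illustration of the underlying idea only (instrumenting an endogenous regressor and weighting the moment conditions), here is a one-step GMM estimator, equivalent to 2SLS, on simulated data; this is a didactic sketch, not the Blundell-Bond panel setup of the paper:

```python
import numpy as np

def gmm_iv(y, X, Z):
    """One-step GMM with instruments Z and weighting matrix W = (Z'Z)^-1,
    i.e. beta = (X'Z W Z'X)^-1 X'Z W Z'y (the 2SLS special case)."""
    W = np.linalg.inv(Z.T @ Z)
    A = X.T @ Z @ W @ Z.T @ X
    b = X.T @ Z @ W @ Z.T @ y
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 1))              # exogenous instrument
u = rng.normal(size=(n, 1))              # error term, correlated with x
x = z + u + rng.normal(size=(n, 1))      # endogenous regressor
y = 2.0 * x + u                          # true coefficient is 2.0
beta = gmm_iv(y, x, z)                   # consistent despite endogeneity
```

OLS on the same data would be biased upward because x and u are correlated; the instrument restores consistency, which is the same logic system GMM applies to lagged panel variables.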
Construction and Analysis of Tamazight (Berber) Text Corpus
Authors: Zayd Khayi
Abstract:
This paper deals with the construction and analysis of a Tamazight text corpus. The grammatical structure of Tamazight remains poorly understood, and the lack of a comparative grammar leads to linguistic issues. In order to fill this gap, even if only in a small way, we constructed a diachronic corpus of the Tamazight language and elaborated a program tool for it. This work is devoted to building that tool to analyze different aspects of Tamazight, with its different dialects used in the north of Africa, specifically in Morocco, focusing on three Moroccan dialects: Tamazight, Tarifiyt, and Tachlhit. The Latin script version was a good choice because of the many sources available in it. The corpus is based on the grammatical parameters and features of the language. The text collection contains more than 500 texts covering a long historical period. It is free, and it will be useful for further investigations. The texts were transformed into XML format as a standardization goal. The corpus counts more than 200,000 words. Based on linguistic rules and statistical methods, an original user interface and software prototype were developed by combining web design technologies and Python. The corpus provides users with the ability to distinguish easily between feminine/masculine nouns and verbs. The interface supports three languages: TMZ, FR, and EN. The selected texts were not initially categorized, and this classification was done manually, as within corpus linguistics there is currently no commonly accepted approach to the classification of texts. The texts are divided into ten categories. To describe and represent the texts in the corpus, we elaborated the XML structure according to the TEI recommendations. The search function can retrieve the types of words searched for, such as feminine/masculine nouns and verbs. Nouns are divided into two parts.
Gender in the corpus has two forms. The neutral form of a word corresponds to the masculine, while the feminine is indicated by a double t-t affix (the prefix t- and the suffix -t), e.g., Tarbat (girl), Tamtut (woman), Taxamt (tent), and Tislit (bride). However, there are some words whose feminine form contains only the prefix t- and the suffix -a, e.g., Tasa (liver), tawja (family), and tarwa (progenitors). Generally, masculine Tamazight words have prefixes that distinguish them from other words, for instance 'a', 'u', 'i': Asklu (tree), udi (cheese), ighef (head). Verbs in the corpus in the first person singular and plural have the suffixes 'agh', 'ex', 'egh', e.g., 'ghrex' (I study), 'fegh' (I go out), 'nadagh' (I call). The program tool supports the following operations over this corpus: listing all tokens; listing unique words; computing lexical diversity; and handling various grammatical requests. To conclude, this corpus has so far focused on a small group of parts of speech in the Tamazight language: verbs and nouns. Work is still ongoing on adjectives, pronouns, adverbs and others.
Keywords: Tamazight (Berber) language, corpus linguistics, grammar rules, statistical methods
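The morphological cues listed above (the t-...-t feminine circumfix, the t-...-a subclass, and the first-person verb suffixes) lend themselves to simple rule-based queries of the kind the corpus tool offers. A toy sketch, with function names of our own invention:

```python
def is_feminine_noun(word):
    """Heuristic from the corpus description: canonical feminine nouns carry
    the t- prefix and -t suffix (e.g. 'tarbat'), while a subclass keeps only
    the t- prefix with a final -a (e.g. 'tasa')."""
    w = word.lower()
    return w.startswith("t") and (w.endswith("t") or w.endswith("a"))

def is_first_person_verb(word):
    """First-person verb forms cited in the abstract end in -agh, -egh or -ex."""
    return word.lower().endswith(("agh", "egh", "ex"))

def lexical_diversity(tokens):
    """Type-token ratio, one of the statistics the corpus tool reports."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0
```

Such surface rules over-generate on real text (any t-initial, a-final word matches), which is why the actual tool combines them with the TEI-annotated XML structure rather than raw string tests.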
Review of Life-Cycle Analysis Applications on Sustainable Building and Construction Sector as Decision Support Tools
Abstract:
Considering the environmental issues generated by the building sector through its energy consumption, solid waste generation, water use, land use, and global greenhouse gas (GHG) emissions, this review points to LCA as a decision-support tool that can substantially improve sustainability in the building and construction industry. The comprehensiveness and simplicity of LCA make it one of the most promising decision support tools for the sustainable design and construction of future buildings. This paper contains a comprehensive review of existing studies related to LCAs, with a focus on their advantages and limitations when applied in the building sector. The aim of this paper is to enhance the understanding of building life-cycle analysis, thus promoting its application for effective, sustainable building design and construction in the future. Comparisons and discussions are carried out between four categories of LCA methods: building material and component combinations (BMCC) vs. the whole process of construction (WPC) LCA, attributional vs. consequential LCA, process-based LCA vs. input-output (I-O) LCA, and traditional vs. hybrid LCA. Classical case studies are presented which illustrate the effectiveness of LCA as a tool to support the decisions of practitioners in the design and construction of sustainable buildings. (i) The BMCC and WPC categories of LCA research tend to overlap, as the majority of WPC LCAs are actually developed with the bottom-up approach that BMCC LCAs use. (ii) When considering the influence of social and economic factors outside the system boundary proposed by the research, a consequential LCA can provide a more reliable result than an attributional LCA. (iii) I-O LCA is complementary to process-based LCA in addressing the social and economic problems generated by building projects.
(iv) Hybrid LCA provides a more dynamic perspective than traditional LCA, which is criticized for its static view of the changing processes within a building's life cycle. LCAs are still being developed to overcome their limitations and data shortages (especially data on the developing world), and the unification of LCA methods and data can make the results of building LCAs more comparable and consistent across different studies and even countries.
Keywords: decision support tool, life-cycle analysis, LCA tools and data, sustainable building design
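The process-based vs. input-output contrast in (iii) can be made concrete with the standard Leontief quantity model that underlies I-O LCA: total sectoral output satisfies x = (I - A)^-1 f, and an emission-intensity vector is then applied to x. The two-sector coefficients below are invented for illustration:

```python
import numpy as np

# Toy two-sector input-output LCA (illustrative coefficients, not real data):
# A[i, j] = input from sector i required per unit output of sector j.
A = np.array([[0.1, 0.3],    # e.g. a materials sector
              [0.2, 0.1]])   # e.g. a construction sector
f = np.array([0.0, 1.0])     # final demand: one unit of construction output
B = np.array([5.0, 2.0])     # direct emissions per unit output (kg CO2e)

x = np.linalg.solve(np.eye(2) - A, f)   # total output, direct + indirect
total_emissions = B @ x                  # economy-wide emissions for demand f
```

Because the Leontief inverse captures all upstream rounds of supply, I-O LCA avoids the truncation error of a process-based boundary, at the cost of coarse sector-level resolution.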
Strategies for Public Space Utilization
Authors: Ben Levenger
Abstract:
Social life revolves around a central meeting place or gathering space. It is where the community integrates, learns social skills, and ultimately becomes part of a whole. Following this premise, public spaces are among the most important spaces that downtowns offer, providing locations for people to be seen and heard and, most importantly, to integrate seamlessly into the downtown as part of the community. To facilitate this, these local spaces must be envisioned and designed to meet the changing needs of a downtown, offering a space and purpose for everyone. This paper dives deep into analyzing, designing, and implementing public space design for small plazas and gathering spaces. These spaces often require a detailed level of study, followed by broad-stroke design implementation that allows for adaptability. This paper highlights how to assess needs, define the types of spaces needed, outline a program for spaces, detail elements of design that meet the needs, assess your new space, and plan for change. This study provides participants with the necessary framework for conducting a grass-roots-level assessment of public space and programming, including short-term and long-term improvements. Participants will also receive assessment tools, sheets, and visual representation diagrams. Urbanism for the sake of urbanism is an exercise in aesthetic beauty; an economic improvement or benefit must be attained to further solidify the purpose of these efforts and justify the infrastructure or construction costs. To ground this work in quantitative impacts, we dive deep into case studies highlighting economic effects.
These case studies highlight the financial impact on an area, measuring the following metrics: rental rates (per square meter), tax revenue generation (sales and property), foot traffic generation, increased property valuations, currency expenditure by tenure, clustered development improvements, and the cost/valuation benefits of increased housing density. The economic impact results are grouped by community size, measured in three tiers: under 10,000 in population, 10,001 to 75,000 in population, and 75,000+ in population. Through this classification breakdown, participants can gauge the impact in communities similar to those they work in or are responsible for. Finally, a detailed analysis of specific urbanism enhancements, such as plazas, on-street dining, and pedestrian malls, is discussed. Metrics that document the economic impact of each enhancement are presented, aiding in the prioritization of improvements for each community. All materials, documents, and information will be available to participants via Google Drive. They are welcome to download the data and use it for their own purposes.
Keywords: downtown, economic development, planning, strategic
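The tier grouping and before/after metrics above can be captured in a couple of helper functions; the names and the exact boundary handling (the abstract leaves the 10,000 edge ambiguous) are our assumptions:

```python
def community_tier(population):
    """Three population tiers used to group the economic-impact metrics
    (treatment of exactly 10,000 is our assumption)."""
    if population < 10_000:
        return "small"
    if population < 75_000:
        return "mid"
    return "large"

def pct_change(before, after):
    """Before/after percentage change for metrics such as rental rates
    per square meter, property valuations, or tax revenue."""
    return 100.0 * (after - before) / before
```

Grouping each case-study metric through `community_tier` before averaging keeps small-town and metropolitan results from being pooled into one misleading figure.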
Human Values and Morality of Adolescents Who Have Broken the Law: A Multi-Method Study in a Socioeducational Institutional Environment
Authors: Luiz Nolasco Jr. Rezende, Antonio Villar M. Sá, Claudia Marcia L. Pato
Abstract:
The increasing urban violence in Brazil involves more and more infractions committed by children and youths. The challenges faced by the institutional environments responsible for the education and resocialization of adolescents in conflict with the law are enormous, especially for those deprived of their liberty. These institutions have an inadequate educational structure. They are characterized by dirty and unhealthy environments without the minimum basic conditions for their activities, and by frequent practices of degradation, humiliation, and physical and psychological punishment of inmates. This mixed-method study investigated the personal values of adolescents under restriction of freedom in a socio-educational institutional environment, aiming to contribute to the development of their morality through an educational process. For that, we used a survey and transdisciplinary play workshops involving thirty-two boys aged between 15 and 19 who had been out of school for at least two years. For the survey, the reduced version of the Portrait Questionnaire (PQ21) was used. The workshops took place once a week, lasting 80 minutes each, for a total of twelve meetings. Using the game of chess and its metaphors, participants produced texts and engaged in critical brainstorming about their lives. The survey results indicated that these young people showed a predominance of values of openness to change and self-transcendence, dissatisfaction with their own reality and surroundings, disregard for the consequences of their actions on themselves and others, difficulties in speaking and writing, and a desire for change in their lives. After the pedagogical interventions, these adolescents demonstrated an understanding of the implications of their actions for themselves and for their families, especially for their mothers, with whom they demonstrated the strongest bonds.
It was possible to observe evidence of improvement in their capacity for linguistic expression, along with more autonomy and a more critical vision, including of themselves and their respective contexts. These results demonstrate the educational potential of lively, symbolic, dynamic and playful activities that foster these adolescents' engagement and identification with their own lives, and contribute to the projection of dreams.
Keywords: adolescents arrested, human values, moral development, playful workshops
Procedia PDF Downloads 265
737 Modelling and Simulation of Hysteresis Current Controlled Single-Phase Grid-Connected Inverter
Authors: Evren Isen
Abstract:
In grid-connected renewable energy systems, the input power is controlled by an AC/DC converter and/or a DC/DC converter, depending on the output voltage of the input source. The power is injected into the DC-link, and the DC-link voltage is regulated by the inverter controlling the grid current. Inverter performance is critical in grid-connected renewable energy systems to meet utility standards. In this paper, the modelling and simulation of a hysteresis current controlled single-phase grid-connected inverter, as used in renewable energy systems such as wind and solar systems, are presented. A 2 kW single-phase grid-connected inverter is simulated in Simulink and modeled in a Matlab m-file. Grid current synchronization is obtained by the phase locked loop (PLL) technique in the dq synchronous rotating frame. Although a dq-PLL can be easily implemented in three-phase systems, generating the β component of the grid voltage is difficult in a single-phase system because only a single phase of the grid voltage exists. An inverse-Park PLL with a low-pass filter is used to generate the β component for grid angle determination. As the grid current is controlled by the constant-bandwidth hysteresis current control (HCC) technique, the average switching frequency and the variation of the switching frequency over a fundamental period are considered. A total harmonic distortion of 3.56% for the grid current is achieved with a 0.5 A bandwidth. Curves of the average switching frequency and the total harmonic distortion for different hysteresis bandwidths are obtained from the m-file model. The average switching frequency is 25.6 kHz, while the switching frequency varies between 14 kHz and 38 kHz over a fundamental period. The average and maximum frequency difference should be considered when selecting the solid-state switching device and designing the driver circuit. Steady-state and dynamic response performances of the inverter, depending on the input power, are presented with waveforms. 
The control algorithm regulates the DC-link voltage by adjusting the output power.
Keywords: grid-connected inverter, hysteresis current control, inverter modelling, single-phase inverter
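As a rough illustration of the constant-bandwidth hysteresis current control described in this abstract, the following sketch simulates one fundamental period of a single-phase inverter modeled as a simple L-filter. All parameter values (DC-link voltage, filter inductance, reference current) are hypothetical and are not taken from the paper.

```python
import math

def simulate_hcc(V_dc=400.0, V_grid_peak=325.0, L=5e-3, I_ref_peak=12.0,
                 band=0.5, f_grid=50.0, dt=1e-6, cycles=1):
    """Constant-bandwidth hysteresis current control of a single-phase
    grid-connected inverter, modeled as a simple L-filter:
    L * di/dt = v_inv - v_grid, with v_inv toggling between +/-V_dc."""
    i = 0.0          # injected grid current
    state = 1        # +V_dc currently applied by the bridge
    toggles = 0
    max_err = 0.0
    t, t_end = 0.0, cycles / f_grid
    while t < t_end:
        i_ref = I_ref_peak * math.sin(2 * math.pi * f_grid * t)
        err = i_ref - i
        max_err = max(max_err, abs(err))
        # toggle the bridge output when the error leaves the hysteresis band
        if err > band and state != 1:
            state, toggles = 1, toggles + 1
        elif err < -band and state != -1:
            state, toggles = -1, toggles + 1
        v_grid = V_grid_peak * math.sin(2 * math.pi * f_grid * t)
        i += dt * (state * V_dc - v_grid) / L
        t += dt
    avg_sw_freq = toggles / 2.0 / t_end  # two toggles per switching cycle
    return avg_sw_freq, max_err

avg_fsw, max_err = simulate_hcc()
```

Even with these made-up parameters, the switching frequency varies over the fundamental period, as the abstract describes, because the current slew rate depends on the instantaneous grid voltage.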
Procedia PDF Downloads 479
736 Study on Control Techniques for Adaptive Impact Mitigation
Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty
Abstract:
Progress in the field of sensors, electronics and computing has led to increasingly frequent application of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezoelectric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion used for the evacuation of people from heights during a fire. For both systems it is possible to ensure adaptive performance, but the realization of the system’s adaptation is completely different. The reason for this lies in the technical limitations of the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with a minimal change of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas counted in thousands of square centimeters. Because of these facts, the two shock-absorbing systems are controlled using completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided by predicting the entire impact mitigation process. Similarities of both approaches, including the applied models, algorithms and equipment, are discussed. 
The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.
Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber
Procedia PDF Downloads 90
735 Applying Miniaturized near Infrared Technology for Commingled and Microplastic Waste Analysis
Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero
Abstract:
Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension in the range of 1 µm to 1000 µm (1 mm), is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized industrially are called primary microplastics. Their presence from the highest peaks to the deepest explored points in the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a reliable analytical technique to sample, analyze, and quantify MPs is still under development and testing. Among characterization techniques, vibrational spectroscopic techniques are widely adopted in the field of polymers, and the ongoing miniaturization of these methods is on the way to revolutionizing the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometrics tools for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab were investigated. Based on the Resin Identification Code, 250 plastic samples were used for macroplastic analysis and to set up a library of polymers. Subsequently, MicroNIR spectra were analysed through the application of multivariate modelling. Principal Component Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA analysis, a supervised classification tool was applied in order to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built. 
For the microplastic analysis, the three most abundant polymers in plastic litter, PE, PP and PS, were mechanically fragmented in the laboratory to micron size. Blends of these three microplastics were prepared according to a designed ternary composition plot. After the exploratory PCA analysis, a quantitative Partial Least Squares Regression (PLSR) model allowed prediction of the percentage of microplastics in the mixtures. From a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points and used to predict the composition of the 21 unknown mixtures of the test set. The advantage of the consolidated NIR chemometric approach lies in the quick evaluation of whether a sample is macro- or microplastic, contaminated or not, coloured or not, with no sample pre-treatment. The technique can be applied to larger sample volumes and even allows on-site evaluation, thereby satisfying the need for a high-throughput strategy.
Keywords: chemometrics, microNIR, microplastics, urban plastic waste
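A minimal numeric sketch of the calibration/prediction workflow described above, using synthetic spectra and a rank-3 principal-component regression as a simple stand-in for the paper's PLSR. The Gaussian band positions, widths and noise level are invented for illustration and do not represent real PE, PP or PS spectra.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pure" NIR spectra for PE, PP and PS built from Gaussian bands
wl = np.linspace(900.0, 1700.0, 125)
def band(center, width):
    return np.exp(-((wl - center) / width) ** 2)
pure = np.array([band(1210, 40) + 0.5 * band(1420, 60),   # "PE"
                 band(1190, 35) + 0.7 * band(1390, 50),   # "PP"
                 band(1140, 30) + 0.6 * band(1680, 40)])  # "PS"

# Ternary design: each mixture spectrum is a composition-weighted sum + noise
comps = rng.dirichlet(np.ones(3), size=63)   # 63 compositions, rows sum to 1
X = comps @ pure + rng.normal(0.0, 0.005, (63, wl.size))

# Calibration/test split mirroring the study (42 calibration, 21 test points)
Xc, Xt, Yc, Yt = X[:42], X[42:], comps[:42], comps[42:]

# Mean-centered rank-3 principal-component regression (PLSR stand-in)
Xm, Ym = Xc.mean(0), Yc.mean(0)
_, _, Vt = np.linalg.svd(Xc - Xm, full_matrices=False)
scores_c = (Xc - Xm) @ Vt[:3].T
B, *_ = np.linalg.lstsq(scores_c, Yc - Ym, rcond=None)
Y_pred = (Xt - Xm) @ Vt[:3].T @ B + Ym

rmsep = float(np.sqrt(np.mean((Y_pred - Yt) ** 2)))
```

Because the mixture spectra are linear in the composition, three latent components recover the ternary fractions with a prediction error close to the injected noise level.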
Procedia PDF Downloads 165
734 The Association of Vitamin B12 with Body Weight- and Fat-Based Indices in Childhood Obesity
Authors: Mustafa Metin Donma, Orkide Donma
Abstract:
Vitamin deficiencies are common in obese individuals. In particular, the status of vitamin B12 and its association with vitamin B9 (folate) and vitamin D have recently come under investigation. Vitamin B12 is closely related to many vital processes in the body. In clinical studies, its involvement in fat metabolism draws attention from the obesity point of view. Obesity, in its advanced stages and in combination with metabolic syndrome (MetS) findings, may be a life-threatening health problem. Pediatric obesity is particularly important because it may be a predictor of severe chronic diseases during the adulthood period of the child. Due to its role in fat metabolism, vitamin B12 deficiency may disrupt lipid and energy metabolism pathways in the body. The association of low B12 levels with the degree of obesity is an interesting topic to investigate, and obesity indices may be helpful at this point. Both weight- and fat-based indices are available. Body mass index (BMI) belongs to the first group; fat mass index (FMI), fat-free mass index (FFMI) and the diagnostic obesity notation model assessment-II (D2I) index lie in the latter group. The aim of this study is to clarify possible associations between vitamin B12 status and obesity indices in the pediatric population. The study comprises a total of one hundred and twenty-two children. Thirty-two children were included in the normal body mass index (N-BMI) group. Forty-six and forty-four children constitute the groups of morbidly obese children without MetS and with MetS, respectively. Informed consent forms and the approval of the institutional ethics committee were obtained. Tables prepared by the World Health Organization for obesity classification were used. Metabolic syndrome criteria were defined. Anthropometric and blood pressure measurements were taken. BMI, FMI, FFMI and D2I were calculated. Routine laboratory tests were performed. Vitamin B9, B12 and D concentrations were determined. 
Statistical evaluation of the study data was performed. Vitamin B9 and vitamin D levels were reduced in the MetS group compared to children with N-BMI (p>0.05). Significantly lower vitamin B12 concentrations were observed in the MetS group (p<0.01). Blood pressure and triglyceride levels were significantly increased in the morbidly obese children, while significantly decreased concentrations of high-density lipoprotein cholesterol were observed. All of the obesity indices and the insulin resistance index exhibit an increasing tendency with the severity of obesity. Inverse correlations were calculated between vitamin D and the insulin resistance index, as well as between vitamin B12 and D2I, in the morbidly obese groups. In conclusion, a fat-based index, D2I, was the most prominent body index, showing a strong correlation with vitamin B12 concentrations in the late stage of obesity in children. The negative correlation between these two parameters was a confirmative finding related to the association between vitamin B12 and the degree of obesity.
Keywords: body mass index, children, D2I index, fat mass index, obesity
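For reference, the weight- and fat-based indices named above follow directly from their definitions (BMI from total weight; FMI and FFMI from the fat/fat-free partition of that weight). The D2I index is a study-specific model and is not reproduced here. A minimal sketch with made-up values:

```python
def body_indices(weight_kg, height_m, fat_mass_kg):
    """Weight- and fat-based indices from their definitions: BMI uses total
    body weight, while FMI and FFMI partition it into fat and fat-free mass."""
    bmi = weight_kg / height_m ** 2
    fmi = fat_mass_kg / height_m ** 2
    ffmi = (weight_kg - fat_mass_kg) / height_m ** 2
    return {"BMI": bmi, "FMI": fmi, "FFMI": ffmi}

# Hypothetical example child: 70 kg, 1.60 m tall, 21 kg fat mass
indices = body_indices(70.0, 1.60, 21.0)
```

Note that FMI and FFMI sum to BMI by construction, which is why the fat-based indices can separate adiposity from lean mass where BMI alone cannot.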
Procedia PDF Downloads 206
733 Ficus Microcarpa Fruit Derived Iron Oxide Nanomaterials and Its Anti-bacterial, Antioxidant and Anticancer Efficacy
Authors: Fuad Abdullah Alatawi
Abstract:
Microbial infection-based diseases are a significant public health issue around the world, mainly when antibiotic-resistant bacterial strains evolve. In this research, we explored the antibacterial and anticancer potency of iron oxide (Fe₂O₃) nanoparticles prepared from F. microcarpa fruit extract. The chemical composition of the F. microcarpa fruit extract, used as a reducing and capping agent for nanoparticle synthesis, was examined by GC-MS/MS analysis. The prepared nanoparticles were then characterized by various biophysical techniques, including X-ray powder diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR), UV-Vis spectroscopy, Transmission Electron Microscopy (TEM), Energy Dispersive X-ray Spectroscopy (EDAX), and Dynamic Light Scattering (DLS). The antioxidant capacity of the fruit extract was determined through 2,2-diphenyl-1-picrylhydrazyl (DPPH), 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), Ferric Reducing Antioxidant Power (FRAP), and Superoxide Dismutase (SOD) assays. Furthermore, the cytotoxic activity of the Fe₂O₃ NPs was determined using the 3-(4,5-dimethylthiazolyl-2)-2,5-diphenyltetrazolium bromide (MTT) test on MCF-7 cells. In the antibacterial assay, lethal doses of the Fe₂O₃ NPs effectively inhibited the growth of gram-negative and gram-positive bacteria; surface damage, ROS production, and protein leakage constitute the antibacterial mechanisms of the Fe₂O₃ NPs. Concerning antioxidant activity, the fruit extract of F. microcarpa had strong antioxidant properties, confirmed by the DPPH, ABTS, FRAP, and SOD assays. In addition, the F. microcarpa-derived iron oxide nanomaterials greatly reduced the viability of MCF-7 cells. The GC-MS/MS analysis revealed the presence of 25 main bioactive compounds in the F. microcarpa extract. Overall, the findings of this research revealed that F. 
microcarpa-derived Fe₂O₃ nanoparticles could be employed as an alternative therapeutic agent to treat microbial infections and breast cancer in humans.
Keywords: Ficus microcarpa, iron oxide, antibacterial activity, cytotoxicity
Procedia PDF Downloads 121
732 Seismic Isolation of Existing Masonry Buildings: Recent Case Studies in Italy
Authors: Stefano Barone
Abstract:
Seismic retrofit of buildings through base isolation represents a consolidated protection strategy against earthquakes. It consists of decoupling the motion of the structure from the ground motion by introducing anti-seismic devices at the base of the building, characterized by high horizontal flexibility and medium/high dissipative capacity. This protects the structural elements and limits damage to non-structural ones, so that full functionality is guaranteed after an earthquake. Base isolation is applied extensively to both new and existing buildings. For the latter, it usually requires no interruption of the structure's use and no evacuation of occupants, a special advantage for strategic buildings such as schools, hospitals, and military buildings. This paper describes the application of seismic isolation to three existing masonry buildings in Italy: Villa “La Maddalena” in Macerata (Marche region) and the “Giacomo Matteotti” and “Plinio Il Giovane” school buildings in Perugia (Umbria region). The seismic hazard of the sites is characterized by a Peak Ground Acceleration (PGA) between 0.213g and 0.287g for the Life Safety Limit State and between 0.271g and 0.359g for the Collapse Limit State. All the buildings are isolated with a combination of free sliders of type TETRON® CD with a confined elastomeric disk and anti-seismic rubber isolators of type ISOSISM® HDRB, arranged to reduce the eccentricity between the center of mass and the center of stiffness and thus limit torsional effects during a seismic event. The isolation systems are designed to lengthen the original (fixed-base) period of vibration to at least three times its value and to guarantee medium/high energy dissipation capacity (equivalent viscous damping between 12.5% and 16%). This allows the structures to resist 100% of the seismic design action. This article shows the performance of the supplied anti-seismic devices, with particular attention to the experimental dynamic response. 
Finally, a special focus is given to the main site activities required to isolate a masonry building.
Keywords: retrofit, masonry buildings, seismic isolation, energy dissipation, anti-seismic devices
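The period-lengthening criterion mentioned in this abstract can be sketched from the basic single-degree-of-freedom relation T = 2π√(M/K). The building mass and fixed-base period below are illustrative assumptions, not values from the case studies.

```python
import math

def isolated_period(mass_kg, k_eff):
    """Fundamental period of a rigid mass on isolators: T = 2*pi*sqrt(M/K)."""
    return 2.0 * math.pi * math.sqrt(mass_kg / k_eff)

def stiffness_for_target_period(mass_kg, t_target):
    """Invert T = 2*pi*sqrt(M/K) to size the total effective isolator stiffness."""
    return mass_kg * (2.0 * math.pi / t_target) ** 2

# Hypothetical masonry building: 1500 t mass, 0.4 s fixed-base period,
# target isolated period set to three times the original value
mass = 1.5e6              # kg
t_target = 3.0 * 0.4      # s
k_required = stiffness_for_target_period(mass, t_target)
```

Tripling the period requires reducing the effective lateral stiffness by roughly a factor of nine, which is why isolators with high horizontal flexibility are needed.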
Procedia PDF Downloads 72
731 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing CKD complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key laboratory marker used to predict CKD, and such models enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. 
The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
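A toy sketch of the study's core idea, training a random forest on non-laboratory features alone versus with a laboratory marker (creatinine). The cohort, the risk model, and every coefficient below are entirely synthetic, invented only to show the with/without-laboratory comparison.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000

# Synthetic cohort (illustrative only): non-laboratory and laboratory features
age = rng.uniform(20, 80, n)
bmi = rng.normal(25, 4, n)
waist = bmi * 2.5 + rng.normal(0, 3, n)
creatinine = rng.normal(0.9, 0.2, n) + 0.004 * (age - 40)

# Hypothetical risk model: CKD probability rises with age, BMI and creatinine
logit = -8 + 0.05 * age + 0.08 * bmi + 2.5 * creatinine
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_nonlab = np.column_stack([age, bmi, waist])             # no creatinine test
X_full = np.column_stack([age, bmi, waist, creatinine])   # with laboratory data

aucs = {}
for name, X in [("non-laboratory", X_nonlab), ("with laboratory", X_full)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    aucs[name] = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
```

Because age and BMI carry much of the risk signal (and correlate with creatinine), the non-laboratory model still discriminates well, mirroring the abstract's claim that screening remains feasible without creatinine testing.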
Procedia PDF Downloads 105
730 Architectural Design Strategies and Visual Perception of Contemporary Spatial Design
Authors: Nora Geczy
Abstract:
In today’s architectural practice, human-centered designs that help spatial orientation, safe space usage and an appropriate spatial sequence of actions are gaining importance in the design of public, educational, healthcare and cultural spaces. Related to the methodology of designing public buildings, several scientific experiments in spatial recognition, spatial analysis and spatial psychology, concerning the components of space that produce mental and physiological effects, have been ongoing since 2013 at the Department of Architectural Design and the Interdisciplinary Student Workshop (IDM) at Széchenyi István University, Győr. Our research focuses on defining preventive, anticipatory spatial design, the architectural tools of spatial comfort in public buildings, and their practical usability. In experiments applying eye-tracking cameras, we studied the way public spaces are used, concentrating especially on the characteristics of spatial behaviour, orientation, recognition, the sequence of actions, and space usage. Along with the role of mental maps, human perception, and interaction problems in public spaces (at railway stations, galleries, and educational institutions), we analyzed the spatial situations influencing psychological and ergonomic factors. We also analyzed the eye movements of the experimental subjects in dynamic situations, while moving through spaces and using stairs and corridors. We monitored both the consequences and the distorting effects of right-eye ocular dominance on spatial orientation, and we analyzed gender-based differences in orientation between women and men, stress-inducing spaces, spaces affecting concentration, and spatial situations influencing territorial behaviour. Based on these observations, we collected the components of public interior space design which, according to our theory, contribute to the optimal usability of public spaces. 
We summarized our research in a set of ten design criteria. Our further goals are to test, discuss, refine, and practically apply the design principles needed to optimize orientation and space usage.
Keywords: architecture, eye-tracking, human-centered spatial design, public interior spaces, visual perception
Procedia PDF Downloads 111
729 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing, and such electrodes can be employed extensively in analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. A biosensor is a device that detects the biological and chemical reactions generated by a biological sample: it carries out biological detection via a linked transducer and converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that dictate the quality and performance of a biosensor. In this research, an experimental study of the laser scribing technique for processing graphene oxide (GO) inside a vacuum chamber is presented. A GO solvent was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate reduced graphene oxide (rGO). The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. The morphological microstructures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode, made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum in a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. 
The parameters assessed include the layer thickness and the controlled environment. The results show high accuracy and repeatability while achieving low-cost production.
Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy
Procedia PDF Downloads 122
728 Soil Liquefaction Hazard Evaluation for Infrastructure in the New Bejaia Quai, Algeria
Authors: Mohamed Khiatine, Amal Medjnoun, Ramdane Bahar
Abstract:
Northern Algeria is a highly seismic zone, as evidenced by its historical seismicity; during the past two decades, it has experienced several moderate to strong earthquakes. Therefore, geotechnical engineering problems that involve dynamic loading of soils and soil-structure interaction require liquefaction studies in the presence of saturated loose sand formations. Bejaia city, located northeast of Algiers, Algeria, lies partly on an alluvial plain covering an area of approximately 750 hectares. According to the Algerian seismic code, it is classified as a moderate seismicity zone. In the past, this area saw no urban development because of the various hazards identified by hydraulic and geotechnical studies conducted in the region: the low bearing capacity of the soil, its high compressibility, and the risks of liquefaction and flooding are among these hazards and constrain urbanization. In this area, several structures founded on shallow foundations have suffered damage, so the soils need treatment to reduce the risk. Many field and laboratory investigations, core drilling, pressuremeter tests, standard penetration tests (SPT), cone penetration tests (CPT) and geophysical downhole tests, were performed at different locations in the area. The major part of the area consists of silty fine sand, sometimes heterogeneous, which has not yet reached a sufficient degree of consolidation. The groundwater depth varies between 1.5 and 4 m. These investigations show that the liquefaction phenomenon is one of the critical problems for geotechnical engineers and one of the obstacles encountered in the design phase of projects. This paper presents an analysis to evaluate the liquefaction potential using empirical methods based on the Standard Penetration Test (SPT), the Cone Penetration Test (CPT) and shear wave velocity, together with numerical analysis. 
These liquefaction assessment procedures indicate that liquefaction can occur to considerable depths in the silty sand of the harbor zone of Bejaia.
Keywords: earthquake, modeling, liquefaction potential, laboratory investigations
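As a hedged sketch of the SPT-based simplified procedure mentioned above (the Seed-Idriss cyclic stress ratio with a Youd et al. 2001-style clean-sand CRR curve), the following computes a factor of safety against liquefaction. The stress values, depth and blow counts are illustrative only, not taken from the Bejaia site data.

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Seed-Idriss simplified CSR with the shallow-depth stress-reduction factor rd."""
    rd = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def crr_from_spt(n1_60cs):
    """Clean-sand cyclic resistance ratio (M=7.5) from the corrected SPT blow
    count, in the Youd et al. (2001) form; valid only for 0 < (N1)60cs < 30."""
    n = n1_60cs
    if not 0 < n < 30:
        raise ValueError("correlation valid for 0 < (N1)60cs < 30")
    return 1 / (34 - n) + n / 135 + 50 / (10 * n + 45) ** 2 - 1 / 200

def factor_of_safety(a_max_g, sigma_v, sigma_v_eff, depth_m, n1_60cs, msf=1.0):
    """FS = MSF * CRR / CSR; FS < 1 indicates liquefaction is expected."""
    csr = cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m)
    return msf * crr_from_spt(n1_60cs) / csr

# Hypothetical layer at 5 m depth: total stress 90 kPa, effective stress 60 kPa
fs_loose = factor_of_safety(0.25, 90.0, 60.0, 5.0, 10.0)   # loose silty sand
fs_dense = factor_of_safety(0.25, 90.0, 60.0, 5.0, 25.0)   # denser sand
```

For the same shaking level, the loose layer (low blow count) yields FS below 1, while the denser layer does not, which is the pattern such procedures reveal in loose saturated silty sands like those of the harbor zone.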
Procedia PDF Downloads 353
727 Structural and Functional Comparison of Untagged and Tagged EmrE Protein
Authors: S. Junaid S. Qazi, Denice C. Bay, Raymond Chew, Raymond J. Turner
Abstract:
EmrE, a member of the small multidrug resistance protein family in bacteria, is considered the archetypical member of its family. It confers host resistance to a wide variety of quaternary cation compounds (QCCs), driven by the proton motive force. Purification yield is generally a challenge for membrane proteins because of difficulties in their expression, isolation and solubilization, and EmrE is extremely hydrophobic, which makes purification particularly challenging. We purified EmrE protein using two different approaches: organic solvent membrane extraction and hexahistidine (his6)-tagged Ni-affinity chromatography. We characterized the differences in ligand affinity between the untagged and his6-tagged EmrE proteins in similar membrane-mimetic environments using biophysical experimental techniques. The purified proteins were solubilized in a buffer containing n-dodecyl-β-D-maltopyranoside (DDM), and the conformations of the proteins were explored in the presence of four QCCs: methyl viologen (MV), ethidium bromide (EB), cetylpyridinium chloride (CTP) and tetraphenyl phosphonium (TPP). SDS-Tricine PAGE and dynamic light scattering (DLS) analysis revealed that the addition of QCCs did not induce higher multimeric forms of either protein at any of the QCC:EmrE molar ratios examined under the solubilization conditions applied. QCC binding curves obtained from the Trp fluorescence quenching spectra gave the values of the dissociation constant (Kd) and the maximum specific one-site binding (Bmax). Lower Bmax values for QCC binding to his6-tagged EmrE show that some binding sites remained unoccupied. This lower saturation suggests that the his6-tagged version adopts a conformation that prevents saturated binding. 
Our data demonstrate that tagging an integral membrane protein can significantly influence its conformation and ligand-binding behaviour.
Keywords: small multidrug resistance (SMR) protein, EmrE, integral membrane protein folding, quaternary ammonium compounds (QAC), quaternary cation compounds (QCC), nickel affinity chromatography, hexahistidine (His6) tag
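The one-site binding model used to extract Kd and Bmax from quenching curves like those above can be sketched as follows. The titration values are synthetic, and the double-reciprocal linearization is just one simple way to estimate the parameters (nonlinear fitting is more robust with noisy data).

```python
import numpy as np

def one_site_binding(conc, bmax, kd):
    """Specific one-site binding isotherm: B = Bmax * [L] / (Kd + [L])."""
    return bmax * conc / (kd + conc)

# Synthetic titration (illustrative): true Bmax = 1.0, true Kd = 25 (µM)
conc = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0, 250.0])
bound = one_site_binding(conc, 1.0, 25.0)

# Double-reciprocal linearization: 1/B = (Kd/Bmax) * (1/[L]) + 1/Bmax
slope, intercept = np.polyfit(1.0 / conc, 1.0 / bound, 1)
bmax_est = 1.0 / intercept
kd_est = slope * bmax_est
```

On this noise-free synthetic curve the linearized fit recovers both parameters exactly; a reduced fitted Bmax relative to the untagged protein would correspond to the unoccupied binding sites the abstract describes.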
Procedia PDF Downloads 379
726 Effects of Robot-Assisted Hand Training on Upper Extremity Performance in Patients with Stroke: A Randomized Crossover Controlled, Assessor-Blinded Study
Authors: Hsin-Chieh Lee, Fen-Ling Kuo, Jui-Chi Lin
Abstract:
Background: Upper extremity functional impairment after stroke includes hemiplegia, synergy movement, muscle hypertonicity, and somatosensory impairment, which result in inefficient and inaccurate movement. Robot-assisted rehabilitation is an intensive training approach that is effective for sensorimotor and hand function recovery; however, most such systems focus on the proximal rather than the distal part of the upper limb. The device used in our study was the Gloreha Sinfonia, which focuses on the distal part of the upper limb and uses a dynamic support system to facilitate whole-limb function. The objective of this study was to investigate the effects of robot-assisted therapy (RT) with the Gloreha device on sensorimotor function and ADLs in patients with stroke. Method: Patients with stroke (N=25) completed either an AB or a BA sequence (A = 12 RT sessions, B = 12 conventional therapy (CT) sessions) over 6 weeks (60 min per session, twice a week), with a 1-month washout period between phases. Patient performance was assessed by a blinded assessor at 4 time points (pretest 1, posttest 1, pretest 2, posttest 2), including the Fugl-Meyer Assessment-upper extremity (FMA-UE), the box and block test, electromyography of the extensor digitorum communis (EDC) and brachioradialis, and a grip dynamometer for motor evaluation; the Semmes-Weinstein hand monofilament test and the Revised Nottingham Sensory Assessment for sensory evaluation; and the Modified Barthel Index (MBI) for assessing ADL ability. Result: The RT group significantly improved FMA-UE proximal scores (p = 0.038), FMA-UE total scores (p = 0.046), and MBI (p = 0.030). The EDC exhibited higher efficiency during the small-block grasping task in the RT group than in the CT group (p = 0.050). 
Conclusions: RT with the Gloreha device might lead to beneficial effects on arm motor function, ADL ability, and EDC muscle recruitment efficacy in patients with subacute to chronic stroke.
Keywords: activities of daily living, hand function, robotic rehabilitation, stroke
Procedia PDF Downloads 118
725 SynKit: An Event-Driven and Scalable Microservices-Based Kitting System
Authors: Bruno Nascimento, Cristina Wanzeller, Jorge Silva, João A. Dias, André Barbosa, José Ribeiro
Abstract:
The increasing complexity of logistics operations stems from evolving business needs, such as the shift from mass production to mass customization, which demands greater efficiency and flexibility. In response, Industry 4.0 and 5.0 technologies provide improved solutions to enhance operational agility and better meet market demands. The management of kitting zones, combined with the use of Autonomous Mobile Robots (AMRs), faces challenges related to coordination, resource optimization, and rapid response to fluctuations in customer demand. Additionally, implementing lean manufacturing practices in this context must be carefully orchestrated by intelligent systems and human operators to maximize efficiency without sacrificing the agility required in an advanced production environment. This paper proposes and implements a microservices-based architecture integrating principles from Industry 4.0 and 5.0 with lean manufacturing practices. The architecture enhances communication and coordination between autonomous vehicles and kitting management systems, allowing more efficient resource utilization and increased scalability. The proposed architecture focuses on the modularity and flexibility of operations, enabling seamless adaptation to changing demands and the efficient allocation of resources in real time. This approach is expected to significantly improve the efficiency and scalability of logistics operations by reducing waste and optimizing resource use while improving responsiveness to demand changes. The implementation of this architecture provides a robust foundation for the continuous evolution of kitting management and process optimization. It is designed to adapt to dynamic environments marked by rapid shifts in production demands and real-time decision-making. 
It also ensures seamless integration with automated systems, aligning with Industry 4.0 and 5.0 needs while reinforcing lean manufacturing principles.
Keywords: microservices, event-driven, kitting, AMR, lean manufacturing, industry 4.0, industry 5.0
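A minimal sketch of the event-driven coordination such an architecture relies on, using an in-process publish/subscribe bus. The topic names and the AMR-dispatch handler are hypothetical, invented for illustration, and are not part of SynKit's actual API.

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process publish/subscribe bus sketching event-driven
    coordination between kitting services and AMR dispatch."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        # deliver the event to every service subscribed to this topic
        for handler in self._handlers[topic]:
            handler(payload)

bus = EventBus()
dispatched = []

# A hypothetical AMR-dispatch microservice reacts to kit requests
bus.subscribe("kit.requested", lambda evt: dispatched.append(
    {"kit_id": evt["kit_id"], "amr": "AMR-01"}))

# A hypothetical kitting-zone service publishes a request
bus.publish("kit.requested", {"kit_id": "K-42", "station": "assembly-3"})
```

In a production system the bus would be a broker (e.g. a message queue) rather than in-process, but the decoupling is the same: publishers and subscribers only share topic names, which is what makes the services independently deployable and scalable.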
Procedia PDF Downloads 24
724 Applications and Development of a Plug Load Management System That Automatically Identifies the Type and Location of Connected Devices
Authors: Amy Lebar, Kim L. Trenbath, Bennett Doherty, William Livingood
Abstract:
Plug and process loads (PPLs) account for 47% of U.S. commercial building energy use. There is huge potential to reduce whole-building consumption by targeting PPLs for energy savings measures or implementing some form of plug load management (PLM). Despite this potential, there has yet to be a widely adopted commercial PLM technology. This paper describes the Automatic Type and Location Identification System (ATLIS), a PLM system framework with automatic and dynamic load detection (ADLD). ADLD gives PLM systems the ability to automatically identify devices as they are plugged into the outlets of a building. The ATLIS framework takes advantage of smart, connected devices to identify device locations in a building, meter and control their power, and communicate this information to a central database. ATLIS includes five primary capabilities: location identification, communication, control, energy metering, and data storage. A laboratory proof of concept (PoC) demonstrated all but the data storage capability, and the demonstrated capabilities were validated using an office building scenario. The PoC can identify when a device is plugged into an outlet and the location of the device in the building. When a device is moved, the PoC’s dashboard and database are automatically updated with the new location. The PoC implements controls to devices from the system dashboard so that devices maintain correct schedules regardless of where they are plugged in within a building. ATLIS’s primary technology application is improved PLM, but other applications include asset management, energy audits, and interoperability for grid-interactive efficient buildings. A system like ATLIS could also be used to direct power to critical devices, such as ventilators, during a brownout or blackout.
Such a framework is an opportunity to make PLM more widespread and reduce the amount of energy consumed by PPLs in current and future commercial buildings. Keywords: commercial buildings, grid-interactive efficient buildings (GEB), miscellaneous electric loads (MELs), plug loads, plug load management (PLM)
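The location-identification behavior described above, where the central database automatically tracks which outlet (and therefore which room) a device is plugged into, can be sketched as a small registry. The class name, outlet IDs, and room names here are illustrative assumptions; the actual ATLIS data model is not specified in the abstract.

```python
class PlugLoadRegistry:
    """Tracks which device is plugged into which outlet, and where that outlet is."""

    def __init__(self, outlet_locations):
        # Static map installed at commissioning time: outlet_id -> room/zone.
        self.outlet_locations = outlet_locations
        # Dynamic map maintained from plug-in events: device_id -> outlet_id.
        self.device_outlet = {}

    def device_plugged_in(self, device_id, outlet_id):
        # A plug-in event overwrites any previous outlet, so moving a device
        # automatically updates its location, as in the PoC's dashboard.
        self.device_outlet[device_id] = outlet_id

    def device_unplugged(self, device_id):
        self.device_outlet.pop(device_id, None)

    def locate(self, device_id):
        outlet = self.device_outlet.get(device_id)
        return self.outlet_locations.get(outlet)

registry = PlugLoadRegistry({"out-101": "office-1", "out-202": "office-2"})
registry.device_plugged_in("monitor-7", "out-101")
registry.device_plugged_in("monitor-7", "out-202")  # device moved to another room
print(registry.locate("monitor-7"))  # office-2
```

Layering a schedule lookup on `locate()` would give the abstract's control behavior: a device keeps its correct schedule regardless of where in the building it is plugged in.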
Procedia PDF Downloads 132
723 Site Specific Nutrient Management Need in India Now
Authors: A. H. Nanher, N. P. Singh, Shashidhar Yadav, Sachin Tyagi
Abstract:
Agricultural production is the outcome of a complex interaction of seed, soil, water, and agro-chemicals (including fertilizers). Judicious management of all of these inputs is therefore essential for the sustainability of such a complex system. Precision agriculture gives farmers the ability to use crop inputs more effectively, including fertilizers, pesticides, tillage, and irrigation water. More effective use of inputs means greater crop yield and/or quality without polluting the environment. The focus on enhancing productivity during the Green Revolution, coupled with a disregard for proper management of inputs and their ecological impacts, resulted in environmental degradation. This study evaluates a new approach to site-specific nutrient management (SSNM). Large variation in initial soil fertility characteristics and in the indigenous supply of N, P, and K was observed among fields. Field- and season-specific NPK applications were calculated by accounting for the indigenous nutrient supply, yield targets, and nutrient demand as a function of the interactions between N, P, and K. Nitrogen applications were fine-tuned based on season-specific rules and field-specific monitoring of crop N status. The performance of SSNM did not differ significantly between high-yielding and low-yielding climatic seasons, but improved over time, with larger benefits observed in the second year. Future strategies for nutrient management in intensive rice systems must become more site-specific and dynamic to manage spatially and temporally variable resources, based on a quantitative understanding of the congruence between nutrient supply and crop demand. The SSNM concept has demonstrated promising agronomic and economic potential. It can be used for managing plant nutrients at any scale, ranging from a general recommendation for homogeneous management of a larger domain to true management of between-field variability.
Assessment of pest profiles in farmers’ fertilizer practice (FFP) and SSNM plots suggests that SSNM may also reduce pest incidence, particularly diseases that are often associated with excessive N use or unbalanced plant nutrition. Keywords: nutrient, pesticide, crop, yield
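The field-specific calculation described above, setting the applied rate from the gap between crop demand at the yield target and the indigenous nutrient supply, can be sketched as a one-line balance. The numeric values (uptake per tonne of grain, indigenous supply, recovery efficiency) are illustrative assumptions for rice, not figures reported in this study.

```python
def ssnm_rate(yield_target_t_ha, uptake_kg_per_t, indigenous_supply_kg_ha,
              recovery_efficiency):
    """Fertilizer rate (kg/ha) = (crop demand - indigenous supply) / recovery.

    Demand is the nutrient uptake needed to reach the yield target; the
    indigenous supply is what the soil delivers without fertilizer; recovery
    efficiency is the fraction of applied fertilizer the crop actually takes up.
    """
    demand = yield_target_t_ha * uptake_kg_per_t
    deficit = max(demand - indigenous_supply_kg_ha, 0.0)
    return deficit / recovery_efficiency

# Illustrative (assumed) values: 6 t/ha yield target, ~15 kg N uptake per tonne
# of grain, 40 kg/ha indigenous N supply, 0.45 fertilizer-N recovery efficiency.
n_rate = ssnm_rate(6.0, 15.0, 40.0, 0.45)
print(round(n_rate, 1))  # 111.1
```

Because the indigenous supply term is measured per field and per season, the same formula yields different recommendations across fields, which is exactly the between-field variability the SSNM concept targets.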
Procedia PDF Downloads 430
722 Solid Waste and Its Impact on the Human Health
Authors: Waseem Akram, Hafiz Azhar Ali Khan
Abstract:
Unplanned urbanization, together with the change from a simple to a more technologically advanced lifestyle and the flow of rural masses to urban areas, has played a vital role in piling loads of solid waste into our environment. Cities and towns have expanded beyond their boundaries, and uncontrolled population expansion has added to the overall environmental burden. Public indifference, reflected in people’s non-responsive behavior toward waste disposal, remains one of the biggest contributors to the problem. Every day, huge amounts of solid waste are thrown in the streets, on the roads, in parks, and in all those places frequently visited by human beings. In many countries of the world, this behavior has led to serious health concerns and environmental issues. Over 80% of the products sold in markets are packed in plastic bags. None of these bags are later recycled; instead, they become a permanent environmental concern as they fly about, choke drainage lines, release toxic gases when burnt, or accumulate in heaps of dumps. Lack of classification of the daily waste generated from houses and other places leads to clogging of sewerage lines and the formation of ponding areas, which ultimately favor vector-borne diseases and can sometimes become a cause of transmission of polio virus. Solid waste heaps were examined at different places in the cities. On visual assessment, the waste was classified into plastic bags, papers, broken plastic pots, clay pots, steel boxes, wrappers, etc. Solid waste dumping sites in the cities, and waste thrown outside of trash containers, usually contained wrappers, plastic bags, and unconsumed food products. Insect populations seen at these sites included house flies, bugs, cockroaches, and mosquito larvae breeding in water-filled wrappers, containers, or plastic bags. The populations of mosquitoes, cockroaches, and houseflies were relatively high at dumping sites close to human populations.
These populations have been associated with cases of dengue, malaria, dysentery, gastroenteritis, and skin allergies during the monsoon and summer seasons. Thus, dumping huge amounts of solid waste in and near residential areas results in serious environmental concerns, circulating bad smells, and health-related issues. In some places, the same waste is burnt to get rid of mosquitoes through smoke, which ultimately releases toxic material into the atmosphere. Therefore, a proper environmental strategy is needed to minimize the environmental burden, promote the concept of recycled products, and thus reduce the disease burden. Keywords: solid waste accumulation, disease burden, mosquitoes, vector borne diseases
Procedia PDF Downloads 278
721 Dialysis Access Surgery for Patients in Renal Failure: A 10-Year Institutional Experience
Authors: Daniel Thompson, Muhammad Peerbux, Sophie Cerutti, Hansraj Bookun
Abstract:
Introduction: Dialysis access is a key component of the care of patients with end-stage renal failure. In our institution, a combined service of vascular surgeons and nephrologists is responsible for the creation and maintenance of arteriovenous fistulas (AVF), Tenckhoff catheters, and Hickman/Permcath lines. This poster investigates the last 10 years of dialysis access surgery conducted at St. Vincent’s Hospital Melbourne. Method: A cross-sectional retrospective analysis was conducted of patients of St. Vincent’s Hospital Melbourne (Victoria, Australia), utilising data collected through the Australasian Vascular Audit (Australian and New Zealand Society for Vascular Surgery). Descriptive demographic analysis was carried out, as well as analysis of operation type, length of hospital stay, postoperative deaths, and need for reoperation. Results: 2085 patients with renal failure were operated on between 2011 and 2020. 1315 were male (63.1%) and 770 were female (36.9%). The mean age was 58 (SD 13.8). 92% of patients scored three or greater on the American Society of Anesthesiologists classification system. Almost half had a history of ischaemic heart disease (48.4%), more than half had a history of diabetes (64%), and a majority had hypertension (88.4%). 1784 patients had a creatinine over 150 mmol/L (85.6%); the rest were on dialysis (14.4%). The most common access procedure was AVF creation, with 474 autologous AVFs and 64 prosthetic AVFs. There were 263 Tenckhoff insertions. We performed 160 cadaveric renal transplants. The most common location for AVF formation was brachiocephalic (43.88%), followed by radiocephalic (36.7%) and brachiobasilic (16.67%). Fistulas that required re-intervention most commonly underwent angioplasty (n=163), followed by thrombectomy (n=136). There were 107 local fistula repairs. Average length of stay was 7.6 days (SD 12).
There were 106 unplanned returns to theatre, most commonly for fistula creation, Tenckhoff insertion, or Permcath removal (71.7%). There were 8 deaths in the immediate postoperative period. Discussion: Access to dialysis is vital for patients with end-stage kidney disease and requires a multidisciplinary approach from nephrologists, vascular surgeons, and allied health practitioners. Our service provides a variety of dialysis access methods, predominantly fistula creation and Tenckhoff insertion. Patients with renal failure are heavily comorbid, and prolonged hospital admission following surgery is a source of significant healthcare expenditure. AVFs require careful monitoring and maintenance for ongoing utility, and our data reflect the multitude of operations required to maintain usable access. The requirement for dialysis is growing worldwide, and our data demonstrate a local experience in access, with preferred methods, common complications, and the associated surgical interventions. Keywords: dialysis, fistula, nephrology, vascular surgery
Procedia PDF Downloads 113
720 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day. The growth of data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, regardless of the load on the system. But with the huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the amount of data in each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and to deal with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This leads to QoS (Quality of Service) improvement in real-time spatial Big Data, especially with a huge volume of stream data.
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms. Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query
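The core step of the Matching algorithm for vertical partitioning, ordering attributes so that those accessed by the same queries end up adjacent, can be sketched with Hamming distance over attribute usage vectors. This is a minimal illustration of the general technique, not the paper's VPA-RTSBD implementation; the attribute names and usage matrix are invented for the example.

```python
def hamming(u, v):
    """Hamming distance between two equal-length binary usage vectors."""
    return sum(a != b for a, b in zip(u, v))

def order_attributes(usage):
    """Greedy nearest-neighbour ordering of attributes by Hamming distance.

    `usage` maps each attribute to a binary vector: entry q is 1 if query q
    accesses the attribute. Attributes with similar vectors are placed next
    to each other, so cutting the sequence yields vertical fragments whose
    attributes tend to be read together.
    """
    attrs = list(usage)
    ordered = [attrs.pop(0)]
    while attrs:
        last = ordered[-1]
        nearest = min(attrs, key=lambda a: hamming(usage[last], usage[a]))
        attrs.remove(nearest)
        ordered.append(nearest)
    return ordered

# Four queries over a stream of moving objects: columns of this matrix are
# queries, rows are attributes (all names are illustrative assumptions).
usage = {
    "obj_id": (1, 1, 1, 1),  # every query reads the object identifier
    "lat":    (1, 1, 0, 0),  # position queries
    "lon":    (1, 1, 0, 0),
    "speed":  (0, 0, 1, 1),  # trajectory queries
    "ts":     (0, 0, 1, 1),
}
print(order_attributes(usage))  # ['obj_id', 'lat', 'lon', 'speed', 'ts']
```

Splitting the resulting sequence after `lon` gives one fragment serving the position queries and one serving the trajectory queries, so the two frequent query groups can execute against different partitions in parallel, which is the effect the cost model above aims to guarantee.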
Procedia PDF Downloads 157